Thank you for joining the session, and thank you for making it to the last session of the day. Those are always rough. It's been a great event today, and I've really been enjoying it. Once again, thank you for joining me. My name is Josh Carlisle. I'm a principal engineer at Zscaler, and on the community side I'm a Microsoft MVP. I'm also here with my colleague Neeta, a staff software engineer at Zscaler. We work together, and you're going to find out over the next 30 minutes what we've been working on. So to get started, a little bit of our agenda. We're going to give a platform overview of what our world looks like, because we want to give this some context so you understand a little more about our solution. We're also going to cover some of our buy-versus-build considerations, a bit of what our journey was like, and the decisions that led us to where we landed on things. Then we're going to jump into an actual demo. You'll see an abbreviated, easier-to-consume version of what our environment looks like that represents where we landed. And finally, we'll cover some best practices and lessons learned. Nothing's perfect, and there are always hiccups along the way, so we'll share a bit about what we learned and what worked well for us. To give you a little context before we get started: both Neeta and I work on a product at Zscaler called Zscaler Posture Control. It's part of a category of applications commonly referred to as CNAPP.
And for those folks who aren't up on all the latest Gartner acronyms, it stands for cloud native application protection platform, which is a bit of a mouthful. In essence, we help our customers secure their cloud native applications, and we do that in a very specific way. Like most SaaS platforms out here, we have customers, and this is where our use of "tenants" differs a little from what other talks may have meant by the term. We have tenants in the traditional definition, in that we have a platform we offer out to our developer and engineering teams, something we call subsystems internally. But we also have tenants that are actual customers. So the different spin in our use case is that our multi-tenant setup is actually multi-tenant of multi-tenants, and that adds a different flavor to some of the complexities and challenges we had along our journey. So we have our customers, and those customers, like most, are running cloud native applications today. They're living in Azure and AWS and GCP, and they're also using sets of tools to build those applications, deploy them, and, in the case of IaC, define them. All of those platforms provide telemetry to us. What our platform does is correlate and make sense of all that telemetry coming in to help customers understand when they have misconfigurations that can lead to security problems, or vulnerabilities, for example a package that a particular service is using that has an exploit. We correlate it all together and help customers make their applications more secure. So that's the context we live in.
We consume lots and lots of data as part of that. So what does this look like from a deployment standpoint, and where does this multi-tenant of multi-tenants start to tie in? Well, this was a greenfield project about 18 months ago, and that's both a blessing and a curse: we had a lot of freedom, but we had a lot of decisions to make along the way. And since we secure cloud-native applications, it makes sense that we should probably be a cloud-native application ourselves. So we have a lot of the traditional components you'd expect: we live in the cloud, we run on Kubernetes, we have microservices, messaging infrastructure that we host, database infrastructure, and a lot of other platform components, many of which are CNCF projects. We use OpenTelemetry, we use KEDA, and a lot of other CNCF projects to help stitch together our platform. From a traditional multi-tenant standpoint, we have multiple environments, and those environments are deployed to what we call stamps, or regions, around the world. What makes things more challenging is that we also have customer-specific microservices and customer-specific components. When you're in a multi-tenant environment from a customer perspective, you have challenges to solve: things like noisy neighbor problems, customer data isolation, customer isolation generally. And the ways you solve those problems add a lot of complexity to how you deploy applications. So we have tenant-specific microservices, microservices dedicated to individual tenants. It's worth mentioning that KEDA, the CNCF project I mentioned a minute ago, allows us to do that in a cost-effective way: when nothing's running, we don't pay for it. It's a scale-to-zero, slightly serverless kind of model.
But we also have things like customer-specific topics and customer-specific databases, and we have to be able to manage those. So when we started figuring out how to handle this, remember, we were greenfield here, we started making some decisions. We asked ourselves: do we want to build our own? Are we that special? Everybody thinks they're special; every product's unique. Do we want to build this ourselves? Do we want to use a traditional deployment platform, things like GitHub Actions or Azure DevOps? Or do we want to embrace the GitOps side, because GitOps and cloud-native go hand in hand? That was definitely on the table. But we had some strong considerations. We needed to support our special multi-tenancy use cases, that tenants-of-tenants scenario. We were also wary of getting in too early on new platforms, because we wanted something with some maturity and a long life ahead of it. And it was very important that our engineers were comfortable working with the platform, because the key consideration across the board was really our velocity. If we had to introduce something brand new to our engineers, it would slow down our velocity and we wouldn't meet our deployment goals. And obviously, cost was a determining factor. So we had some painful first steps that we learned from. Since velocity was really our number one goal, we started out with basically "do whatever you want as long as it's fast," which obviously has its pluses and minuses.
And since we have subsystems, which are basically teams, we said: do what you need to do to deploy what you're responsible for, in the way you feel confident doing it. And that worked for a while. It definitely got us over the hump; we got a first release of a very large product out really quickly, in about eight months. But as we went GA, we started to identify that this wasn't really that great. There were pain points. It was fragile. It was expensive. So we started thinking about how to overcome those, and we were still a little bit in the mindset of: do we build this ourselves, or do we lean into another solution? We looked at things like custom resource definitions and defining our own resources, and that led us to look more closely, again, at GitOps. And a lot had changed in that year; a lot of things had matured. So as we thought about what was next, we very quickly aligned on GitOps. It had the maturity we were looking for. Our teams were comfortable with it, because they were already using Git and already doing PRs; it was a metaphor they knew well. Key to us, and you'll learn more as Neeta jumps in and shows the details of our multi-tenant approach, was being able to elegantly handle that multi-tenant architecture, that tenant-of-tenants scenario. We looked at various options: we looked at Argo, we looked at Flux. And we found fairly quickly that Flux really shined in that multi-tenant scenario for us, and it did so in a way that we felt was relatively simple and simple to understand. And then, of course, the Flux community is fantastic. When we had some hesitations like, oh, we're getting into something really new, right?
We started reaching out on the Slack channel and a few other places, and we got really great vibes from the community: a lot of support, very responsive questions and answers, and we felt really comfortable about getting involved with something relatively new. So with that context in mind about what our journey looked like and some of our bumps along the way, I'm going to pass off to Neeta, and she's going to talk about what our solution actually looks like and some of the best practices around it. Yep, thanks, Josh. And to add to what Josh just said, it wasn't only the Slack channel; they had really good examples available online in their GitHub repository that we could take and then customize, pun intended, for our use case. So I'm going to start with my first example, which is a mono-repository example. As our keynote speaker Christy said this morning: you copy, paste, evaluate, and then you customize. That is exactly what we did. We took their example for a mono-repository, and what we have here is apps. These apps are customer-specific apps. In the base, we have an identity subsystem with a bunch of microservices, and likewise a processor subsystem. Each customer has their own Kafka topics, and so on and so forth; that's all defined here in apps. Then in infra, we define whatever is shared across all of the tenants on a particular cluster, such as the Strimzi operator or the KEDA operator. And then in clusters, you bootstrap Flux and point it at whatever Kubernetes cluster you're targeting.
And for this demo, we have dev and prod, as we're calling them, but they're both Docker Desktop clusters running locally on our Mac, just for this demo. And then obviously we have tenants. This is where we felt Flux came to the rescue and just instantly worked for us. These are our actual tenants, the tenants using our product. When we onboard them, we do nothing but create a PR, or an automated commit, to this repository and add a folder there, and we'll see a demo of that later. This repository structure is the pretty standard layout that Flux suggests. Underneath each cluster, I have an infra.yaml and a tenants.yaml, which point at those folders in the repository: infra points to infra/dev, and my tenants Kustomization points to tenants/dev. This is under clusters/dev. Similarly, underneath prod I have the infra and tenants definitions again, and if you notice, on line number 11 it's pointing at the tenants/prod path. So each environment gets the tenants defined in its own directory. All right. Now I'll show you how this looks on the cluster; again, this is my Docker Desktop. If I look at the Helm releases I have, you can see the power here: I can customize my Helm releases. They're defined in base, but then I'm applying patches, and my Helm releases become customer-specific. I have an acme identity service Helm release, and I also have a contoso identity service Helm release. They're pretty much the same exact Helm releases, but one for each customer.
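As a rough sketch, modeled on the public flux2-multi-tenancy example the speakers started from (the names and paths here are illustrative, not their exact manifests), a clusters/dev/tenants.yaml Kustomization looks something like this:

```yaml
# clusters/dev/tenants.yaml -- illustrative sketch based on the
# flux2-multi-tenancy example, not the speakers' actual manifest
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: tenants
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./tenants/dev   # clusters/prod/tenants.yaml points at ./tenants/prod instead
  prune: true           # deleting a tenant folder cleans up its resources
```

The same pattern is duplicated per cluster, so each environment only ever reconciles the tenant folders under its own path.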
I also have, if I look at the Kafka topics, topics for each of those tenants: an acme topic, a contoso topic, and so on. And if I look at my pods, this is what's really important for us, because each tenant runs in their own namespace. All of their resources are in their own namespace, and that's multi-tenancy right there: they have their own service account, their own access to secrets, and so on. Everything is locked down by namespace. Right here I have those two pods for acme, and if I go into contoso, their pods are in there. For this demo I'm using the podinfo app from Flux, so if I open one of those in the browser, it's the standard podinfo app running in there. Now I'd like to ask something of Josh. In both of these clusters I have only two tenants, acme and contoso, and I'd like Josh to merge my PR. I have a PR opened against my repository here, onboarding Aviato for the demo. All I'm doing is adding a commit to the Git repository, and the effect should be that I see pods for this new customer, a Helm release, and a Kafka topic for this new tenant. Josh, please go ahead and merge this PR. And I'm using the handy GitHub app here, so in real time we're going to merge this and see if it works. All right. Successfully merged and closed. Great. Okay, let's go into OpenLens and see if I've got a new namespace; maybe not yet. Let's take a look at the reconciliation. If I go to the Kustomizations and look at my tenants Kustomization, let's see what the state is. Maybe it's reconciling.
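While that reconciles: in the flux2-multi-tenancy layout this demo is based on, the folder a PR like this adds typically contains the tenant's namespace, service account, and a role binding that confines Flux to that namespace. A hedged sketch with illustrative names:

```yaml
# tenants/base/aviato/rbac.yaml -- illustrative sketch, not the actual demo files
apiVersion: v1
kind: Namespace
metadata:
  name: aviato
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aviato
  namespace: aviato
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: aviato-reconciler
  namespace: aviato
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin   # a RoleBinding scopes this to the aviato namespace only
subjects:
  - kind: ServiceAccount
    name: aviato
    namespace: aviato
```

Because the tenant's workloads reconcile under that service account, one tenant cannot touch another tenant's namespace.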
And eventually I should have a new namespace and a new Helm release. All right, here it is. I've got my new tenant, my new customer, running all the pods it's supposed to run, based on what's defined in base and in the dev path. So this is pretty straightforward: it's what Flux promises to do, and that's what it does here. But in reality, we don't have a mono-repository structure. For each of our tenants, their customer apps, their Kafka or data requirements, and so on, we actually have a multi-repository structure, and there are some best practices we follow on top of what I showed you in the demo. If I look at all of our repositories, the ones in green are customer-agnostic deployments, and the ones in yellow are customer-specific deployments. So how do I make all of that available when Flux is reconciling? We use a bunch of includes, so everything gets accumulated in the Kustomization: everything is pulled in from the different repositories and then applied to the cluster. That's one of the standard Flux features we use. Another is flux diff. It's a very powerful tool Flux gives us: even before I merge a PR, I can see what the state would be on the cluster. We run flux diff in our GitHub Actions, and it will tell me, by the way, if you were to merge this commit, it's going to break on the cluster for this particular reason. Then we go fix things, because we're using flux diff as part of our continuous integration. That's really powerful. And yet another thing is alerts and notifications.
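A sketch of how a flux diff gate might be wired into GitHub Actions; the workflow below is an assumption about the shape of such a job (repo paths and Kustomization names are illustrative), not the speakers' actual pipeline:

```yaml
# .github/workflows/flux-diff.yaml -- illustrative CI gate; assumes the
# runner has kubeconfig access to the target cluster
name: flux-diff
on: pull_request
jobs:
  diff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: fluxcd/flux2/action@main   # installs the flux CLI
      - name: Diff the tenants Kustomization against the live cluster
        run: |
          flux diff kustomization tenants \
            --path ./tenants/dev \
            --kustomization-file ./clusters/dev/tenants.yaml
```

The command exits non-zero when the proposed manifests drift from (or would break) the cluster state, which is what lets the PR check fail before merge.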
So we have providers for both Slack and Grafana, and as you see, Flux is constantly reconciling and sending us notifications to them. We also have generic webhooks that trigger certain things for us, so we're using all of that. Something else that's really important for us is post-build variable substitution. As you can see, we have a bunch of clusters: dev clusters for staging and integration in the US West region, obviously on AWS, but also in EU Central, and so on. We rely heavily on post-build substitution because each environment has different values for these things; we define them per environment and use them depending on where we're sending our workloads. That's again one of the best practices, and a really good tool that Kustomize, along with Flux, provides, and we use it heavily. Along with all of these things, this is serving us really well, because now I onboard a tenant with an automated commit, and I also off-board them when they're done using our product by sending another commit that removes the Aviato folder from my repository. Flux then reconciles that, because it has prune set to true. If you look at my tenants.yaml, on line number 12 you see prune: true, which means that if a resource is no longer found on that path, Flux will automatically clean up all the resources and delete the namespace. So I don't have to keep notes and reminders for myself that something was off-boarded but not fully deleted; everything happens automatically. With that...
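The post-build substitution just described lives on the Kustomization; here is a hedged sketch with illustrative variable and ConfigMap names:

```yaml
# Illustrative Kustomization using Flux post-build variable substitution
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/dev
  prune: true
  postBuild:
    substitute:
      region: us-west-2        # inline value, illustrative
    substituteFrom:
      - kind: ConfigMap
        name: cluster-vars     # per-environment values, e.g. domain, cluster name
```

Manifests under the path can then reference `${region}` and the ConfigMap keys, and each cluster supplies its own values, which is how one set of base manifests serves staging, integration, and every region.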
And I'll add to the off-boarding piece, because this was really meaningful for us internally, and I think many folks have had experience with a traditional, more push-style model. We'd onboard particular tenants, maybe for a trial, or a POV or POC with a customer who was ready to buy, and we wanted a formal environment for them. Or even in dev and lower environments, we were always spinning up practice environments and test environments, and then we'd off-board them and stuff would get left behind, like zombies on your cluster, except they're very expensive zombies. That was a big problem for us. The desired-state model was really attractive, especially for off-boarding, because we had a great deal of assurance that those resources would actually get cleaned up and we weren't paying for all these zombies sitting around in our environments. Especially in lower environments, developers would spin up a new tenant to test with, spin it back down, do that ten times a day, and each time something would go wrong and 20% of it would be left over. That adds up day after day, week after week. It also added a burden to our DevOps team, because there was always manual cleanup that needed to happen. So this model, this multi-tenancy of multi-tenancy, really helped us save not only on complexity but on a lot of cost as well. So, in the last few minutes we have here, and we'll leave some time for questions as well, there are a few learnings we had from this. No product's perfect; there are always pain points, and these are a few of the key learnings we came away with. Our organization relied heavily on Helm charts.
And one of the things that's super valuable with Flux, especially in production, is the ability to do drift detection. Someone goes in and says, I'm just going to delete this resource, whatever it may be, and there's a self-repair aspect: it will repair itself pretty quickly. That works as expected for typical deployments, whether you're deploying plain YAML manifests or using Kustomize. One of the challenges we had is that traditionally, and this has since changed, Helm charts didn't have the same level of drift detection we expected. What that meant is that if someone deleted the HelmRelease itself, things would reconcile and it would come back. But if they deleted a resource that the Helm chart had deployed, we didn't get the expected behavior. Now, it's worth mentioning we were able to work around this with notifications and some of the tooling Neeta showed you for understanding when things are happening, but it caused us a few hiccups. Luckily, there's now a drift detection feature for Helm, and I believe it's in preview right now, that gives you the drift detection you expect when you're using Helm charts. So even though it was a pain point for us, if this is new for your environments, it may not be a pain point for you. We'll be waiting until it goes GA, so we have a little bit of a wait there. The other aspect that caused us a few challenges is that there's no real, super-powerful admin console. Now, once again, we were able to address this by spinning up some really nice Grafana dashboards. We had lots of telemetry, no shortage of data points coming in, no shortage of visibility.
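For reference, the Helm drift detection described above later surfaced as a field on the HelmRelease itself; this sketch uses the newer helm-controller API with illustrative names, and the feature's availability depends on your Flux version:

```yaml
# Illustrative HelmRelease with drift detection enabled
# (newer helm-controller API; was in preview at the time of this talk)
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: identity-service
  namespace: acme
spec:
  interval: 10m
  chart:
    spec:
      chart: identity-service
      sourceRef:
        kind: HelmRepository
        name: internal-charts      # illustrative
        namespace: flux-system
  driftDetection:
    mode: enabled   # recreate chart-managed resources deleted out of band
```

With the mode enabled, deleting an individual resource that the chart deployed triggers a correction on the next reconcile, which is exactly the gap the speakers describe.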
But for folks who weren't familiar with Flux and just wanted a web page they could hit that shows them admin stuff, we didn't quite have that experience. So we had to do a bit of work ahead of time to deliver some full-featured Grafana dashboards and other dashboarding tools that gave us the visibility we needed. The other thing we ran into is that we're in a very large environment, hundreds of microservices, even more so with the tenant-specific ones. What we found over time is that we had to tweak how often we reconcile certain types of resources. We want to reconcile our tenant-specific resources pretty quickly, because tenants might be spinning up and spinning down. But we also deploy things like Neo4j and Mongo, and those we don't need to reconcile very often; if we're reconciling those frequently, we're probably doing something wrong. All those reconciliations cause overhead in the cluster, and these days, with hundreds of operators from various solutions all landing on your cluster, we found we needed to tweak how frequently we reconcile things, because frequent, unneeded reconcile cycles were adding a lot of unneeded overhead. The last thing, and this is more of an interesting anecdote: most people here are probably familiar with the garbage-in, garbage-out statement. If you have badly defined deployments, they stand out ten times more when you're using Flux, because every time it reconciles, it will let you know you're doing something you shouldn't be, as opposed to what would normally take place manually.
So the quick story here, and luckily this was just in dev: most folks are aware of the different policies in Kubernetes around persistent and non-persistent storage. If you don't define your storage the way you intend, it may be less persistent than you expect. Someone in dev goes and deletes something, Flux reconciles it, and you lose that storage. So it's not a Flux thing, but it made some QA challenges in lower environments, before the manifests were fully QA'd, a little more painful, because it became evident very, very quickly when you had a problem. Garbage in, garbage out. Flux made us fix our deployments; it made us aware that we should have a reclaim policy attached to our volumes, whether we want to retain them or delete them. It put the inconsistencies, the things we'd overlooked, right in our faces: hey, we've got to go fix this. Which is, once again, both a blessing and a curse. One thing I want to mention as we wrap up and open for questions: the demo Neeta did is all available in a GitHub repo, and yes, the demo code is up there. You can clone it and run it. We tried to make it so you could take and apply our learnings without all the crazy dependencies we had, so you can actually pull it down, use it as-is, and explore how we did multi-tenancy of multi-tenancy with some pretty simple use cases. We included a little extra with a Kafka cluster and deploying topics, so you can better understand some of the common dependencies you have when deploying these things. A couple of quick resources: obviously, one of our best experiences was working with the Flux community.
Definitely check out the Flux community link there; it shows how to reach the community through the Slack channels and other mechanisms. That really helped us on our initial journey and reduced a lot of pain. And obviously, if you're interested in Flux, follow the Getting Started guide. Please reach out to both Neeta and myself on LinkedIn; feel free to connect, and we're happy to answer questions. At this point, did you have anything else you wanted to add? No, this is great. We love Flux; it solves our problems. Thank you. And we've got just a few minutes left for questions, if there are any from the audience. Yes, fire away, please. Oh, good question: what tools did we use, or how did we determine what intervals to use for reconciliation after the fact? We didn't use any tool, but we looked at the usage pattern. For example, what are the things that are changing more frequently than others? If I have a microservice for a tenant that isn't changing often, it can be reconciled at a lower frequency, maybe every 30 or 40 minutes. But if there's something that is changing often, it needs faster reconciliation. For example, when we're onboarding in dev, we get hundreds of tenants every day, so the interval on my tenants Kustomization is much shorter, versus my microservices that aren't changing as often; once something is done and I know we're not actively working on it, that interval stays longer. So we didn't use any tool, just usage patterns and some of the best practices from Flux.
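The pattern in that answer, fast reconciliation where there is churn and slow reconciliation for stable services, can be sketched as two Kustomizations with different intervals (the intervals and paths here are illustrative):

```yaml
# Tenants come and go frequently in dev: reconcile often
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: tenants
  namespace: flux-system
spec:
  interval: 2m           # short: tenant folders change many times a day
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./tenants/dev
  prune: true
---
# Stable infrastructure (databases, operators): a long interval is plenty
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infra
  namespace: flux-system
spec:
  interval: 1h           # long: these manifests rarely change
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/dev
  prune: true
```

Each interval is just the periodic re-apply cadence; a push or webhook can still trigger reconciliation immediately, so the long interval mostly caps the steady-state overhead.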
If I have a lot of reconciliations happening, I may need to tweak my controllers, the source controller or the kustomize controller. So I just follow the best practices from the documentation and make sure I'm running the right number of workers for that. So for your infrastructure stuff, it's quite a long reconciliation interval versus your apps? Correct, right. And what we're thinking as we learn from this is that instead of doing everything together in one Kustomization, maybe we break it up: some of the things we want to reconcile often can be one Kustomization, and the things that are happy to sit there forever can be another. Same for tenants: right now I showed one Kustomization for the entire tenants directory, but I could split it, maybe by tier, so my bigger customers who pay more get faster reconciliation than my other customers. Things like that. And it's important to know that you'll discover you need to do this when you find yourself asking, why do I keep having to increase my resources over and over again? To a certain extent, there's a potential to run into something like a deadlock scenario with the frequency of reconciliations. The trigger for us was looking at the resource utilization of all of our operators, and there are so many operators these days, and going: okay, we probably need to do something better about this. And I think the key thing, too, is keep it simple at first. You've probably heard that in a lot of conversations and presentations: iteration is the key.
You can absolutely overthink this and over-architect it, and it's easy to do. You can end up with some crazy directory structures, or go to the other extreme: we had a monorepo for the demo, but for something very simple and straightforward you can end up with ten repos, breaking everything out. So keep it simple and watch the telemetry; it will guide you on what direction you need to grow. Watch the patterns in how developers are doing PRs, what they're triggering, and so on. And the nice thing is, and once again we had a really good experience working with Flux, it was super flexible and allowed us to evolve. It wasn't a big deal to say, okay, we need a little more structure here, let's pull this out into a different repo. It wasn't a scary thing to do. Another thing that works well for us is the dependency management. If I know that my Helm chart A needs B and C and D before it can reconcile, I can declare that on my HelmRelease through the dependencies it needs, which means Flux will make sure those dependencies are available before it reconciles that chart. I define those for my Helm releases so it isn't constantly retrying. So we don't have a scenario where we're spinning up a service that uses a topic that hasn't been created yet, or referencing a database that hasn't been created yet, things like that. But good question. Any other questions? One for the Flux creators in the room: are we doing things right? Is there anything we can improve? Did we just scare you?
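The Helm dependency management mentioned a moment ago is the HelmRelease dependsOn field; a hedged sketch with illustrative release names:

```yaml
# Illustrative HelmRelease that waits for its dependencies to be Ready
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: chart-a
  namespace: acme
spec:
  interval: 10m
  dependsOn:             # these HelmReleases must be Ready before chart-a installs
    - name: kafka-topics
    - name: database
  chart:
    spec:
      chart: chart-a
      sourceRef:
        kind: HelmRepository
        name: internal-charts   # illustrative
        namespace: flux-system
```

Kustomizations support the same dependsOn mechanism, so the ordering trick works for plain manifests as well as charts.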
And obviously we're going to have some oh-my-gosh moments, and we will reach out to you, and we're going to fine-tune our controllers. We're going to iterate and learn, and hopefully we'll come back here next year and say: this is what happened when we had 5,000 customers, and this is what we did. It's a good place to be, having 5,000 customers; it's a good problem to have. But we're pretty confident in where we landed, in terms of the successes with this. Our developers are happy with it, and we feel like we're getting the velocity we want from it. So we're pretty happy with where we landed, but we know we're going to learn, we're going to grow, we're going to probably make some mistakes, and hopefully we'll share those and what we did about them. Yes? Say that again? Sharding support? No, we haven't. Have you seen the new sharding support? No. You should check it out. What it can help with is when your Flux controllers are bottlenecking because you have too many resources. I'm assuming you're going to need this before you hit 5,000 or 10,000 customers, because we had a couple of people say this was a blocker for them, so it was sort of fast-tracked toward the GA. I think we have 2.0.0-rc.2 now, so you can check it out and see if it solves the problem for you. Awesome. And please mention your name? Oh, I'm Kingdon, I'm a Flux maintainer. There you go, you're on record now. But thank you so much. I know this is the last session of the day. Well, enjoy the rest of the day, and thank you so much. Thank you.