 I have the absolute honor to introduce to you our last speaker, or better to say the last Q&A, something like that. A luminary in the world of technology, Kelsey Hightower has over two decades of experience in the software industry. Kelsey has not only built and led high-performance teams but also been a key contributor to groundbreaking projects such as Kubernetes. As an esteemed developer advocate, author, and speaker, Kelsey has become synonymous with the Kubernetes community, providing invaluable insights into its growth and evolution. His passion for open source technologies and his ability to make complex concepts accessible have made him a sought-after presence at conferences around the globe. And in his role as a distinguished engineer at Google Cloud, Kelsey has been key in shaping the future of cloud-native computing. And as the over 56,000 stars on his no-code GitHub repository show, he is also a proven minimalist. What else could I say to that? He might not be able to hear you, but please give him a warm welcome: the inspiring, the innovative, and the incomparable Kelsey Hightower. Hey, thanks, Max. I think I'm here. We're gonna do a bit of a Q&A. I'm happy to be a part of the fundraiser that you all are doing for such an important cause. So thanks for giving me a little time to participate. Thank you for being here. And I would like to throw the first question into the room, which we see already in the chat, which is: what is Wasm anyway? Yeah, look, you know, I'm a non-expert. So the original title for this talk is "What is Wasm? A non-expert's opinion." And so for me, you know, coming from my view of all of this space, I'm someone who typically writes backend applications, spent a lot of time writing Python, where there's a runtime that has limitations. And that runtime has a global interpreter lock that tends to slow things down. If you're gonna do something like multi-processing and multi-threading, you have to work around those. 
But overall, Python definitely works for me. I've done some Ruby, which has similar limitations, and I've also written Java. And when I think back to the Java days, you know, this idea that there'd be a runtime between the OS and the code that I write, compiled down to bytecode, Wasm makes me feel like Java all over again, but the goals are very different. I think most people have interacted with Wasm at some point; just look at your web browser. A lot of those extensions or plugins, or the ability to run native code directly in the browser. I would say Wasm did a killer job going beyond what JavaScript can do and giving companies like Adobe, if you're running Photoshop in a browser, the ability to run C++ in the browser at native speed. Wonderful. Now here's the challenge: is Wasm suitable for the server side? Now this is the part where I'm not quite sure. I would say, for me, I love the frameworks that I get to use. The standard library in Golang is great. A lot of people have used things like Ruby on Rails. And so we've seen this a bit before, right? There were unikernels, right? Before containers, or right around the time of containerization, a lot of people were excited about unikernels, this idea that we can shrink down what we ship, something smaller than containers, something more secure. I think Wasm has similar aims. You know, there's this whole WASI standard to say, hey, what do we need to add to the Wasm runtime to make it work well on the server side? Like the ability to connect to a network, reading files, right? All the fundamentals. So it seems like we're building those things up from the ground up, and ideally in a secure way. So in short, a non-expert's opinion: Wasm is a new runtime, similar to what you think about with the JVM, scoped way down for the type of workload it aims to support. That could be a plugin for an existing app; like, if you have a web server like Nginx and you wanted it to have extended functionality, you could do that via a Wasm module. 
And it will feel like a web browser in that case with new extensions, or it could feel like the JVM but tightened down for security. Do we really need all of those language facilities when we're just writing HTTP applications? Maybe the answer is no. And if that's true, then we can start with a much lighter weight runtime, aka sandbox, and see where we go from there. Great, awesome. And I think this also answers a little bit the question which we get from Sven, which is, on the one hand: do you have non-expert opinions on features you want to see in a Wasm installer for Kubernetes? I think the number one feature is: why would people use it? Like, seriously, why? You can use Golang, you can write apps in Rust. You can keep using Ruby on Rails, that works just fine. Or maybe you prefer Node.js. I think the question we still have to answer here is: why would you want to target Wasm? So to me, Wasm is almost like a different computer in some ways, right? You're not going to be able to do exactly the same things you were doing before. So if you're used to having the entire Linux system call interface at your disposal, and your libraries and frameworks use that, if you run in a Wasm runtime, more than likely those things won't be there. So things may not work like you expect them to. And so now we have to ask the question: why would I make that trade-off, right? All software development is about trade-offs. So we get the sandbox stuff. So to me, I think it needs to be easier to tell if the code I'm writing will work or not. So to me, I think there's a little bit more on the early adopter side of: I have an existing app, can I get any value from Wasm? So if I take my standard Go code that I tend to write, is there a way to parse that file and say, hey, this is not going to work? Or recommend a Wasm framework that will allow me to have a drop-in replacement so I can keep writing that app the way I always have. 
And I know there are some startups that are working on this, where you import a library and they take care of all of this kind of bootstrapping for you, so you can go back to writing what feels like normal style code. So the number one feature I want is: will my code work? Whatever that looks like. It could be a static analysis tool, but just give me something to say, here's how you go from where you are to what's next. This actually fits quite well together with, say, going back to minimalism, keeping it simple. Developers need to be able to use it right away, straightaway, not jumping through hoops ten times until they find some way to troubleshoot it and somehow open up this kind of black box afterwards. I will say one positive thing: the way the community has been interacting with the existing world. Like the containerd shim that allows you to run the WasmEdge runtime, package your existing Wasm module in an OCI image, put it in a standard Docker repository. And for a lot of people, that is a really good way to just reuse a lot of infrastructure that they already have. So I would say bonus points for adopting the existing deployment rails. That is very different than how some other runtimes tried to enter the market, by telling you that you had to just start over with all of your infrastructure, versus being able to leverage what you have. So kudos and bonus points there. Great, awesome. Do you have some good starting points for the people who have never heard about Wasm before? Where it's like, oh, there are one or two resources you definitely should hit up and take a look at? I would say if you're new to this but you have experience with backend applications, deploying anything. 
I think, especially since we're talking to the Kubernetes community, I would definitely look at the native integration that you see for Docker, if you're just using plain old Docker, or the stuff that you can see in the Kubernetes community where you can kind of jump in with a new containerd runtime. So today we have runc that runs your standard container images. In this case, you get a different runtime that runs side by side. And I would just go through the Hello World tutorial. So number one, get your infrastructure, the current infrastructure you already use, able to execute Wasm modules. I would say things like WasmEdge are a good fit there. There's really good tooling, really good documentation. Once you're there, again, just start with the Hello World stuff. Make your Wasm app; you'll feel the limitations of what you can and can't do, but soon enough you'll find you'll be right back into the process that you're familiar with: packaging it up in a container image, storing it in the registry, and then running it inside of your existing cluster. The second thing I would probably do is look at tools like, you know, Cloudflare has a whole ecosystem around extensions inside of Cloudflare. And I think those are written in Wasm, or they support Wasm. So maybe your favorite tool already supports Wasm as an extension or plugin layer. And so for a lot of people, if you're not interested in writing brand new apps for this new sandbox, maybe you're interested in extending your existing applications. I remember writing my first Lua module for Nginx. It was really cool that I could extend the functionality. If you start to look at systems like Istio, service meshes, various other systems including databases, they're starting to support this idea that Wasm will be the common layer. So go look at some of your existing tools and see if they support Wasm as a plugin architecture. Then you can just start writing snippets of functionality and deploy them. 
And that might be a good entry point instead of writing whole new applications. Yeah. So actually following the approach we've also seen going from monolithic applications towards microservices: start small, get the first pieces done, make your experience, find out what doesn't work, and keep moving. Awesome. Sven is furthermore asking: do you see a use case for Wasm containers in multi-architecture clusters? Again, we're back to the runtime discussion, right? In Golang, I can easily cross-compile between the various architectures. The JVM also gives me a lot of functionality in that space. And so I don't know if I'm gonna pick Wasm to get cross-architecture support. I think you can get that with existing tools. Let's just be honest with ourselves: today, multi-arch support is really, really good, even on your laptop with macOS, right? You can use things like Rosetta to run x86 binaries on an ARM processor. So I don't know if that's Wasm's number one selling point. I honestly think it's gonna be a slimmed-down sandbox and fast startup time. Look, I work at a large cloud provider, and so we have a spectrum of compute, everything from VMs to container platforms like GKE that are backed by Kubernetes. But what about the serverless side? Is Wasm a good way to probably do faster serverless? Is Wasm a good way to think about embedding functions inside of tools like BigQuery, so you can do data analysis with custom functionality? That's the way I'm looking at it. So less about multi-arch support and more about: can I get better benefits from the security sandbox? And remember, Linux containers are not necessarily considered the most secure thing in the world, right? Even in that space, you can see us using sandboxes like gVisor, which sits between the OS and the app and emulates those system calls so that we get a tighter sandbox. And some people are going back to even VMs to be the sandbox layer, because they can start up pretty fast. 
So maybe Wasm is all about: how do we get VM-level isolation and not necessarily have to adopt all of that overhead? So I think that's probably gonna be the biggest value, but you never know. I'm pretty sure some people will like the cross-platform nature of targeting a runtime with bytecode, but we've seen that before. So I wouldn't over-index on it. Well, it's always a little bit hard to predict the future, right? In preparation for the day, we were just for fun asking ChatGPT, hey, what do you think about the future of Kubernetes? And obviously the data which ChatGPT is trained on is a little bit older. So surprisingly, or not surprisingly, it perfectly matched what we have seen throughout the day: some GitOps, some security. Now, I think you're a little bit more aware of what's going on in the ecosystem around Kubernetes than ChatGPT with its data from 2021. What's your prediction? What are the next topics besides Wasm? If Kubernetes is super successful, it will disappear. If it's successful, it goes away. And it goes away in the way that Linux goes away. It's still gonna be there. It'll be more important than ever. More people will use it than ever, but it won't be so in our face, right? It will become reliable like roads and bridges and freeways. And I think it just gets buried. And then I think what people start to focus on is the platform-y level of things, right? And we'll give that new thing a new name, right? Just like Kubernetes is a different name than Linux, because it represents a whole different set of abstractions. I got a real feeling that eventually we'll get to a point where most people just talk about workloads, not deployments and pod disruption budgets and mounts and volumes. I think the libraries will just get a little bit more sophisticated. I think people will focus more on building apps. It's funny, I work at Google; we just open sourced a tool called Service Weaver, just last week. 
And what Service Weaver does is: you write standard Go code, and the framework, depending on how you deploy it, will decide to break that monolith up into multiple pieces, create RPCs in between those components, and then make all the underlying deployment decisions on top of something like Kube. So in the case of Service Weaver, it treats Kubernetes like one big computer, generates all of the necessary startup code, aka pod manifests, et cetera, under the hood, including when it's time to do updates. And so in that case, you can see that Kubernetes has now been abstracted away into a low-level implementation detail. So I think in the future, as Kubernetes continues to get better, thanks to the community, by the way, it starts to fade away. And then we all go work on the next set of abstractions that hide those. Yeah, that's a very interesting perspective. And I think also one of the biggest challenges yet, which we see also working with end customers. I mean, in reality, most larger enterprises have at least two or three cloud providers in their tech stack, plus a lot of stuff on premise, sometimes not very well connected with each other, but surprisingly there's a container or a Kubernetes cluster running somewhere everywhere. And we see that there is already a lot of work on unifying that in some way, because even though Kubernetes itself is somehow everywhere compliant with itself, it's still different talking to API gateways, to the storage, sometimes to just provision different resources, or simple stuff like, well, simple stuff is maybe not the right word, but like key management. And all of these challenges often land on the end users, and we see them, yeah, very much challenged. And I like this perspective. We recently discussed with a customer this idea of a cloud-of-clouds platform, like a strategy to leverage that, like, forget what you have there. I mean, one thing I would say, and maybe we unfairly did this to Kubernetes, but we asked it to do too much. 
You know, managing data volumes, managing secrets, rotating secrets, managing metrics and observability platforms, that's a whole job and skill set of its own, right? This would be equivalent to asking the post office or FedEx or UPS to also manufacture TVs. These are separate layers. Those logistics companies are really good at moving things that are in a box. And to me, that's what Kubernetes is good at, right? It's good at moving applications inside of container images. That's what it wants to be great at. But I think we see this magical control plane and we say, huh, wouldn't it be nice if we could get a similar API and management style for all of these other systems? And I don't know if it's the perfect fit. It works, but I think at some point we have to realize that these control planes, we do need them. These workflows and orchestration engines, we do need them, but do we need Kubernetes to be the universal thing for all of those things? And that's where I would pause and say, I'm not sure. I do think using Kubernetes to configure those things makes sense, but those actuation engines, they deserve their own focus. So that way, just like CDNs gave us the ability to distribute media across the world, Kubernetes gives us the ability to distribute containers across multiple machines. Why don't we still focus on good control planes that may stand alone, that may run outside of Kubernetes, to provide those equal benefits for the other layers: secrets management, logging, observability, you name it. Do you already see projects in this direction, which would be like the Kubernetes plumbing platform? Well, yes and no. Yes, they're called cloud providers. They're called the CockroachDB SaaS offering. They're called Gmail. They're called search engines, right? They're called Stripe, right? These are all platforms of their own that have APIs for doing purpose-built things. Kubernetes happens to be a place where you could run one of those platforms. 
You could use it to keep the application containers that back those platforms running, but I do think we have to have a mentality shift. I would say, from an open source perspective, I think Let's Encrypt is a golden example, right? There's this universal service out there that allows anyone in the world to use a common API to mint TLS certificates and rotate them. That is a hyper-specialized service. It's open source, but in this case, it's not a, I mean, you could run parts of it yourself, but it isn't a run-it-yourself situation. It is a global service that you can use. So I do think that open source mentality has to progress beyond source code that you download and build and run yourself. What is the open source equivalent of a managed service backed by a community? And so if you're willing to go there mentally, then everything doesn't need to be installed in your local Kubernetes cluster. You can expand that to be more global services. And then you use the ones that are specialized. And now you're just interacting with APIs; Let's Encrypt is just one example. And one day, hopefully, we'll see a global service for Postgres, where you don't necessarily need to install Postgres in your cluster. You can just leverage a fully distributed, replicated Postgres instance via library and credentials. That's a very interesting perspective. So you think that one of the big steps for Kubernetes, maybe one day, Kubernetes version two, fingers crossed, is like that: we have an unlimited amount of services out there which use a similar Kubernetes API, without all of it being Kubernetes itself, but as a facade. Yeah, but think about it. Everyone running Kubernetes today, your organization, big or small, isn't that what you're trying to do for the people that work there? You don't want them thinking about how to provision Kubernetes or what version of Kubernetes that is. Most platform teams are saying: we will take care of that component. 
Our job is to unify that process, whether you're using something like GitOps. GitOps is a good example of our attempt to provide that facade. I could have clusters running across multiple regions. Hell, multiple cloud providers. But in that case, I don't want to leak all of those details to everyone at the company. That would be highly inefficient, right? We don't want to talk about what version of Prometheus I install. We just want to tell people that if they expose slash metrics, we will configure a loop to pull those metrics into a central system. Isn't that what we're building? So if that's what we're building, then we know for a fact that Kubernetes is just the ingredients. We're the chefs that are trying to create the full meal for the people we work with. Don't you think at some point this will become the new baseline, and then ideally new providers that show up will just say: we have compute everywhere. All you have to do is just give me your Kubernetes manifests and objects, and hell, maybe there will be a few custom ones too. And then you can just leverage the system without running it yourself. I think a lot of organizations are really going in this direction and looking for it. But yet it often feels like they get this very big box of Legos, which is the open source projects, and try to build, out of these different bricks with the different colors and the different shapes, something which is useful for them. And I think there's... Yeah, that's probably right, because Linux was like that, right? You have Linux, you have the shell, you have sed, awk, you have bash, then we get Puppet, Chef, Ansible, then we get Terraform and cloud providers, and we step back 10 years later and we bring it all together and we call it Kubernetes, which does a lot of what those individual pieces were doing ad hoc, in a more standardized API. 
And look, give it another five or 10 years, and all these patterns that the people who are listening are building will become the new thing. And then we'll just use that, and maybe fewer people are starting from scratch. Yeah, cool. We received another question from Sven. He's very active. Have you seen Kris Nóva's Aurae? And do you think that projects like this would compete with Kubernetes or contribute to evolving concepts, for example, node management? All right, so I'm going to go look at this right now. I haven't seen it. So I'm going to just breeze through it. It looks like there's a distributed systems runtime daemon. Is it written in Rust? Right? Yeah. And I'm just going to read the readme really quick and pretend I know what I'm talking about. And it says it is on a mission to be the most loved and effective way of managing workloads on a node. Our hope is that by bringing a better set of controls to a node, we can unlock brilliant higher-order distributed systems in the future. All right, so just reading that sentence, I'm going to pretend that I know what I'm talking about. It says it deploys a memory-safe runtime daemon, a process manager, as PID 1 to initialize the system, schedule processes, containers, virtual machines, and the whole nine. All right, so now let's back up. So pretending like I knew that already, I can see where they're going, because when I was at CoreOS we wanted something similar, right? There was a Docker daemon. We thought it was too complex. We thought it had too many moving pieces, and we thought we needed a better PID 1, and we thought the better PID 1 was systemd. And so we built a system called Fleet that would leverage systemd, and actually we advocated for systemd to grow more and more features. And systemd, I think, can create VMs. systemd can actually pull containers and start them, or it can just run your standard binary from a path. 
And so I look at this project and say, hey, look, what if we had a better systemd with a better API? And for some people that was Docker. And just like all innovation, people like Kris say, hey, we've learned a lot in the last decade. What can we do better now, given the new tools and knowledge available to us? So what I see here is her collapsing the layers, right? Saying, you know what, let's take all that we've learned, all the moving parts, let's squish them down again and let's try again with something that's way more unified and more purpose-built for this case. Does that replace Kubernetes? It could, right? If you don't need Kubernetes and all you need is a big node that does the right thing. Or maybe it challenges the kubelet, right? Should the kubelet have more capabilities and features? And if so, does this become the new kubelet that runs on the machine, that happens to be able to be interacted with by a scheduler like Kubernetes, right? I can see this also being great for a system like Nomad, right, a universal process manager that can actually manage more than just processes: containers and VMs too. Hey, we'll see, right? If it pans out and we need a better layer at that layer, I think this is a good thing to be in the running. Awesome. Now, something more non-technical, as you have been in the community for a very long time. We see that Kubernetes Community Days are popping up everywhere around the globe, in different cities and places where we maybe would not even expect them. What would be your two cents you would give to all of these little communities when they get started, and what are your hints for being successful in their jobs: organizing the local community, growing the community, which is sometimes the hardest part, and making it flourish? I would say this: you know, there's something so important about being local and understanding the needs of your local community. 
I think some communities made the mistake of trying to copy another region like the US, or trying to copy a global event like KubeCon or a CNCF conference. Those are cool places to get inspiration, no doubt, but more than likely there is something missing. Sometimes it's as simple as translating the documentation material into the native language so more people will have access to those things. A lot of times, you know that there are opportunities, whether that's in your local government or local companies that can't really afford to hire internationally; maybe they need to develop local talent. To me, those are the biggest opportunities that someone like me, or someone that only operates as a global entity, cannot address. You know the unique challenges and opportunities of your local environment; double down on that. And then more importantly, once you do, export those unique solutions that you come up with. Right, because of your local needs; like, there are some countries that don't necessarily have advanced infrastructure in terms of networking. And so what they have to do is create more efficient networking protocols. Maybe they need their controllers to not use so much of the network when they're talking to the Kubernetes API, so they make it more efficient. If you solve those kinds of problems, don't forget to share with everyone else, because maybe we don't have those problems, but we can definitely use the efficiency gains from those that do. So don't forget to share as well. That's awesome. I think that's also what the community stands for: share, stay together, get into the exchange, learn from each other. Now we are heading towards KubeCon Europe. Are you going to be there? And if so, what are you most excited about? I do plan to be there. I'm excited about, I think I'm gonna do a couple of workshops, maybe a panel. 
And the workshops are gonna be about helping these open source maintainers and projects figure out how to actually turn the corner in terms of running their business. A lot of these maintainers want the dream job, which is to work on open source full time. And some of those maintainers would love to do that independently. Not everyone wants to go work for a large tech company just in order to work on open source. So how do we do that? How do we help those individuals make the transition to small business owner? Not everyone wants to raise capital. But for those that do wanna raise capital, how do you actually leverage your community in a very authentic and honest way? We all know that open source projects need to make money in order to survive if they wanna be a business. But how do you go about that the right way? And so I've advised lots of companies, Docker, Akuity, Vercel; all of these companies have really good track records in terms of dealing with the intricacies of open source and enterprise. So hopefully I can share some of that knowledge with people that are looking to travel one of those two similar paths. That sounds exciting. And I think it's also well needed, because there are so many cool new ideas, fresh ideas, but quite often they are missing the time to get really developed and kick-started. Now I will shortly check if there's any other question that popped up so far from the... Yeah, there's one more that popped in. It was like, KWasm. Do I know anything about KWasm? Nope, but I'm gonna show you how I do it though. I go to the website and I click on kwasm.sh, and the first sentence says, and I love when people get straight to the point in their descriptions, so thank you to whoever wrote this: KWasm is a Kubernetes operator that adds WebAssembly support to your Kubernetes nodes. It does so by using a container image that contains binaries and configuration variables needed to run pure WebAssembly images. 
So without diving too much deeper into what it actually is, I'm going to assume, because this is what we used to do way back in the early Kubernetes days before we had DaemonSets, and it's also how we leverage DaemonSets: if I wanted to tune the kernel on my Kubernetes nodes, and you want to do it the Kubernetes way, you might run a DaemonSet just to cheat, right? One that had the right amount of privileges to go and tune the kernel to a certain set of parameters. So in this case, I think there's a similar pattern happening here. In order to make KWasm work on Kubernetes, you need to add some pieces. You need to actually, A, have a new containerd shim, and install something like the WasmEdge runtime. You've got to do those things and configure it to actually work. And then at that point, you can probably leverage Kubernetes to start running Wasm containers that will use the right runtime. Maybe there's something different happening in this case. Maybe they have their own runtime that runs at a privileged level. And I used to do this with Go; I used to cheat. I used to have a pod that was ready to run any binary that it found. And so I ran that as a DaemonSet, and then I could just drop Go binaries at a URL, and this pod would pull that URL, and I would just do that. So I never had to package my Go apps in container images. I had an empty container image that was just looking at a certain spot on the file system, or looking at a URL. And that was a magical way of just using straight Go binaries in a Kubernetes cluster. So it looks like KWasm is doing something very similar, giving you this illusion that, hey, number one, get your cluster ready to go by using something like a DaemonSet, aka a controller, making sure that everything is set up correctly. 
And then once it's done, my guess is it leaves behind either an API or the right things you need to just start using container images or Wasm modules directly inside your Kubernetes cluster. Cool. Well, then let me just ask: what do you think about AI supporting day-two operations for the SRE folks? You know, I got this question a couple of weeks ago, and someone asked me about when to use AI or ML inside of operations. And I had a Twitter space not too long ago, and someone helped me understand the difference between what AI is trying to do, which is predict things, versus what we tend to do as developers, which is symbolic programming. If this, then that, right? That's very straightforward. We use symbols like a mathematical function. When we know what the two inputs are, we can calculate the actual result like a calculator does. Very straightforward. So if you know the answer, then use it. So let's talk about SRE. So in the SRE world, let's say you know your app needs 500 megs of RAM to do its thing. You already know. So just configure it using your YAML file with the 500 megs. How do you know that it's 500 megs? Because you did some load testing. You did some benchmarking. You've tuned the workload to work within those parameters. Okay, you can use a very straightforward process. But let's say you don't know. Let's say you're a hosting company, or you're a platform, and you're getting random workloads. Now you don't have a clue. So now you have to guess, or predict. And in that case, you want to do this at scale, because you may not be in a position to benchmark all your customers' workloads, nor are you in a position to ask them to do so. So you may have a feature that says something like, I don't know, auto-configuration. Well, how would you do that? Well, now you need to predict what the value should be, but you need input for that. So I think when it comes to SRE, we have a day one, which is: do we have enough data about our workloads? 
How much traffic is coming in? When do they fall over? Historical context. If you have all of that data, and I think Kubernetes gets a head start there, right? Because of container images, all the information we get from the kernel; you've got things like Prometheus; there's so much there. If you were to take all of that data from the network, from the app itself, from the rate of things flowing inside of the network, and whatever behavior you can derive from things like logs, you might be able to pull together a really nice model. All right, the same model a great SRE would have in their mind: when they see certain things, they make certain adjustments, or they can be proactive and say, hmm, things seem to be getting busy, I'm going to autoscale even though we don't need it yet, so we can handle the load. Yes, AI will give you the ability to create similar models. So maybe we have an AI/ML-backed autoscaler, but remember the predicate here: we need data that can actually help us predict when to scale things up and down. So I think Kubernetes gives us most of the facilities to make decisions; it gives us enough of that data. Now the question is, what models are we talking about? And those models are going to be a live experiment until we get it right. But I would say Kubernetes gives us a great head start if you want to start using AI. But remember, this ain't magic, folks. We are literally just saying we're going to create models based on data that we see inside of our environments. But you know what, I'm going to guess the majority of people won't need that. The majority of people can do something like time-based autoscaling. 
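The two "symbolic" alternatives Kelsey contrasts with ML prediction, sizing from benchmark data you already collected and scaling on a schedule you already know, fit in a few lines of plain Go. The numbers, headroom factor, and schedule here are purely illustrative:

```go
package main

import "fmt"

// memoryRequestMiB turns load-test samples (peak MiB observed per run)
// into a static request: the worst case plus 25% headroom. No
// prediction involved, just the benchmark data you already have.
func memoryRequestMiB(samples []int) int {
	max := 0
	for _, s := range samples {
		if s > max {
			max = s
		}
	}
	return max + max/4
}

// desiredReplicas is time-based autoscaling at its most symbolic:
// you know customers arrive around 8 a.m., so scale up before they do.
func desiredReplicas(hour int) int {
	switch {
	case hour >= 7 && hour < 19: // scale ahead of the 8 a.m. rush
		return 10
	case hour >= 19 && hour < 23:
		return 4
	default:
		return 2
	}
}

func main() {
	// Benchmarks peaked at 400 MiB, so request 500 MiB in the pod spec.
	fmt.Println("request:", memoryRequestMiB([]int{320, 355, 400}), "MiB")
	fmt.Println("replicas at 09:00:", desiredReplicas(9))
}
```

If a rule like this answers the question, there is no model to train or validate; the ML-backed autoscaler only earns its keep for the hosting-provider case above, where the workloads are unknown.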
You know for a fact that your customers come in around 8 a.m.; you know that you're going to launch a campaign during the World Cup; you might want to scale ahead of time versus predicting. And the last thing I'll say here is, even if your AI model predicts that you need a hundred more nodes in your cluster, the cloud provider may not have a hundred nodes available in that zone or region. So you might want to be a bit more proactive in some of these situations. All right, thank you very much, Kelsey. Out of respect for your time, I know it's very early where you are at the moment. Maybe it's a good time to get breakfast for the kids and the wife. Again, thank you very much for taking the time and joining us; it was very interesting to talk with you. Wish you a great day, a great week, and yeah, hopefully see you in Amsterdam. Yep, awesome, bye.