All right, so thanks for coming, Ron, from Run:AI. We really appreciate you joining us on this little car ride. Hopefully you haven't been to Belle Isle before, so you can see it. It's one of the famous spots in Detroit, and really pretty, I think. But yeah, thanks for coming. Yeah, thanks for having me. So first up, OK, what is it that Run:AI does, exactly? OK, so Run:AI is an AI computing platform. We're working with large enterprises — the largest banks, automotive, health care, a lot of research institutes. So I know research institutes are hard. So we're working with a lot of advanced research institutes. And we essentially help them centralize their AI infrastructure. Gotcha. And allow them to make better use of GPUs and their computing resources, and essentially provide easy and simple access to computing power to their researchers, data scientists, AI engineers, and so on. Gotcha. OK. And so how do you present it to the end user? Is it part of the Run:AI system for an end user to actually participate in it, or is it more on the back-end side? You mean in terms of the platform? Like, can I roll up there and get a Jupyter notebook? Yeah, OK. So we're using a lot of technology in terms of building ourselves on top of Kubernetes, and there's a lot going on under the hood. And I know researchers and data scientists don't want to know much about Kubernetes. They just want to run their jobs and experiment with data and with models. So for sure we know that, and we're bringing tools for them to simply run their jobs, run their workloads, experiment, open Jupyter notebooks. So we do all of that as well. But what's unique with us is that we're also truly open, as we like to call it: we can integrate with any tool — MLOps tool or data science tool — that runs on top of Kubernetes. I got you. You can get, like, simple, very native integration with our platform as well, because we're based on Kubernetes.
And we built it like that. So we have our own tools, and then we integrate with any other tool. I got you. OK, that makes sense. So you were saying before that you're kind of doing a lot more with GPUs. Like, you mean kind of surfacing them up to end users? Or what is it that you're bringing to the table that helps make GPUs accessible? Yeah, so with GPUs today, there's a big problem, I think, and we encounter it time after time: there's a big problem with efficiently utilizing GPUs. So GPU utilization is a big problem, and we see it in a lot of organizations. It's an expensive resource. I think on Amazon, when you take one GPU — the newest GPU — it costs like $30 per hour. So it's an expensive resource, and researchers and data scientists need a lot of it, and it's really important for their work. So getting access to those computing resources and utilizing them in the right way, in an efficient way, that's really difficult. So it's utilization. And the other thing is sharing the GPUs between your team members, between your organizations, between multiple teams. That's difficult to do dynamically. Yeah, well, because GPUs are definitely not designed for that. Yeah, they're definitely not designed for that. And I think if you look at GPUs and you compare them to CPUs: for CPUs, over many years, a lot of software was built to just abstract the CPU usage, right? Right, right. When you run your workload on a CPU, you don't even care on which exact CPU core the workload runs. But it's not like that with GPUs, you know? Well, a bunch of my students have been having problems lately because they're running M1 Macs — in a lot of CS classes, right, there's virtualization and stuff — and they're like, we totally can't do anything, because it's an ARM chip and they can't virtualize anything. Yeah, and it's funny, because it didn't even cross my mind when people started showing up with M1s that this was going to be a problem, you know?
Yeah, exactly, right? The only time our engineers notice is when something breaks the abstraction layer, and then everyone knows about it. And with GPUs, that abstraction layer doesn't exist — or it sort of doesn't exist. Well, because most people want to write raw to the GPU in its traditional model, right? If you're writing a video game, you don't want any abstraction layers between you and the GPU. Yeah, so people work like that. They get access to one GPU, to two GPUs. They hug those GPUs. Right, right, right. And often they utilize them, often they don't. And so you're getting a problem with the utilization of those expensive resources, and many times we see idle GPUs while other teams, other researchers, are waiting to get access to GPUs. Right, right. So we're doing a lot of work in that sense, providing tools and efficient software and abstraction layers to help customers better utilize their GPUs. Gotcha, yeah. That makes a lot of sense. And I think you kind of mentioned it a little bit, but I think that separation problem is also really, really important, right? Because I'm not actually a researcher myself — I'm more on the clinical side — but I've done a lot of software. And you want to make sure that you're not tainting the work of somebody else, especially on a platform that is not really designed to be shared. So do you feel like you're doing a good job of that? What are the things that you really feel like Run:AI is bringing to the table? Yeah, I think today researchers deal a lot with the infrastructure, and it relates to how they shift their environments and software from one place to another. That's really difficult today. Just shifting your environment from one GPU to another GPU — that could take you, as a researcher, like two weeks sometimes, right? So with containers, and with our platform being built on top of Kubernetes, you can simplify a lot of that, right?
And we're building a more dynamic platform to help them shift their environments, shift their workloads between GPUs, and then use those resources in a smart way. Right, right. Yeah, that's pretty cool. So of course, I am biased towards open source, because I used to work for Red Hat and now I work at a university, but you were saying that you've been pretty heavily contributing to two open source projects? Yeah, yeah, we recently created two open source projects: one for monitoring GPUs, and the other to help the community and people share GPUs and provision GPUs in an easier way. Oh, nice. Yeah, so because we're based on Kubernetes — and I love Kubernetes, I think it's an amazing framework, an amazing piece of technology, and it brings so many good things to the world, right? So we're using Kubernetes and we're based on Kubernetes, so we enjoy what open source created, and I believe in karma. I believe that we got a lot and we need to give back, right? So I'm looking for ways to help the community back and just solve problems for the community. So one problem that we identified is a problem with monitoring utilization. Many times people don't even know the utilization. Well, how bad the utilization is, right? Yeah, how bad the utilization is, and what the utilization even is. IT teams only get requests to buy more GPUs, to increase the limits in the cloud, to have more GPUs — but they don't even know what the usage is. So we created this tool, we call it rntop, and I think it really simplifies monitoring non-Kubernetes environments. The other tool is called genv. For genv, our engineers took a lot of inspiration — except me, I don't do much, right?
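The utilization-monitoring gap described here — IT teams fielding requests for more GPUs without knowing how busy the existing ones are — is easy to illustrate. A minimal sketch of the idea behind a tool like rntop (the query flags passed to `nvidia-smi` are its real ones; the helper functions here are illustrative, not rntop's actual code):

```python
import subprocess


def parse_utilization(csv_out: str) -> list[int]:
    """Parse per-GPU utilization percentages from nvidia-smi CSV output."""
    return [int(line.strip()) for line in csv_out.splitlines() if line.strip()]


def gpu_utilization() -> list[int]:
    """Query per-GPU utilization (percent) on the local machine."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_utilization(out)


def average_utilization(samples: list[int]) -> float:
    """The headline number IT teams usually lack: average GPU busy-ness."""
    return sum(samples) / len(samples) if samples else 0.0
```

Sampled repeatedly across a fleet of machines, even this crude average is enough to make idle GPUs visible.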
We have amazing engineers, and they took inspiration from pyenv and Conda — tools Python data scientists know — and they created genv, and it helps teams of data scientists share GPUs in an easier, simpler way, and provision GPUs in a simpler way. So we just released it like a month and a half ago, and I think it already has hundreds of downloads. Yeah, that's cool. We integrated it into PyCharm and into VS Code, and we're getting a lot of downloads from there; that's fun to see. Yeah, have you talked at all to an organization called the Massachusetts Open Cloud, by any chance? No, actually. So there's an organization that's sponsored by some industry companies — Red Hat is obviously a sponsor, that's part of why I know about it — but BU is also a sponsor, and so is Harvard. And basically what they're trying to do is ask: can they build an open cloud? But really what they want to do is give researchers access to the underneath parts of a cloud, which you can't really get from, like, Amazon, for that much. But in order to get real-world data about what's going on in the under-cloud, they also want to have real workloads. So they have researchers sitting on top doing data science research, and then they have other researchers looking underneath to see what it does to the system. That might be a really good place for you to get involved with, like with genv, right? That's the kind of thing they're experimenting with: how can they shift workloads, say, between clouds, for example? One of the big things they want to do is — Harvard, let's say, has, I don't know, 20 machines, but at any given point, right, at two in the morning on Thursdays, they don't actually have anything running, and so they want to loan those to another organization, but they want enough control that they can bring them back, right?
So that's one of the problems they're trying to solve, and it's kind of the same utilization thing, except it's cross-org. Oh, yeah. Because Harvard wants to give the resources to BU, and then, you know, some later day, when BU's got free resources, they give resources back to Harvard. Because it's a bunch of universities, they don't have a vested interest in protecting their little silos, and then Red Hat gets involved because they want to see how their software performs at that kind of scale. So it just might be a really good opportunity for a relationship. Sure, we would love to do that. We're doing a lot of work in terms of sharing resources between teams, between organizations. Right, and I think the Mass Open Cloud one is an even harder problem — so if you can solve that one, you can definitely back into the within-one-organization problem. Multitenancy, it's always a bit more difficult. Yeah, especially if you don't design for it from the get-go. That's one of the really difficult things to retrofit onto software, at least in my experience. But yeah, so that's really cool, I like it. So, here's Belle Isle. It's not as pretty today as it was yesterday, but it's still pretty. Yeah, and so this is all US territory, but we're like an inch away from the line in the water between us and Canada. Oh man, there are houses on the lake — it's a lake? No, that's a river. It's a river, right? Nice, oh, okay, got it. One of the big ones. That's beautiful, yeah, that's beautiful. But, especially for an American — I don't know, do you live in the US? No, I lived in the US; now I live in Israel. I did my postdoc here in the States. Oh yeah. And so I spent several years here, in New Jersey and all over the place. Oh, okay, all right, all right, yeah.
Not everyone's favorite choice of parts of the US, but I understand. But so you've probably heard the phrase "our neighbors to the north" about Canada — well, Canada's actually south of us here in Detroit. Oh yeah? Yeah, because Michigan kind of curls around and Canada's kind of underneath it, so it's really weird, because you go over the river, heading south, to get to Canada. It's very confusing for most Americans, I think. But yeah, are you in Tel Aviv? Did somebody tell me you were in Tel Aviv? Yeah, I'm based in Tel Aviv. Okay. I'm based in Tel Aviv; we have offices in the States, but Tel Aviv is — have you been to Tel Aviv? No. But what else can I say — Red Hat has a big office there, and that was one of the places where I was like, I need to find an excuse to go visit that office, because I really want to go to Tel Aviv, but I've never been. Of course, KVM is amazing — yeah, KVM came from right there. Red Hat bought the startup behind it. Yeah, can you remember what they were called? Yeah — the founder is one of our investors. Oh really? Benny Schnaider, yeah, he's amazing. Oh, that's cool. He was the first investor in our company. Oh, nice. We love him, yeah. He did a lot of things in virtualization, and we love virtualization and the abstraction there. So, yeah. It seems natural, yeah. It seems really natural, yeah. So, Red Hat has big offices in Tel Aviv, and all the big players, right? Google and Amazon — all of them have thousands of employees in Israel. Israel became like an amazing hub for startups. Yeah, yeah, and I think of it as, like, security and crypto, you know, especially. Security, crypto, and AI. Yeah, that's interesting. Yeah, I mean, now that you say it, I'm like, oh yeah, okay, I can see that. I'll tell you why: because a lot of people are coming out of the army. So, the intelligence units in the army — everyone in Israel goes to the army when they're 18 years old, right?
So, it's actually something I think they should do here, but yeah — you know, your service or whatever. It does good for Israelis, because many people go into the intelligence units and they deal with technology. They're doing a lot of things around cyber and security, right? Yeah, that makes sense. And they're doing a lot of things with AI. Yeah. And then you have a lot of AI in the defense organizations. So they come out of the army, and many of them start companies. My co-founder, for example — he served for many years in the army. He came out, and we started Run:AI. Oh, gotcha. So, yeah, so there it is. Oh, I did the same thing the other day. Yeah, there are a bunch of roads that are closed, and so I was with Liz Rice in the car and she got a nice little tour of this parking lot as well. So, here we go. But yeah, I've actually always wanted to go there, but just have never had a chance. You know, one of these days — we'll see what happens. Yeah, there were like three offices at Red Hat that I wanted to try to get to before I ever left Red Hat, and that one was still left — you know, I had a whole bunch of them that I wanted to get to. That is amazing. You should come in the summer, though. Yeah, yeah. Summer is amazing in Tel Aviv. It's not too hot? Oh yeah, July and August are a little bit too hot, but, you know, summertime — Like, late spring would probably be best for me. Summertime in Tel Aviv is like most of the year, right? Yeah, right, well, that's true. Two or three months of winter, right? Yeah, yeah, yeah. It's kind of like — yeah, I was going to India a bunch, you know, and there was a conference in Pune that somebody wanted me to come and speak at. And I was like, your conference is in August, in Pune. Not the time I really want to visit Pune, you know? I'd much rather go in, like, January. Yeah, yeah, sure. But, yeah. How is Boston right now?
It's very similar here, actually. It's got just as much choppiness — because, I don't know how many days you've been here, but it's been gorgeous all week, and then today it's kind of cool and rainy. Boston's been the same. There was a bit of a cold snap and then it got really nice again, and it'll do that a bunch for the next month or two, and we'll see what happens. But yeah, I like the cold, so I have a hard time with warm temperatures. Got it, got it, makes sense. Okay. But that's part of why I live in the Northeast. So, yeah. So all of your infrastructure runs on Kubernetes, and I totally get, then, why you want to be engaged with the community at KubeCon. But do you also have customers you want to see here? Or developers — are there people in your target market that you feel are at KubeCon as well? Yeah, for sure. Yeah, no doubt. So our customers are, many times, the IT side — and when I say IT, it's the people who are building the machine learning platforms or the machine learning infrastructure, right? Those people are usually engineers, and they know a lot about Kubernetes, and they are building the platform internally. So they are engineers, they know a lot about Kubernetes, and they are here as well at KubeCon. KubeCon is exciting. That's it. Well, it's fun too. Right — but that's not usually enough justification for spending the money. So, that's one thing: customers, for sure. Yes. Yeah, customers for sure. But also, we're doing a lot with Kubernetes, right? We're not just building on top of Kubernetes — our engineers are doing a lot of things internally in Kubernetes. So we've built our own scheduler.
Oh yeah, that's right. Yeah. So why did you build your own scheduler? What did you feel like you were trying to do differently? Yeah, so, you know, the problem of utilizing GPUs relates a lot to scheduling problems. Scheduling on GPUs can sometimes be difficult, right? And for AI, you're running AI workloads which are really compute-intensive, and they're using expensive resources like GPUs. And what we saw is that there are scheduling capabilities known from the HPC world — high performance computing — scheduling concepts from there, from HPC, and scheduling concepts from YARN, from Hadoop. Okay, yeah. So when we built our platform, we saw that many scheduling capabilities from there — like queues and queuing, preemption, fairness and sharing, allocating the resources in a fair way, quota management, advanced quota management, and gang scheduling — I think those things were missing in Kubernetes, and we saw that they are really needed to enable efficient platforms for machine learning. So those scheduling capabilities are really important for AI. So we built them, and we built a scheduler that plugs into Kubernetes environments and brings those advanced, HPC-like capabilities into the AI world. Right, right. And yeah, schedulers are really interesting. I mean, one of the things — it's funny, because you talk about AI, right — is that I don't understand why we haven't seen more AI used to create schedulers, you know, schedulers that are enabled by their own machine learning. Like, I was toying around with an idea for a while: there's a whole organization at Red Hat — there are whole organizations elsewhere, too — that are putting out perfectly optimized tunings for, like, databases on RHEL, for example.
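Gang scheduling, one of the HPC capabilities listed above as missing from stock Kubernetes, is the all-or-nothing idea: a distributed training job either gets GPUs for every one of its workers, or gets nothing and waits in the queue — a partially placed job would just hold expensive GPUs while deadlocked. A toy sketch of that decision (the data structures are made up for illustration; a real scheduler works against the Kubernetes API):

```python
def gang_schedule(job_workers, gpus_needed_each, free_gpus_per_node):
    """All-or-nothing placement: every worker of the job gets its GPUs,
    or nothing is allocated at all and the job stays queued.
    """
    free = dict(free_gpus_per_node)  # work on a copy; commit only on success
    placement = {}
    for worker in range(job_workers):
        # first-fit: pick any node with enough free GPUs for this worker
        node = next((n for n, g in free.items() if g >= gpus_needed_each), None)
        if node is None:
            return None  # partial placement would deadlock distributed training
        free[node] -= gpus_needed_each
        placement[worker] = node
    return placement
```

For example, a four-worker job needing two GPUs per worker fits on two four-GPU nodes, while a three-worker job on a single four-GPU node is rejected outright rather than half-placed.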
But it's like, well, how expensive would it be to actually just run a little machine learning model on the machine itself that's watching the kernel and saying, hey, we're getting spikes like this, why don't we twiddle this knob and try it for a while, right? And then go on to the next one. Oh, that didn't work very well, so let's just turn it back. You know, I wonder why — I feel like we could be using the machine learning techniques we know to modify schedulers based on what's literally happening at the moment, which I always think is kind of interesting. But you have to have the capability, right? Yeah, scheduling is interesting. I think if you look at the history of schedulers — the HPC schedulers, Hadoop, Kubernetes, Mesos, right, they all came with their own schedulers — the scheduler is like the heart of the system for orchestrators, for platforms. So it's an important aspect of how the system works. And scheduling can be done at different layers. You're speaking about the low-level layer, and for sure, I totally agree with you: you can do things with machine learning to optimize certain specific workloads with specific patterns. So you can use AI to schedule, yeah. Like, why do I have to tell the computer there's a database running on it? It can figure that out for itself and then tune itself, is kind of what I've been wondering. And with Kubernetes schedulers, right, distributed-system schedulers are super interesting as well, because you have all these other variables at hand: you not only have the obvious stuff like load, but you also have things like the amount of resource you have changing, right?
As you add machines to the cluster or remove them from the cluster and stuff like that, you have all these other things going on in a distributed-system environment that are way beyond just a simple, single-kernel environment. Do you think it will get to a place where — right now, users are deciding on the amount of resources that are going to be allocated to each workload, right? And it's just totally a mistake. Totally a mistake. Yeah, because this is where we're already kind of doing it, or recognizing it, right? That's what autoscaling is: okay, so the humans can't actually predict in advance what the scaling needs to look like, so we've introduced these autoscalers. But can't you just do that with all things, you know? So yeah, I wouldn't be surprised if that's the way it's going. Yeah, for sure. In terms of GPUs, today there are just basic scheduling capabilities. Well, because the feature set needed in order to schedule the stuff wasn't really there either, right? That's why you had to go and look for ways to monitor the actual utilization — because then it can inform the scheduler whether or not to put something there. Yeah, so right now in Kubernetes, you can allocate just whole GPUs — one GPU, or two GPUs, right? You can't really allocate fractions of GPUs, not like with CPU or with memory, right? And in terms of request versus limit: when you request resources, you have the request value and the limit value. With GPUs, they are always equal — versus with CPUs and memory, where request and limit can be different, and that can provide value in terms of overprovisioning. Well, and can you do that on a GPU? Like, is it— You cannot? Yeah, yeah, you can do it. Okay, well, that's what I mean: on the physical hardware, can you use a fraction of a GPU? That seems odd to me.
I feel like it's kind of on or off. Listen, you can do things like time-sharing. Right, right, yeah, that's true. Definitely. But yeah, NVIDIA is also bringing more advanced capabilities, like hardware fractioning — just splitting up the GPU into hardware pieces. That's different; we're doing more like time-sharing on the GPU itself, and then you can allocate fractions of a GPU. Right, okay, all right, yeah, that makes sense. That makes a lot of sense for when you're deploying your models for inference, right? On GPUs, you don't always need a full-blown GPU. Right, oh, for sure, yeah. Yeah, I would say the utilization of most GPUs is probably not high, even under an AI scenario. What we see is that the average utilization is much below 20%. Yeah, that's what I would say. And 20% is high — usually we see even less than 10%. So that's crazy, because organizations are investing tens of millions of dollars, if not hundreds of millions of dollars, in their GPU infrastructure, and the utilization at the end is like 10%. That's— Right, that's not a good ratio. That's a lot of money. Well, I mean, it's funny, right, because it's kind of a throwback to before virtual machines. The utilization you'd get on straight hardware, on CPUs, was terrible, like 20%, and then even when we started doing virtual machines, you'd start to get utilizations like 60%, 70%. But I mean, that's why containers are so awesome: you can get ridiculous density, and you can overprovision such that if there is even some utilization room, you can shove something else in there really easily, which I think is really interesting. Yeah, yeah, totally, I totally agree with you. Containers are amazing.
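For reference, the whole-GPU constraint discussed above looks like this in a pod manifest: GPUs are exposed as the extended resource `nvidia.com/gpu` (via NVIDIA's device plugin), only integer counts are accepted, and Kubernetes requires request and limit to be equal for extended resources — unlike CPU and memory, where request can sit below limit. Sketched here as a Python dict rather than YAML (the image name is hypothetical):

```python
# A minimal pod spec requesting one whole GPU via the nvidia.com/gpu
# extended resource. Fractions like 0.5 are rejected by the API server,
# and for extended resources the request must equal the limit, so in
# practice you set only the limit.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "train-job"},
    "spec": {
        "containers": [{
            "name": "trainer",
            "image": "registry.example.com/trainer:latest",  # hypothetical
            "resources": {
                # CPU and memory may be overcommitted: request < limit.
                "requests": {"cpu": "2", "memory": "8Gi"},
                "limits": {
                    "cpu": "4",
                    "memory": "16Gi",
                    "nvidia.com/gpu": 1,  # whole GPUs only
                },
            },
        }],
    },
}
```

Fractional sharing, as described in the conversation, is layered on top of this — either by a scheduler doing time-sharing, or by NVIDIA's hardware partitioning (MIG), which splits one physical GPU into smaller devices.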
Right, well, it's funny, because even in my advanced classes — I teach very first-semester, freshman-year students, and I also teach seniors and graduate students, so both ends of the spectrum — but even with the seniors and graduate students, you have to explain it to them, because they didn't live through it, right? So you try to show them: okay, I have straight hardware, right? And then I have virtual machines, and then I have containers, because what we want is for every iota of computing power to get used, right? To maximize — if we spend a certain amount of money, we want to use as much of that as we can. And we don't, in most other scenarios, get very good utilization, right? You know, and I think a big story happening in the world right now is around increasing those abstraction layers even further — the level of abstraction — with the hybrid cloud and multi-cloud story. And that's an amazing story. Right, right. We're hearing it a lot from customers who are building platforms and services, and they strategize around multi-cloud or hybrid cloud solutions. So we're hearing it a lot from customers. That's big. That's big. And it's largely enabled by Kubernetes. Right, right. I believe that all companies and all startups from now on will build their services or their platforms as hybrid cloud or multi-cloud solutions. If I'm building a new service now, I will build it as a multi-cloud service on top of Kubernetes, right? There's no reason not to do that. Right, right, yeah. Because the value is so big. Right. If my service, my platform — for internal use or external use — can run on any cloud, on premises, in the same way, with the same stack, the same platform — that has a huge benefit to customers, right?
To users. Yeah, exactly. Abstracting that cloud infrastructure out of there, right? Right. And it makes you a lot more flexible, but I bet it also brings the utilization problem back. It's like, you had your perfect data center, right? And you've gotten that really high utilization with containers and Kubernetes or whatever, and then you're like, oh, but we want to do hybrid cloud. And so now, like I said, you've just brought that utilization problem back, and you need to optimize for it all over again. Yeah. So it's kind of— It's a cycle we follow, right? Right, yeah. Abstracting, and then improving the efficiency. Yeah. And then you kind of take the next step, or whatever. Yeah. Yeah, that's totally cool. So are you giving any talks or anything here at KubeCon? My guys had — sorry, not my guys, my engineers — they had a presentation yesterday at the AI day. Oh, okay, cool. Yeah. Oh, that's right, you did one of the pre-conference events, right? Yeah, yeah. The AI day. Right. And we had a presentation there on GPU utilization. Yeah. And it was amazing, I think — the entire day was amazing. I really loved it. Yeah. So it was really, really fun. Yeah, it's weird — you know, I've been doing this stuff a long time, right? But I get really fascinated by talks about things that are, like, breaking all the things, you know. And AI has actually been a long-time love of mine. Actually, when I went to college, my grand plan was that I was going to do philosophy undergrad and then go to grad school and do AI by doing CS. Because in those days — because I'm old — people thought of AI as a hybrid of philosophy and computer science. Now they think of it as more like psychology and computer science. But yeah, so that was my original plan.
And then I took a CS class my sophomore year — second semester sophomore year, it was the intro to CS. I was like, oh, I'll just try it out, you know? And it was a cakewalk of a class. I was like, this is so easy. And I was really having a hard time in philosophy, because I don't learn other spoken languages very well, and I had to do Latin, and it was not going very well. And then I was like, I'm just going to switch. And so I switched to CS, and that's where it went. Listen, I'm an electrical engineer — that's my background — and I started a software company. Right, right. So that was fun. I learned so much about how to build a software company and how to create good software, good software products, right? And, you know, when we started, me and my partner, Omri — both of us electrical engineers — both of us knew that we wanted to build a software company, because that's the way to go, right? And the benefits are amazing. You can really change the world with software. So we knew that. We also knew that we wanted to be in AI. We saw the amazing things going on in AI, and it's really a big evolution — it's going to change the world. Oh yeah, it already is. It already is. For sure — we're driving in a car right now. Yeah, I mean, even just navigating maps, right? Auto navigation. It's in there all over the place; you don't even realize it, which is the promise, in my mind, of good computing, good software: you shouldn't even see it. It should just work. For sure. And it's changing the world, as you see already now with autonomous cars and everything, and much more than that. And, you know, we wanted to be part of that revolution and help, right? Right, yeah. And I think — shout-out to Ford, right? I mean, like, this car, right?
It'll do auto parking and auto driving to some level and stuff. And, you know, I have the curmudgeonly aspect too. I actually like driving, you know? I like the control, and I miss having a stick shift. But at the same time, one of the things that really blew my mind about autonomous cars was this GIF somebody had put together to try to explain autonomous driving. And the thing that just blew my mind was that when you have computers doing the driving and there's a stoplight, when the light changes, all the cars start moving at once. Oh, at the same time. Because they can all move at once, right? So it was super interesting, and I never really thought about that — because as humans, there's a delay in our response time, right? Yeah, right. That's so fun to see, right? Yeah. They are all synced with the light. Right, right. So, thank you so much for your time. We are definitely out of time, but really, it was a pleasure having you. Yeah, that was fun. Thank you. Yeah, hopefully you enjoyed the nice little tour. I did, I did. That's amazing. All right, thanks so much.