Everyone's settled. I'll start. So before I start, I'd like to thank Kelsey for basically forcing me to submit this proposal, and Michelle for reviewing it before I submitted it. She gave some awesome feedback. So before we start, there's not a lot of people in the room, so we can do a poll. How many of you are actually running Mesos and Marathon in production? OK, well, everyone here is. So we are the Mesos and Marathon people already. OK, great.

So this talk is about moving from Mesos to Kubernetes without anyone noticing. And the "without anyone noticing" part has an asterisk, because some conditions apply. My name is Mishra. I used to work for a company called Hootsuite. I was on the production, operations, and delivery team, which was basically responsible for building our microservices platform on Mesos and Marathon. And then we migrated it to Kubernetes.

A bit of context: Hootsuite is a social media management tool that lets you manage different social media accounts in one place. And we have things like sentiment analysis, analytics, scheduling, and things like that. We have north of 15 million customers, with 800 of the Fortune 1000 companies using us. Where I work now, I'll talk about at the end; it's not relevant right now. I also work on and maintain a project called Atlantis, which is basically a Terraform workflow tool. My contribution to the Kubernetes community has been being part of the community for a year, speaking at meetups and conferences about Kubernetes and cloud-native tools. My small code contributions have been a bunch of bug fixes. Hopefully, in the future, I can contribute to Kubernetes upstream.

One disclaimer: this talk is not a Kubernetes versus Mesos talk. Both these platforms are great. We already know the platform that's better. I'm just kidding. The agenda for this talk is basically Hootsuite's journey of moving from Mesos to Kubernetes. It's very much a user story. Before we get into the actual implementation details, we'll explore the microservices pipeline, and it turns out the abstractions that we built into the microservice pipeline actually helped us move away from Mesos and Marathon very easily. Then we'll talk about things we did to minimize disruption. Then I'll dare to do a live demo for you, which is actually moving a service from Mesos to Kubernetes, similar to what we did at Hootsuite. And then we'll wrap up with lessons learned and a conclusion.

A bit more context: Hootsuite now is 120-plus developers with 60-plus microservices written in both Scala and Golang. We have two cluster schedulers, Mesos and Kubernetes. Mesos still runs our Jenkins pipeline, and Kubernetes runs our stateless workloads. And we have 1,500-plus servers, all running on Amazon.

So before we even get to schedulers, it's really important to understand how things were before. It'll give you a really good idea of where we were and why we moved to schedulers. So in 2014, we were just starting to do microservices. A developer would ask for a microservice. Suddenly, an operator would appear with five screens. I don't know why they have so many screens. And then you had to actually explain to the operators why we're going towards SOA, why we want to do a microservice-oriented architecture. And then in the end, the conversation would be like, hey, just create me a Jira ticket and you'll get your servers. So minutes later, the developer would just create the ticket.
And then this amazing conversation would start on Jira with a comment thread, like, hey, how many CPUs do you want? How many servers do you want? How much memory do you want? And things like that. And then somehow you'd end up in Confluence. A developer had to write an ops runbook, which is basically, before they actually built a service, they had to define how the service would operate. So you basically hit every other Atlassian tool in your company before actually creating a microservice. This was our way of giving back to the enterprise. So weeks later, the servers would be ready. And the developer at this point is super sad. It's already taken weeks, and all they wanted was a microservice in production. And on top of that, we were doing Scala microservices back then, so they actually needed to install Java, so they needed to write some Ansible, also write some Sensu checks to monitor the service, and then, on top of that, create a Jenkins pipeline to deploy to those servers. So all that work was still to be done, and it had already been weeks. So overall, the situation isn't good. It's not happy days. The whole promise of microservices is basically not happening.

So fast forward a couple of years, to 2016, 2017: a developer still wants a microservice. They keep coming; they love microservices. And at this point, like five minutes later, they're actually able to deploy the microservice to production without talking to anyone. The five minutes part isn't the interesting part. The part that's interesting is the self-serve nature of the platform, and that was one of the keys to how we actually migrated off of the platform as well.

So the microservice pipeline for Hootsuite was basically made up of these big things. One was the project generator, one was the idea of pipeline as code, and then the actual platform, which was Mesos and Marathon. The project skeleton is basically the first thing a developer would interact with. As a group at Hootsuite, we came to a consensus that we wanted to build microservices a certain way. So we basically chose the languages, we chose the libraries, we chose the structure of microservices, and we created a Git repo and committed that to Git. A developer would clone this repo and provide things like the service name, a nice description, the maintainers of the service, and choose a language. And part of that project was also a Go binary. You just run that binary, and it would actually generate your fully functional microservice in Golang or Scala, with a bunch of other stuff.

Another part of this repo was the actual Jenkinsfile. So you can define your whole microservice pipeline in Jenkins using Jenkinsfiles, with stages like build, test, and deploy. And later, when we were actually migrating off of Mesos to Kubernetes, this would be super useful; you'll see how. Once you commit this Jenkinsfile to Git, Jenkins can scan your organization and automatically build your fully featured microservice pipeline with build, test, deploy, and things like that.

In terms of the actual microservices, as I said, we wrote both Scala and Golang microservices. Early on at Hootsuite, we made a decision that we wanted to unify the packaging format, and we decided that, hey, we'll use Docker for unifying that. We embraced the polyglot nature of what our microservices were going to be in the future.
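To make the pipeline-as-code idea concrete, a minimal declarative Jenkinsfile along these lines might look like the sketch below. This is just an illustration, not Hootsuite's actual pipeline; the stage names, branch name, and make targets are assumptions.

```groovy
// Hypothetical sketch of the kind of Jenkinsfile the project skeleton could
// generate; the make targets and branch name are assumptions, not the real pipeline.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
        stage('Deploy') {
            when { branch 'master' }          // only deploy from the main branch
            steps { sh 'make deploy-production' }
        }
    }
}
```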
And we decided Docker was going to be the way we wanted to manage those dependencies. In terms of the deployment files, we also had those checked in as part of this project. So they had things like a replica count, resources like CPU and memory, and things like that. Health checks were already defined as part of it. So this is all generated for you, and this is, again, important in the next few steps. And then we also had the makefile. This is just an ease-of-use thing. To do things like building your service, testing it, and deploying it to dev, staging, and production, we just had simple targets defined in a makefile.

In terms of the platform, we ran both Mesos and Marathon. Some of you actually did not raise your hands, so I'll just quickly mention what Mesos is. Mesos is basically the resource manager that pools all these different nodes together and makes them feel like one machine. And Marathon is the framework that sits on top of Mesos and lets you schedule long-living workloads, with things like health checks and so on. So our setup was pretty straightforward; there's nothing special here. We had three, four masters or whatever, and they were running the Mesos master process and Marathon alongside it. And then we had slaves that were running the Mesos slave process and the Docker containers. The way this worked is Jenkins would just say make deploy for dev, staging, and production, that POST call would come in, and the container would be scheduled somewhere in the Mesos cluster. And that's how we did things.

So now, if you're using schedulers, you probably have more than one service running in there. So to route between these services — how did the actual Mesos routing work? This is where things get interesting, and I'll spend more time on routing and show you how the actual migration happened in the next few slides. So we said, OK, we're going to do this fat middleware approach — or the service proxy, or the service mesh; there are so many names for it now. But we decided on doing that really early on. And we chose nginx as our service proxy, and we chose Consul, which is a tool by HashiCorp, to do our service discovery for us. So it connected all the containers together for us.

So let's say service one would come up on some Mesos slave. We used Registrator to register that Docker container in Consul. That service is now available in Consul, and you can interface with it using a DNS interface or an HTTP interface. And now the way the actual routing would work is, let's say service two wanted to talk to service one. Service two would just do a simple localhost call on a special port called 5040, which we called soa-out. And it's basically a path-based call, right? Right there, it's just a simple curl, a path-based call: /service is hard-coded there, then the name of the service you want to talk to, and the actual endpoint you want to hit. Then nginx would already have this upstream defined using Consul, because Consul is discovering those Docker containers for us. And this nginx router would forward the call to the other nginx router that's actually running the workload. There's also a translation here: we convert the call from HTTP to HTTPS, and it takes the request on a special port called 5041, which is soa-in. So you kind of get the idea there.
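Roughly, the path-based routing just described can be sketched in nginx terms like this. This is a simplified illustration, not Hootsuite's actual config; the service name, IPs, and ports are placeholders, and in practice the upstream block would be rendered from Consul (for example with consul-template).

```nginx
# Simplified sketch of soa-out routing: a caller hits localhost:5040 with a
# path-based URL, nginx picks the upstream for that service, and forwards the
# request over HTTPS to the remote router's soa-in port (5041).
upstream service-one {
    # nodes currently running service-one, discovered via Consul
    server 10.0.1.11:5041;
    server 10.0.1.12:5041;
}

server {
    listen 127.0.0.1:5040;    # soa-out: plain HTTP, local to the caller

    location ~ ^/service/service-one/(.*)$ {
        proxy_pass https://service-one/$1;   # HTTP -> HTTPS translation toward soa-in
    }
}
```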
And then what happens is that nginx router just proxies the call forward to the actual instance that's running. So all this is great. You're like, OK, you actually had a great platform. What did you do next? We spent a lot of time on actually getting this adopted at Hootsuite. So we spent a lot of time writing great documentation and getting teams on board. And guess what — by the way, this graph is the service adoption curve. That's the number of services, and that is the timeline. This is where we did the zero-to-five-minutes thing. You can see immediately everyone wants to use microservices; they want to run them on the scheduler.

So at this point — sorry, that was sneaky — at this point, we have great autonomy in the company. Our service platform is doing really well. Everyone loves us. Developers are loving the whole new five-minutes thing, because before it took weeks. So there was a lot at stake when we said, let's move to Kubernetes. And now you'll ask, why did you actually move to Kubernetes? We actually blogged about it. We wrote a pretty in-depth blog post where we compared all these cluster managers and schedulers and decided on Kubernetes. I'll link it in the slides at the end, so you can go through it. I'm not going to go into the exact reasons here. But basically, it came down to Kubernetes having the right abstractions and primitives for us to build a microservices platform on, and that resonated with us as a business. And also, in terms of the developer tooling and the ecosystem, it's amazing that you pretty much get everything for free.

So it took us four months, and it was three of us. Luckily, I have both of these guys here — Mark and Luke are also here. So if you have any questions after the talk, you can ask them as well. My whole team is here; it's awesome. It took us four months, and we got Kubernetes running in production on Amazon. We told Kelsey; he was super happy. I think he did that — in my mind, he was doing this, basically. So it was an awesome day. It was actually time to move those workloads now.

So now we have all this stuff running in Mesos. We first need to deploy it in Kubernetes, right? Second, we need to figure out routing. You have to route from Mesos to Kubernetes, and also from stuff that's outside of Mesos into Kubernetes as well. And then the developers that want to use the new Kubernetes platform have to adopt it and learn about the platform. There's great Kubernetes documentation, but there are some implementation details about how you roll it out in your company, so you have to basically lay that out for them as well.

So the way we solved these problems is interesting. The Mesos/Marathon platform was composed of these things, right? The project skeleton, the pipeline as code, Docker containers, and then the dynamic service discovery using Consul and nginx. Pretty much everything was reused to move off of Mesos as well. Each of these things was actually important to move off of it. And then we also wrote some great documentation for getting started on Kubernetes. And we wrote a tool called mesos-to-k8s; I'll explain what that was. So the first thing was, once we started looking at these Mesos projects, we had to actually migrate them. We started writing out these Kubernetes YAML files. And how did we do that? We were doing it manually every time.
So when we would do a pull request — I think Luke actually pointed this out — we realized we should just automate this. We're doing this every time; we know what both schemas look like. So we wrote a simple Golang binary that you can download and just run on your project. It would read all the Marathon deployment files and write the Kubernetes YAML files for you, because we knew what the structure was. In terms of the actual deployments, we also added some new simple targets to the makefile that you saw earlier. So we just added deploy-k8s targets for dev, staging, and production. So we basically had both clusters running at the same time. In terms of the actual pipeline, we ended up deploying to both platforms: we duplicated the workload and deployed everything in Kubernetes as well. But the tricky thing is that we still only routed to Mesos at first. So basically, everything stayed as it was for all the dependent services and for the service that was to be moved.

In terms of packaging, the Scala and Golang microservices were packaged using Docker containers, and this is the benefit right here: you can actually move those containers to Kubernetes as they are. You don't have to figure out the implementation details for Scala or Golang, which is really nice — both Mesos/Marathon and Kubernetes understand Docker containers. So that's the benefit there.

In terms of routing, as I said, the workload is duplicated in both places. You already understand how the Mesos-to-Mesos routing works. So basically, we'd spin down the Mesos side, and the Kubernetes side is already running, and it also has that nginx router and Consul. Consul discovers the Kubernetes nodes and makes them available as backends for nginx, and nginx basically does a routing call translation — you'll see how. So let's say service two was calling service one again. It would again do that localhost call, which is pretty much the same for the service. But this time, there's no upstream backend available for service one, because it doesn't exist in Mesos anymore. So it would go to this special backend called the bridge. And this is, again, an implementation detail; you don't have to do this. We have something called the bridge, which is, again, powered by nginx and Consul, and it basically lets you literally bridge the gap between two data centers. For Kubernetes, we kind of went greenfield: we said, OK, we're going to set up a fully functional VPC with its own networking and then spin up Kubernetes in there. So we basically treated it as a different data center.

So what would happen with the request is it would get handed to the bridge. The bridge would say, OK, I'm trying to find a microservice with this name in this data center, and I can't find it. And then with a simple, dumb routing rule, we said everything else should just go to Kubernetes. That was a decision we could make because we understood how our microservices work. So at this point, the request ends up on the Kubernetes side. Here, the nginx router just translates the request into a Kubernetes-conformant request. So at that point, it's just kube-dns kicking in, with the service name, default.svc.cluster.local, and then port 8080, basically. At that point, Kubernetes takes over. The request goes to the Kubernetes service, and the service forwards it using iptables into the pod. So it's pretty straightforward.
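As a rough illustration of what such a conversion tool can look like — the real internal tool isn't public, so the field handling, input path, and exact output shape here are assumptions — a minimal Marathon-to-Kubernetes sketch in Go:

```go
// Hypothetical sketch of a Marathon-to-Kubernetes converter along the lines of
// the internal tool described in the talk. It reads a Marathon app definition
// and prints a minimal Kubernetes Deployment.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// Just the Marathon fields needed for this example.
type marathonApp struct {
	ID        string  `json:"id"`
	Instances int32   `json:"instances"`
	CPUs      float64 `json:"cpus"`
	Mem       float64 `json:"mem"`
	Container struct {
		Docker struct {
			Image string `json:"image"`
		} `json:"docker"`
	} `json:"container"`
}

func main() {
	f, err := os.Open("deploy/marathon.json") // assumed input path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var app marathonApp
	if err := json.NewDecoder(f).Decode(&app); err != nil {
		panic(err)
	}

	name := strings.TrimPrefix(app.ID, "/") // Marathon IDs look like "/my-service"

	// Emit a minimal Deployment; a real tool would also generate the Service,
	// health checks, environment variables, and so on.
	fmt.Printf(`apiVersion: apps/v1
kind: Deployment
metadata:
  name: %s
spec:
  replicas: %d
  selector:
    matchLabels:
      app: %s
  template:
    metadata:
      labels:
        app: %s
    spec:
      containers:
      - name: %s
        image: %s
        resources:
          requests:
            cpu: %gm
            memory: %gMi
`, name, app.Instances, name, name, name, app.Container.Docker.Image,
		app.CPUs*1000, app.Mem)
}
```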
And on the other side, service two did not notice that service one got moved, because it's still doing that localhost, path-based call. It didn't have to know that the service is now in Kubernetes. So there's nothing special on the service side. And this is where service meshes and service proxies actually shine: the applications can be super dumb, and the mesh can do the crazy routing for you.

In terms of rollback, this is interesting. We had it so that the routing, as I said earlier, would always prefer Mesos first. So in this case, if something went wrong, we would just spin these workloads back up in Marathon, and all the routing would still work. We would just end up hitting the Mesos side again, basically.

So how did service calls go out the other way? Let's say service one is running in Kubernetes, and it wants to call some foo service, which is running on EC2 or in Mesos. The way routing worked is we basically faked the Kubernetes service. We created a Kubernetes service with the name foo, and we forwarded all that traffic to a special port, 5040, which would go to the local nginx router. And once that call was made, the request would get forwarded to the nginx there. Then we had this amazing glorified regex, which Mark wrote, which would just translate the call into the path-based call again. So the request would go out from nginx looking like a path-based request again. It would go to the bridge; the bridge again knows how to route this, and it would route it to the actual foo service. That's how we did things. And this part was kind of manual, because you had to create a list of all of a microservice's dependencies beforehand.

In terms of the project skeleton, which is basically where a developer starts their microservice journey, what we did is we had a branch for this — basically a Git branch that did both the Kubernetes and the Mesos things. And once we were pretty confident, we basically deleted the Mesos side. We wanted to make sure that all the new services getting spun up would actually end up in Kubernetes. So we just merged that branch, and boom, all the new services were now getting created in Kubernetes. We had to be really careful here, because we didn't want to create technical debt and have to move those services again.

This is, I think, the most important part of the whole presentation: the documentation. Whenever you bring in something new, you really need to write documentation that's readable, that's something people can follow. So we created a doc that would basically onboard you onto the new platform: how to create a service in Kubernetes, how do you call it, how does logging work in Kubernetes, how does alerting work, and things like that. On the other side, we also created a doc for migrating. And you see that warning right there; it says, talk to us first — which is basically our team — just so that we can onboard people to the new platform and give them tips on how to migrate their service. In terms of the actual migration implementation details, we wrote everything down in the doc — literally the bash commands you have to run, all the stuff I just talked about — so they can just follow through and do it if it's a simple service.
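Coming back to the outbound-routing trick for a second: one possible way to sketch that "fake" Kubernetes service (the talk doesn't show the actual manifests, so the names, IP, and ports here are assumptions) is a Service without a selector plus a hand-managed Endpoints object that points at the nginx router's soa-out port.

```yaml
# One possible way to "fake" an external dependency as a Kubernetes Service:
# a selectorless Service plus a manually managed Endpoints object that sends
# traffic to the nginx router, which knows how to reach the real foo service.
apiVersion: v1
kind: Service
metadata:
  name: foo              # the external dependency's name, as in-cluster callers expect it
spec:
  ports:
  - port: 80             # what callers inside the cluster hit: http://foo/...
---
apiVersion: v1
kind: Endpoints
metadata:
  name: foo              # must match the Service name
subsets:
- addresses:
  - ip: 10.0.2.15        # placeholder: address of the nginx router / bridge entry point
  ports:
  - port: 5040           # soa-out on the router; traffic is forwarded here
```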
All right, so now we come to the important part, which is the actual live demo. Let's all bow our heads to the demo god, which is Kelsey, here, right? OK, let's do this. So here I have a three-node Mesos/Marathon cluster. It's very low-scale — this is not how we run stuff back home. And then Kubernetes is just a one-node cluster. And then we have Consul. So that's Marathon; nothing's running there. That's Consul; there are only a couple of services there, which are the Consul service itself and then the Kubernetes services that are being discovered dynamically. So we can discover that Kubernetes node dynamically.

OK, so in terms of the actual microservice we're going to migrate today: it's called KubeCon 2017. It's a simple Golang service. I'll just show you the code; there's nothing exciting here. Basically, what we're doing is returning a response with a random name that we generate, plus the host name of the instance that's running it. So we can know where the app is running; it's clear to us. OK, so let's go back here. I've already built a Docker image for this, so we don't waste time. And we won't be using any CI/CD pipeline — we won't be going through Jenkins and stuff, because it just takes longer.

Right now, we have the Mesos deployment file. If you look at the deploy folder, it has the Mesos JSON file. Let me just pull that up for you. Yeah, it's a simple Mesos JSON file that you probably know. And then we have a few make targets here. We're just deploying to production, one environment. So we just say make deploy production, and that should actually spin this up in Marathon. Yeah, it's deploying that now. And we have two healthy instances of this app running. And Consul has automatically detected these new containers; they register with the http service tag and run on Mesos masters 1 and 3. So that's all working. This is using Registrator, basically.

OK, that's all good; that all works. Let's see if the actual routing works inside of Mesos. So let's go to a Mesos master and try to hit the service using that path-based thing. If I go up here, hopefully I'll find it in my history. So here I'm doing a localhost call on 5040, as you can see. /service is static here, and then the name of the service is kubecon2017. When I do this, you can see the routing is actually working. It's responding with some random names and the host name. And what's special here are these headers called X-Skyline-Router ingress and egress. We just inject those in nginx. Skyline is the name we call the framework internally. And here, we're always preferring the local instance, since I'm on master 1 and this container exists here, so I'm not going out of the machine; I'm just hitting the container that's on it. You can do that with simple weighting on the nginx backends, basically. So that all works; routing works from Mesos. This curl call is acting as the dependent service that's hitting that kubecon2017 service — the dependent service that shouldn't notice anything, basically.

So now we've set this up. Now we'll actually start doing the migration, which is the interesting part. Let's go back here. The first thing we'll do is translate the files — basically write the Kubernetes files. So we'll use the mesos-to-k8s tool. I'm just faking this; this is not the actual tool, because I don't work at Hootsuite anymore, as I said. It's just a simple Go binary that outputs stuff. It's just going to say, hey, I just generated the Kubernetes folder for you.
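For reference, a Marathon app definition for a service like this might look roughly like the following. It's a reconstruction for illustration — not the actual demo file — and the app ID, image name, ports, and resource numbers are assumptions.

```json
{
  "id": "/kubecon2017",
  "instances": 2,
  "cpus": 0.25,
  "mem": 256,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "kubecon2017:latest",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 8080, "hostPort": 0 }]
    }
  },
  "healthChecks": [
    { "protocol": "HTTP", "path": "/", "portIndex": 0 }
  ]
}
```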
I'm just moving a file in the background, basically. I'm being super transparent with you — not faking anything, by the way. So here we go. Here's the Kubernetes YAML file. Let's just look at that too. Yeah, it's pretty simple: we have a Service that we're creating, and then also the Deployment. In real life, we actually used Helm charts, but for demo purposes I'm just generating YAML files.

Next, we'll actually add the make targets. If you go to the makefile here, we'll just uncomment this, which is deploy to k8s. I'm just doing a kubectl apply. In real life, as I said, we used Helm charts, which basically take certain variables that you define for your service in YAML format and inject them into the Helm chart. Nothing special here — we just run make deploy k8s. So now it's created the service in Kubernetes. We do kubectl get pods — here we go — actually, let's get pods and services here. So it's created the kubecon service, and also the actual pods for that service. Again, two instances were created. At this point, if you look at the routing, nothing's going to Kubernetes yet; we're still preferring Mesos, because this app still exists in Mesos.

Now this is the nerve-wracking part. This is where we scale this thing down to 0 and wait. I'm on Wi-Fi, so please, Wi-Fi, don't screw up. So that's gone. And in Consul, we see the service go away. The Consul local DC says, OK, I don't have this service anymore. Now, if you go back to the routing here, you see the ingress is going to Kubernetes now. So this moved seamlessly. You might see some upstream errors depending on how you shut down containers and things like that. You can do some smart retries for that, but for demo purposes I wasn't doing it. So if you watched this right as it switched over, you'd see some upstream errors, 502s or something. But here you see ingress is actually going to Kubernetes now, and the pod name is basically the host name for the container.

OK, so let's say something really bad happens — as soon as engineers see error rates go up and things like that, you would just scale this back up again. It's like, well, just get me back to where I was. So scale this back to 2 again. Yeah, the healthy containers are here. Let's go here — KubeCon, again this service is available, and you see we're back on Mesos. So basically, that's the rollback mechanism. That's what I have for the demo. Let's go back to the presentation. Kelsey is still chilling.
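The generated Kubernetes manifests for a service like the one in the demo might look roughly like this — again a reconstruction rather than the actual generated files, with the name, ports, and replica count chosen to match the Marathon sketch above.

```yaml
# Roughly what the generated Service and Deployment could look like for the
# demo service; names, ports, and probe path are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: kubecon2017
spec:
  selector:
    app: kubecon2017
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubecon2017
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubecon2017
  template:
    metadata:
      labels:
        app: kubecon2017
    spec:
      containers:
      - name: kubecon2017
        image: kubecon2017:latest
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080
```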
In terms of the migration results, we moved 20 services in one and a half months, with three people: Luke being the technical lead, and then two interns who basically moved most of our services — which is not the best; I wouldn't advise that, but yeah. Things were going really well, but sometimes things don't go that well, and I'll talk about what went wrong. Things weren't as seamless as they seem. We had two outages that we remember. One was what I call the bad config: when we moved these services to Kubernetes, we were using Kubernetes environment variables to inject the config based on the environment and things like that. And one of those variables was basically how the external routing works — like the fake foo service and stuff. We basically misspelled it, or something like that happened; I don't exactly remember. And then we couldn't route from the service that was inside of Kubernetes to the outside. It was as simple as that — just a human error. The second one was the classic security group whitelisting error, which happens all the time in Amazon. Say a service in Mesos was using an RDS database, and we had security groups whitelisting it to that RDS database. We moved it to Kubernetes, forgot to whitelist the database, the calls started failing, and we rolled it back. So yeah, two big things that caused a decent amount of impact, but overall it was pretty seamless.

So what were the lessons we learned? This is where I feel the technical implementation details don't matter so much, but the high-level stuff does. Always choose the least important service first. This is pretty common and pretty simple to do. It's important because you want to make sure there's no business impact when you do these big transitions — there are so many things you can't foresee, and starting small helps you debug them. Always have a rollback plan. In our case, things went wrong a bunch of times and we had to roll back a bunch of times, so having that Mesos fallback was awesome. Write down the actual deployment pipeline: sometimes you have to visualize how your deployments work, because there might be different flavors of services getting deployed. So it's nice to write it down and then figure out where you can abstract things and where you can inject things to keep the disruption small. In terms of documentation, it should be readable by humans. Don't just do a brain dump of "this is how I did it and these are the bash commands" — explain why you did it. This is really, really important, especially when you're having interns go through and do the work, right? Be pragmatic — I think this is another important thing. Usually you won't get services that are all built the exact same way. Sometimes you actually have to go in and write some code, and it's OK to do that. There's going to be some special service — we all have them — and you can just go in and handle it. It's totally OK.

The part I feel is most important is minimizing disruption. I've said this multiple times, but I can't emphasize it enough. This goes for both the services and the people. The service part is basically making sure the dependent services aren't seeing any errors and things like that. But for the people, you can't be asking developers to do these migrations, to go ask their product manager for a bunch of story points and stuff. They don't like that; they already have stuff to do, right? So you want to make sure that the disruption to their workflow is also minimized. We focused on that a lot. We made the new tooling similar to the old tools, which made the transition easy, and that really helped with adoption of the new platform. It's really easy to tarnish the image of a new tool. You bring in Kubernetes, you move a service, and it causes an outage — everyone's like, oh, Kubernetes causes outages. And it won't be Kubernetes' fault; it would be something else. You really want to make sure that image is maintained.
And that way you can drive adoption, basically. So in terms of links, I have a link to that blog post about why we moved to Kubernetes. I also have a link to the abstraction we built on top of Marathon. I think that's important, because we defined everything in YAML as you saw in the slides, but in real life Marathon uses JSON files, so we abstracted that out behind an API. And then a link for Consul, which basically helped us bridge between the infrastructures on both the Mesos and Kubernetes sides. So thank you, really appreciate it. Appreciate you all coming out. This is the last thing — where I work now, which I said I'd talk about at the end: I'm joining HashiCorp as a developer advocate. I'm super excited to meet you all and talk about HashiCorp tools. Thank you. Do we have time for questions? I don't know. Yeah, here we go. We have time for questions.

The question is about the infrastructure side of the migration — how did we go from a Mesos cluster to a Kubernetes cluster? So in terms of the Mesos side, that infrastructure was basically built using Terraform: three master nodes, some slaves, some Consul nodes, and so on. That was the backbone, basically. We used Terraform to define all the Mesos master stuff, we used Packer to build the images, and we used Ansible to do pretty much all our config management at Hootsuite. The masters were in one auto scaling group, the slaves were in another, and they were imaged, so when a new node came up, the slave could just add itself to the Mesos master and it would just work.

On the Kubernetes side, we took, as I said, a greenfield approach. We created everything all the way down from the VPC to the actual nodes, and again we managed everything ourselves using Terraform. Everything was in auto scaling groups — the API servers, the worker nodes, even the etcd nodes; that was tricky. Hopefully we eventually blog about it; we should. What was also interesting is that we generated certs for every component of Kubernetes using Vault — basically dynamic certs for every component. That took us a long time; it took a while to get to a production-ready Kubernetes cluster managed by us. And we used an overlay to do the networking — we used Flannel to network between the Kubernetes nodes. Now EKS, Amazon's managed service, is out, so we'll see how that goes. I'm sure these guys will make the right decision; it's up to them. OK, anyone else? Everyone's happy as well? OK, great. Thanks, everyone. Really appreciate you spending some time here.