Good afternoon, everyone. It's great to be here in Barcelona for the OpenStack Summit. I'm Joseph Sandoval, Cloud Platform Manager at TubeMogul. Just to give you a little bit of a level set about who I am: I've been around OpenStack since about the Folsom release, across two previous companies. The first was a retail cosmetics company, where I did some early iterations of private cloud in a data center, primarily for PCI reasons. After that I went to Lithium Technologies, where I ran a team, and if you saw any of the work we did at Lithium, we were featured in a keynote last year for our work with containers and Kubernetes, along with a lot of other challenges we faced in getting to public cloud. So I've had quite a bit of background in trying to get to production with OpenStack. At TubeMogul, I've just been there about five weeks now. I joined a team that has been on a similar journey, which is great, because you'll get some of the experience I had at Lithium as well. Nico Bruce, who could not be here, was going to give this talk, and he gave me a lot of the data points I've picked up since I joined. Especially now that we're marching towards our fourth region going live, I'll hopefully be able to give you some takeaways: when you're running public cloud workloads and you think you have candidates you'd like to migrate in, what are the key considerations that should factor into that decision-making? I would love to say this journey was smooth and easy, but like a lot of things, it's an iterative process, and yet the team was still able to be successful. So hopefully the takeaways here will help you do some planning and save you some of the challenges we faced.
So to get into it, just as a prerequisite: what is TubeMogul about? TubeMogul makes enterprise software for digital branding. It's for companies that want to place ads and be very strategic about how they spend, how they optimize, and get a really quick feedback loop so they can make sure they're targeting the right customer. To give you some scope of how active this platform is: in 2015 there were over 12.6 trillion ad auctions happening behind the scenes. As you're browsing, these transactions are happening right when you hit pages; brands are trying to target their customers. As for another part of our business, programmatic television, we deliver 3 billion ad impressions through our solution. The key thing to note is that all these transactions have to be very fast. When we say we process bids in less than 50 milliseconds, that's really the high end of our SLA; ideally we're way on the other side of it, delivering within just one or two milliseconds. And end to end, from when a customer places a bid, this must include the round trip it takes to go out and make that bid on the platform. So you can really think of it like a stock exchange: companies are going in, they're bidding, and it's all in real time. There's definitely a lot of load put on the system and on the application workload. And over this last year, about five petabytes of monthly video traffic was transacted on this platform, which shows you how successful it is, but also the demands placed on it.
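To make the latency budget concrete, here is a minimal sketch of a deadline-aware bid handler. This is purely illustrative and not TubeMogul's actual code; `request`, `score_fn`, and `network_overhead_ms` are hypothetical names, but the pattern, reserving round-trip time out of the SLA and no-bidding when the budget is spent, is standard in real-time bidding.

```python
import time

BID_DEADLINE_MS = 50  # top of the SLA; ideally we respond in 1-2 ms

def handle_bid_request(request, score_fn, network_overhead_ms=5.0):
    """Score a bid request, bailing out with a no-bid if the budget is spent.

    All names here are illustrative placeholders, not a real TubeMogul API.
    """
    start = time.monotonic()
    # Reserve the network round trip out of the end-to-end SLA.
    budget_ms = BID_DEADLINE_MS - network_overhead_ms

    # Decisioning: audience match, budget pacing, price optimization, etc.
    price = score_fn(request)

    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms > budget_ms:
        return None  # no-bid: a late response is worthless in an RTB auction
    return {"price": price, "elapsed_ms": elapsed_ms}

# A fast scoring function comfortably makes the deadline:
result = handle_bid_request({"user": "abc"}, lambda req: 1.25)
```

The key design point is that the exchange will simply discard late responses, so an explicit no-bid is cheaper than a slow bid.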
As far as the team running this, the folks behind the scenes, you may wonder what the composition is. It's gone through some evolution, but primarily it's operations engineers, the common ones you find with data center backgrounds. We have it broken down by our SREs; we have SEs who do a lot of the bulk, the heavy lifting; and then we have DBAs dealing with a lot of our stateful workloads and the behind-the-scenes pieces that really drive our platform. These individuals are responsible for keeping things performing and making sure we're delivering a platform that helps our engineers serve up their applications. We want to reduce that friction; we want to help them create features that create value for our customers, so customers continue to use our platform. The key thing is that cost is very important for us. We're delivering a platform that is hybrid: we have infrastructure as a service, we have bare metal, and depending on the use case, we decide which path an application is going to take. When we originally put this talk together, there were about 2,500 servers, primarily virtual, but a lot of physical ones as well. That has actually gone up; we're probably now above 3,000, and as we go live with another region, we'll probably be at about 3,500. As far as the core OpenStack team this journey is really about, it's three individuals, two with more of a development background and one with a network background, plus myself and previously my boss. So it was a very tight core team delivering this solution.
And the way we've deployed it currently, to give you some scope of where we're at: it's key for us to have the right proximity, and Amazon was really a key platform for getting there. So we're deployed into Asia-Pac, into the EU, as well as into the US. Right now we're running a hybrid platform as we go through this transformation. We still have some public cloud presence; there are some strategic locations where we're actually running 50-50, with 50% in OpenStack and 50% in AWS. That's the lay of the land, and as we transition, you'll probably see a little more shift to the data center side as we find the sweet spot of how we want to deploy and where we want to deliver our application. This slide may not mean much, but the key thing I want to point out is that this is a high-level overview of how we look at it from the operations side. When we went on this journey, instead of trying to tackle everything and just shift it all into the data center, we decided to focus on some key parts that we thought were the right candidates to go into OpenStack. So we focused on what I talked about a little earlier: the bidding layer, and the layer that serves up the ads when a bid does go through. The key thing to note is that it's highly transactional, latency is key, and there are trillions of packets flowing through the network to support it. Those were the key requirements we had to stay focused on as we did the analysis and decision-making about how we were going to migrate this back into the data center.
The other problem, and I think a lot of us have this problem, and I'd love to say we're much tighter about it, is that we have a polyglot of languages. There's a lot of freedom to choose the technologies you're using, and we're no different. As you can see, there's quite a bit there, and we're starting to get a handle on it, but it was something we had to face. We had to really think about: what are the key parts of this stack? What are the dependencies? So that we had a good idea of what we were bringing into the data center, and there would be no big surprises when we got it into OpenStack. What really got us to this point is that at TubeMogul, the one thing I really like is that we challenge ourselves about all the assumptions we've made, even when we've been successful deploying things, in this case into Amazon. We decided to take a step back and do an analysis, and one core philosophy carried there is trying to do more with less. That really drove some of this decision-making, because the team started looking at what was happening in Amazon and how our platform was performing. Because of the volume of traffic and the latency challenge, trying to deliver something very close to the partners that matter to us, so they could reach it really quickly, was hard. Despite Amazon having that broad footprint, there was still something left to be desired for us. The other part that got really challenging, and that started causing us to examine how we were delivering our application, was the data sets behind it.
This is all the decision-making, the part that really decides where things get served up. We had to really over-provision to get the performance we needed, and that's always a challenge, because then you start to eat into your revenue. That's where we started examining whether this platform was going to scale for us: as you get to a million dollars a month, 1.5 million dollars a month of spend, you start to look at how this is going to grow if the business grows three or four times over. So we really started analyzing our spend. The other part, as I mentioned earlier, was the packets per second: a lot of our choices around instance sizes were governed by that, meaning we had to accommodate the HAProxy traffic, so we actually had to choose larger instance sizes. Those were the drivers that made us ask whether public cloud was where we were going to stay, or whether we should look at what we could do in the data center. And the other challenge, which I think may go away over time, is that a lot of times we bring workloads into the public cloud that are not as cloud native as we'd like them to be. We're trying to get our applications to be resilient, able to deal with some of the variability of being in the public cloud. But when we would have disruptions without any clear root cause, that definitely made it hard to keep our SLAs and meet the requirements of our engineers. So we decided on a strategy, and this was the initial approach. It was, you know what, let's bring it back; there are a lot of solutions out there.
We could go to the same locations Amazon's at, hit three continents, and try to get this done in six months. We're going to move quick, we're going to make it happen, and that's one of the things you'll hear a lot at TubeMogul: just make the shit happen. So we figured, hey, get it done, and then we'll be in Vegas and it'll be a good time. Did that work? Well, I would say we had to go back to the drawing board. This was actually a longer journey than that first iteration: that six months turned into three years. Then we had to decide that some of these workloads work great in OpenStack, and some are better served by an automated bare metal approach. Instead of two part-time guys, we had to get three dedicated engineers, because two part-time guys just wasn't enough. And then, how do we keep this thing going? You start thinking about the lifecycle, the day-two challenges of where this is going. We realized, wow, this is a lot more. And looking at my experience at Lithium, we were on almost exactly the same parallel journey, partly because of where OpenStack was at; there was a lot of rapid development, and people were still figuring these things out. So that strategy is where the team started, and it's more like where you end up. If anything, I would just say: be a little more pragmatic about how you approach this. Set small incremental milestones so you can successfully reach them. So, here are the things you should be asking yourself when you talk about TCO, and even just when considering whether to bring things back into the data center.
As I talked about earlier, we could have made a case to bring everything back, but I don't think we would have been successful taking that approach. We were very selective about the things we brought on board. So consider carefully what infrastructure you should be moving. When you look at the instances you're using in Amazon, you've got to realize you're not comparing apples to apples: the instances are different, the performance is different. The great thing about being in OpenStack is that you can architect the type of hardware you're choosing; you can be very deliberate, especially for something that is very CPU intensive. And on the subject of CPU, you start thinking about things like: can we get more dense with what we're doing? You have to really think about your ratios, how you want to overcommit, and, to be transparent, we had some challenges there. We had to figure out that some of these apps are not that resilient; CPU contention can really affect how they perform. There are other things to consider too, like the OPEX effort, because a lot of times you don't have to absorb that when you're in AWS: you skip the underlay, you skip all the engineering to bring something up. But you have to count those things in. You want to be fair; you want to look at things from all angles to make sure this is the right decision. And when you start looking at your design: are you designing for bulletproof, five-nines uptime, or are there trade-offs?
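As a concrete example of tuning density, CPU and RAM overcommit in OpenStack Nova are controlled in `nova.conf` on the compute nodes. The 2:1 value below is illustrative, not TubeMogul's actual setting; the right ratio depends on how tolerant your workloads are of CPU contention.

```ini
# /etc/nova/nova.conf (compute node) -- illustrative values
[DEFAULT]
# Virtual CPUs handed out per physical core. Historical defaults
# (16.0) are far too aggressive for CPU-bound, latency-sensitive
# workloads like real-time bidding; 1.0-2.0 is a safer starting point.
cpu_allocation_ratio = 2.0

# RAM overcommit; 1.0 means never promise more memory than exists.
ram_allocation_ratio = 1.0
```

With settings like these, the scheduler stops packing instances once a host's effective (physical × ratio) capacity is reached, which is exactly the density-versus-performance trade-off being described above.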
You can think about ways to enable high availability, shared-nothing architectures; there are a lot of ways to tackle it, but these choices will dictate how your TCO works out. And as you're really starting to build up your stack: where is your team going to build these things? How are they going to test them? This is where it comes back to the day-two challenge. I think we're starting to hear more and more about that in the OpenStack community: how do you go about upgrading something, or what if you have a zero-day exploit that needs to be patched tomorrow? How are you going to address that without blowing up your environment and all the workloads running on top? One thing I think is really key, and I'll probably come back to it towards the end of the talk because it was part of the journey for us: how well are you managing your public cloud? How well are you automating your infrastructure? If anything, that's one of the biggest metrics that will let you decide whether it makes sense to be in the data center or whether you should stay in public cloud. And then there are the locations you're looking at. For us, we have two locations in the U.S., one in Amsterdam, and one in Hong Kong, and the thing that came up in our Hong Kong environment was our network provider: costs are a lot different there. These are things you need to consider about where you're deploying to, because they affect the overall cost of getting there.
But if we had to boil it down to the key things to think about, to really simplify this analysis: go after your public cloud providers. That's what we did. It's funny, because there's no shortage of TCO calculators that will tell you Amazon is going to be that much cheaper, or Azure is going to be that much cheaper. I've been with some OpenStack distributions where we ran a TCO analysis that showed it to be way cheaper to be in the data center. And you're really at an advantage, meaning you know your exact costs. You know what your OPEX expenses are. You know everything behind the scenes that they don't. So really challenge them, share what you come up with, and find out whether it really makes sense. Hear it out; it's two different opinions, and then you can make a more informed call that, hey, this decision to go into OpenStack makes sense. The other thing we found important was to keep things very simple. We focused on a subset that was going to be our sweet spot, what was going to help us find success with the workloads we were bringing over. And the other piece, which can be a sinkhole you fall into: if you're trying to feature-match a public cloud, you're not going to be successful. It's very difficult to do that unless you're in the public cloud business; that's just the nature of it. So be very defined about what you're trying to deploy. If you think about it, it comes down to three things: compute, storage, network. And you're trying to address those programmatically via an API so that you can have infrastructure as code. You stick to that.
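A back-of-the-envelope version of that TCO challenge can be sketched in a few lines. Every number below is a hypothetical placeholder, not TubeMogul data; the point is the shape of the calculation, especially that the engineering OPEX you skip in AWS has to be added back in.

```python
def monthly_tco(servers, capex_per_server, amortize_months,
                colo_per_server, engineer_cost, engineers):
    """Rough private-cloud monthly TCO. All inputs are hypothetical
    placeholders; plug in your own numbers."""
    hardware = servers * capex_per_server / amortize_months  # amortized capex
    colo = servers * colo_per_server      # power, space, network per server
    opex = engineers * engineer_cost      # the staffing cost AWS absorbs for you
    return hardware + colo + opex

# Illustrative comparison against a public cloud bill:
private = monthly_tco(servers=600, capex_per_server=8000, amortize_months=36,
                      colo_per_server=150, engineer_cost=15000, engineers=3)
public_cloud_bill = 1_000_000  # e.g. the $1M/month spend mentioned earlier
print(private, private < public_cloud_bill)
```

However crude, having your own model like this, with your real numbers, is what lets you challenge the vendor calculators rather than taking either side's answer on faith.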
That's your baseline, and everything else really falls into place from there. You'll find yourself a lot more successful than if you chase every new as-a-service offering that's rolling out. So, how we got started: small beginnings. The team started with Eucalyptus at the time. They were just trying to go into an environment that was safe, and I would say the one takeaway here is that they weren't trying to take on a workload that would impact the business. But in a sense it did; it did not work out too well for them. There were a few reasons, and I think part of it was just that they were early in the iteration cycle. Then they decided maybe there was another approach they could take, so they did another iteration with CloudStack, and they found some margin of success. Part of it at that time was that they were looking for an all-in-one solution, and when they really got into it, there were some challenges for them with CloudStack. At that point OpenStack was really developing, and that's when they realized, hey, this is where we want to be. And I think this is one of those things where, if you've been in engineering, and this is not even an OpenStack-specific challenge, any time you're delivering software or a product, there's the urge to do things perfectly, and then there's, you know what, I could just get this done. Here there was this ideal of, let's harden our Linux, let's get a Linux that is completely optimized, and let's do the same with OpenStack. The challenge was that yes, they were able to get things up and running, but it was the integration environment that was running on it.
And when things did not go right, because at times things are going to fail and you're going to have to go in and troubleshoot, the mean time to recovery was very long. That really did not enable them to be successful. So if anything, the takeaway was: don't make things so complex that you can't recover quickly. Think about, when things do go wrong, how fast can I recover my environment? This was definitely an area where they had some challenges. So instead of looking at these iterations as absolute failures, there were definitely lessons learned. One thing they realized was about sharing out your lab. That's challenging, because if you're really doing OpenStack right, if you're doing continuous integration, things are going to break. Builds break. And when you have people depending on that environment, they're not going to be very tolerant when you do break it. So keep those things segmented, and don't assume people are going to be okay with it. When I was at Lithium, it was the same thing: we had an environment we were testing on, blowing it up, and other people were depending on it. You don't want to be in that position. The other area that was a challenge for them was block storage. We're using Ceph at TubeMogul, and at Lithium it was very similar: we had a block storage challenge there too, more around quality of service; we wanted real predictability. And here at TubeMogul they had some very similar challenges as well.
So the key thing is, when you get in there, you want to do some real upfront planning. Really understand: are IOPS a key consideration I need to be thinking about? What are my block storage use cases with the applications I'm working with? And ideally you're not making last-minute changes. I'd love to say I haven't done that, but I did it at Lithium, where we chose SolidFire to come in and stabilize something; it will definitely blow up your TCO if you don't think about this ahead of time and end up having to make those changes late. The other thing is that a data center is not agile when you actually deploy one. These things take time. And when you start doing integrated racking and trying to get very modular with the design, you have to be very realistic about the timeline you're going to need. It isn't going to happen overnight; it's not going to be like a public cloud. So you really have to do upfront planning and think about your capacity, so that you don't find yourself locked in without enough room to grow. And the other piece is the complexity of OpenStack. Like I talked about earlier: keep things simple, and make sure you're only consuming what you absolutely need. That will help you be successful when you're deploying, instead of spreading yourself a mile wide and an inch deep trying to support a large environment with all these services. So there was kind of a reboot, taking all these lessons learned and really understanding, all right, we have definitely learned some things.
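On the IOPS question specifically, OpenStack's block storage service (Cinder) lets you cap per-volume IOPS through QoS specs attached to a volume type, which is one way to get the predictability mentioned above. The names and limits here are illustrative:

```shell
# Create a QoS spec capping a volume at 500 read + 500 write IOPS.
# "front-end" means the limit is enforced at the hypervisor.
openstack volume qos create tier-silver \
    --consumer front-end \
    --property read_iops_sec=500 \
    --property write_iops_sec=500

# Attach it to a volume type so every volume of that type inherits the cap.
openstack volume type create silver
openstack volume qos associate tier-silver silver
```

Planning tiers like this upfront, rather than bolting on a storage appliance after a performance incident, is exactly the kind of decision that keeps the TCO math honest.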
The great thing is we failed fast and took those lessons forward. There's been a whole approach, and I think there's still something to be said for it, of choosing a vendor: a lot of companies have come out providing turnkey solutions that can fast-track your deployment. There was some success there, and I think the one challenge with this approach, which the team realized and which we faced as well, is day two. If you have automation and your automation doesn't integrate with your OpenStack, you're in a tough spot, because you can't just stand it up and say, all right, success, our stack's running. Things change day to day. Your cluster changes, you're going to need to patch, you're going to deal with all these things you need to maintain: how do I upgrade? And then, what do I have to do to inject my own personality into the cloud, as far as our public and private keys, all the things that are unique to us? You need automation that really ties into it. So as far as proving that workloads could run on it, it did, but it just wasn't where we needed to be. And finally, at a certain point, out of that maturity, out of all these things we learned, we got to where we decided to use Ubuntu. It met the majority of our requirements for a distribution, and we could put it into a framework where we could serve up the platform in a way that gave us that lifecycle management, meaning we're using Puppet in our infrastructure and we have Jenkins jobs that run.
We want to make sure we're treating it like an application, so that we can test things. That's the key: you want to know that if you need to make any changes, you can test them ahead of time and know your infrastructure is not going to blow up. The thing for us is that this platform is going 24-7; we need to know we can go in and do maintenance and feel confident things are going to work. That gave us the ability to say, all right, we can move forward, we're going into production. And initially, when we went in, the traffic switch went great. The initial footprint was a 40% footprint decrease. As I mentioned earlier, there were a few components we picked out to deploy, and we chose the ad platform piece. The great thing is we got that density, so it was a very good approach. And because we were using bare metal, we were able to decrease a lot of our load balancer footprint, which was a tremendous win for us. One thing that did come up is that a lot of times you feel like you've got a good handle on how your application communicates and behaves, and what we learned is that it's not always that easy. When you start moving things between environments, you realize there are dependencies, and especially when you run in this hybrid approach, you start seeing who's talking to whom and who depends on whom. But for us, it was a good chance to really close the loop end to end on what our application was doing. So we are now running in a hybrid environment, and things are going well.
For networking, we're running VLANs on bare metal, able to bridge across to our OpenStack and all the data services. And like I said, we're in three different locations as of this last quarter, and the great thing is our last location for this year, Hong Kong, is going live. The team has done a tremendous job getting these things deployed, and we feel very confident that this is not the end of it; we actually have plans to move forward to other locations. So what does this all mean? How does it roll up? What can you really take away from it? Well, plan for the learning curve. There are parts of it that are challenging, but the great thing is, if you look at OpenStack over the last few releases, a lot of the feature development is slowing down somewhat, and the focus now is on stability and performance. That's great, because now you have a chance to catch up. So spend the time getting up the learning curve of OpenStack, understanding the services that are key to you and that are going to drive your success. Then have a good design phase; really do some upfront planning. Some of our earlier iterations were very much let's-just-do-this, and I think with a little upfront planning there would have been better requirements gathering and a better understanding of what we were trying to accomplish by going back into the data center. And really look at the makeup of the team that's going to be running this: do they have the right skill sets, and do you have enough time to do this? Because at the end of the day, TubeMogul is not going to be known for running OpenStack. We're going to be known for our real-time bidding platform. That's what's core.
So we need to be able to show back to the business why we're doing these things, and whether it justifies the cost and time we're putting into bringing OpenStack back into the data center. Be clear, like I said earlier, about the features you feel are valuable, and map them back to your applications, your application logic, and your business use cases. Are these things tightly aligned, so that you're going to get the ROI you're looking for, and the TCO is going to be there at the end of the day? Because you can do all the upfront analysis you want, but it's the post-deployment TCO that really tells you, hey, this was the right decision. For us, one lesson, and it's almost like, what new lesson am I going to relearn today, was that when we started switching things and moving them back into the data center, there's a cost for traffic between the public cloud and your data center. So be ready to adapt quickly: understand what's talking to what, because there's often a financial impact if some things are really chatty. Even though we initially focused on the packets per second we were delivering, we also realized there was a lot of bigger data communicating behind the scenes. And like I said, sometimes you think you know everything about your application and how it's going to behave, but we moved things into OpenStack and started realizing, wow, there are a lot of packets happening here, and we hit this error with the IP conntrack table being full. You start relearning lessons about how your application behaves. So just be adaptable, be observant.
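For anyone who hits the same conntrack issue: on Linux it surfaces in the kernel log as `nf_conntrack: table full, dropping packet`, and the fix is to size the connection-tracking table for your packet and connection rates. The values below are illustrative, not our production settings:

```shell
# Check current usage against the limit
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max

# Raise the limit (persist it in /etc/sysctl.conf for reboots); the
# hash table should grow with it, commonly nf_conntrack_max / 4.
sysctl -w net.netfilter.nf_conntrack_max=1048576
echo 262144 > /sys/module/nf_conntrack/parameters/hashsize
```

Graphing `nf_conntrack_count` against `nf_conntrack_max` on your dashboards is a cheap way to see this coming instead of discovering it through dropped packets.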
That's probably the biggest key thing: having a team that can really observe and watch how your application is behaving. You want to give engineers that visibility, create the dashboards and monitoring so they can understand how your cluster is behaving. One of the biggest challenges you're going to face in OpenStack is when someone says, "hey, it's slow," and you have to peel that back: what's slow? Is it your application, or is it OpenStack? These are things you have to think about. So, to answer the question, was it worth it? Well, for us there was a 30% cost savings, and we definitely had a reduction in our server footprint. I talked earlier about the unpredictability of the network; having that end-to-end visibility was important to us, and it really helped us troubleshoot different parts of our stack. And there are probably two things outside of TCO that this journey gave us. One thing I think is great about the OpenStack community is that they've done a great job of putting open source first as a strategy, and for us there were a lot of lessons learned there. In my career I've consumed open source, but was I really part of the community? Was I taking full advantage of sharing in the development cycle, giving feedback, and getting into OpenStack? If you allow yourself, it's a great opportunity, and those habits carry over into all our other tooling as well; now that's our standard. The other piece for us was that we got better at automation and at how we approach our infrastructure.
And so these are things that really made it worth it for us. As you go about this, I would say that aside from the TCO and the gains you might get, the open-source journey itself, as strategic as it is, is definitely worth it; it's another benefit you're going to gain. So I appreciate you listening, and thanks for taking the time to hear the talk. Any questions? Sure. So if I understand your question right: at Tube Mogul it was iterative. There were some early learnings they took on. That's not uncommon; even in the public cloud, people move workloads in and then learn things the hard way. You start realizing, okay, maybe I should think about how something fails rather than treating it like a pet and just trying to maintain it. So for Tube Mogul it was iterative. Somewhere about halfway through that three-year journey is when they really started understanding the challenges of running OpenStack, and the need to run it like an application: to have continuous integration, a tested, proven environment, really running it as infrastructure as code. Over the last year especially, a lot of that knowledge and learning paid off, because that's when the case was made to go global with it. That's what impressed me: they went all in. It wasn't just chipping away at a net-new service; they went after the core platform. But it was only because of that iterative cycle, and those early lessons learned, that we got to the point where we could make that strong case.
Okay, well, no more questions. Thanks for your time.