And today's theme is infrastructure as code, and I have two of my favorite guests on the show. To be fair, all of my guests are favorites, but these are more favorite. Pavel is Senior Product Marketing Manager at Akamai, and Rob Hirschfeld is CEO and co-founder of RackN. It's great to have you folks on the show; it's great to see you both on the same screen, actually.

Thanks for having us. Swap, it's a pleasure. Thank you.

This is a topic, Rob, that you and I have talked about before; we actually ran a whole series on it. So I want to start with you. Today we are not talking about IaC from a general developer's or team's perspective; we are looking at it from a vendor's perspective. So if I ask you, what does IaC mean for RackN?

Oh my goodness. Infrastructure as code is the very core of what our product and our mission are about. Infrastructure as code, for us, is this idea that we can turn automation into software: that we can provide development-like processes, with modularity, composability, reuse, and the ability to distribute and version-control automation, and by extension infrastructure, all the way down to that level. Without really strong adherence to those core principles, we couldn't write automation that is reusable across our customer base. And that is fundamentally where RackN delivers value: by helping companies not have to write automation over and over again, and instead share and reuse and collaborate. So yeah, I can't imagine a more important topic for how RackN helps our customers.

You basically summarized the discussion we're going to have today. Well, let's unpack it. And Pavel, if you look at Akamai after the acquisition of Linode, you folks do so many things at scale. So if I ask you, how would you define IaC from Akamai's perspective?

From our perspective, I'd say it's certainly critical to our own operations. Remember, we have something like 440,000 to 450,000 servers. If you've ever taken a NOC tour, you will see the level of automation, not just in infrastructure but in operations, monitoring, all that stuff. From the other side, from a product standpoint, like Rob mentioned, it's critical. Customers, whether they're deploying SaaS solutions, media, e-commerce, all of this needs to be managed. This isn't just some monolith, right? Your favorite OTT provider isn't just an S3 bucket with a web server in front of it. There are thousands upon thousands of VMs, serverless functions, containers, buckets, databases, and all of that has to be managed. So from our standpoint, we try to fit in as much as possible with the standard tools people are using, right?
You want to make sure that if your customers are using Consul or Nomad or whatever, you're fitting in there for every piece, because what we see is a million different pieces and a million different technologies among a million different providers, and that's increasing, not decreasing.

So if you look at IaC and its complexity: does IaC reduce some of this? Rob, if you remember, we talked about this; complexity is not going to go away, we have to learn to deal with it. So as you're talking about the whole sprawl, what role does IaC play in reducing that complexity, or is it one more thing teams have to worry about?

It is something more you have to worry about, but the other alternative is what? The other alternative is managing all these things yourself with terminals, with bash scripts that log in and check everything. That is not going to work. So yes, there are certain things you want to make sure of: that you play nice when you're deploying things, and that when you're thinking about what tool you want to use, it is going to work across your whole environment. Is it going to work for all the things I need, or is it a point solution? You can help reduce the overall complexity that way. But we are in a world where you have a ton of infrastructure and it's only growing, so hold on.

Pavel, you make a really important point for me, and something that RackN started with when we began our journey, which was acknowledging that the complexity and the tool sprawl are real: you aren't going to fight complexity by ignoring the fact that you have multiple vendors or multiple tools. That's true and normal. There are times when we have a tool that works well, and then an innovation comes along and we supplement one tool with another, and they overlap, and that's normal; ignoring that is actually standing against innovation. So part of what we have to do here is embrace the fact that there is a lot of heterogeneity in the infrastructures we're building, and that that heterogeneity is not working against us. That, to me, is one of the core things people need to recognize about infrastructure as code: it establishes a way to cope with the fact that things are constantly changing, that there is drift, that there is sprawl. Instead of looking at the infrastructures we've built as wrong because they have multiple vendors or multiple techniques or a lot of different tools we have to connect together, the big aha for me in our journey was recognizing that that is normal, that dealing with that type of variation and drift and configuration change is normal. And instead of trying to undo it, we built defenses for it. The reality is that those defenses add complexity. What people need to remember is that complex systems are actually more resilient than simple systems, because they have those defenses built in. Complexity itself is not bad; it just carries more cognitive load, and we have to work to defend against it. That's the trade-off.
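Most infrastructure-as-code tooling turns the "defenses for drift" Rob describes into a reconcile loop: compare the state declared in version control against the state the provider actually reports, and surface the difference. A minimal, illustrative Python sketch of that loop, where `fetch_live_state` and all the resource names are hypothetical stand-ins for a real provider API:

```python
# Drift-detection sketch: diff the declared state (normally a versioned
# file in Git) against what the provider reports right now.
# `fetch_live_state` is a hypothetical stand-in for a real provider call.

DESIRED = {
    "web-1": {"size": "standard-2", "region": "us-east", "tags": ["web"]},
    "web-2": {"size": "standard-2", "region": "us-east", "tags": ["web"]},
}

def fetch_live_state() -> dict:
    """Pretend API call; imagine web-2 was hand-edited in a console."""
    return {
        "web-1": {"size": "standard-2", "region": "us-east", "tags": ["web"]},
        "web-2": {"size": "standard-4", "region": "us-east", "tags": ["web"]},
    }

def diff_state(desired: dict, live: dict) -> list[str]:
    drift = []
    for name, want in desired.items():
        have = live.get(name)
        if have is None:
            drift.append(f"{name}: missing (declared but not running)")
        elif have != want:
            changed = sorted(k for k in want if want[k] != have.get(k))
            drift.append(f"{name}: drifted fields {changed}")
    for name in sorted(live.keys() - desired.keys()):
        drift.append(f"{name}: unmanaged (running but not declared)")
    return drift

if __name__ == "__main__":
    for line in diff_state(DESIRED, fetch_live_state()) or ["no drift"]:
        print(line)
```

Real tools layer policy on top of this loop (alert, auto-correct, or block deploys), but the core defense is the same: the declared state is the source of truth, and variation gets detected rather than ignored.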
How much adoption of IaC are you seeing today? Has it become the de facto default, or are teams still in different phases of understanding it, or is this not a problem they have to deal with at all?

Across our customer base, people are in different phases of it. And to Rob's point, somebody who thinks about this and goes, okay, I have a complex system, I have all these services I need to deal with, maybe I've got different regions and different product lines in my offering: those are the folks whose complexity has almost twisted their arm into doing it. Because if you do have all those product lines and all that infrastructure and you are growing, you're probably not going to pull it off with some monolithic, very manual approach. You're just going to flame out, because you're going to get a customer that says, hey, I need this much capacity, or you're an e-commerce shop with Black Friday coming up and you have to scale up and you can't. You just can't do that manually. So it's common among the folks who are pushed into it. Increasingly, though, I have seen some of the smaller companies going, no, we're starting this way, and that's usually folks who may have gone a round or two trying to do it the non-IaC way. The big folks have to, and increasingly we see the smaller startups saying, I'm going to do this, because we aim to be successful, and when we scale, when we need to deploy other stuff and need these complex systems, we'll need all the resiliency that's built into it. I'd rather start building the somewhat complex thing now than try to retrofit it later. So it really varies, but it's becoming very important, and a lot of folks are literally asking for different providers and different plugins, more so than in the past.

That makes a lot of sense to me. I think what Pavel is talking about here is a core internal process that companies need to have. When you were saying manual processes, what I was hearing was a person: what fits inside a person's brain and what they can do. What you're describing, and what I think infrastructure as code at its best is about, is recognizing that we don't want to have things living in people's brains and then put those people into our critical dependency path. It limits scale, it leads to burnout, it causes all sorts of security and performance problems. So what we're doing when we eliminate manual processes is really eliminating humans as the bottleneck in our systems' growth and scalability. A lot of what we talk about with infrastructure as code is putting things into Git, where there is visibility and transparency. But Git, and source code control generally, is not the point; the point is that you're putting things in a place where there is visibility, where we have a way to collaborate and have multiple people work together, where there are checks and controls before automation or configurations are changed. Fundamentally, every time a person becomes a touch point or a bottleneck, or something inside somebody's head is the source of truth, that is a place where we're limiting our ability to scale our organizations.
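Rob's "checks and controls before configurations are changed" maps onto the plan-then-apply gate that most IaC tools implement in some form. A small sketch of the pattern, illustrative only and not any particular tool's API: compute and print the proposed changes, and refuse to act without an explicit approval that, in practice, a CI job would grant only after a reviewed merge.

```python
# Plan/apply gate sketch: print the proposed changes for review, and
# only mutate anything when --approve is passed (e.g. by CI after a
# pull request is reviewed and merged). All names are illustrative.
import argparse

def plan(desired: dict, live: dict) -> list[tuple[str, str]]:
    actions = [("create", n) for n in desired.keys() - live.keys()]
    actions += [("update", n) for n in desired.keys() & live.keys()
                if desired[n] != live[n]]
    actions += [("destroy", n) for n in live.keys() - desired.keys()]
    return sorted(actions)

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--approve", action="store_true",
                        help="apply the plan instead of only printing it")
    args = parser.parse_args()

    desired = {"lb-1": {"plan": "small"}, "web-1": {"plan": "medium"}}
    live = {"web-1": {"plan": "small"}, "tmp-1": {"plan": "small"}}

    actions = plan(desired, live)
    for verb, name in actions:
        print(f"{verb:8} {name}")
    if not args.approve:
        print("dry run only; re-run with --approve after review")
        return
    print(f"applying {len(actions)} change(s)...")  # provider calls go here

if __name__ == "__main__":
    main()
```

The point is exactly the one Rob makes: the gate takes the change out of one person's head and terminal and into a visible artifact that others can inspect before it lands.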
And that's really the first principle people need to come back to when they sit down and look at infrastructure as code: we're not trying to eliminate jobs. Usually the operators who do this find they're adding a lot more value to their systems. But we are trying to eliminate what some people call the bus factor, or the lottery-ticket factor, depending on how you like it framed: the idea that no person in the organization should be so indispensable that if they disappear or are unavailable, the systems go down or become unmaintainable.

That's a huge point, and I'll expand on it in the security sense, because the other thing about checking things in is libraries and reviewing them. You can say, hey, it's open source, everybody's looking at it; but lest we forget Heartbleed, from not that long ago. Open source is awesome, I love it with all my heart, and there are people out there finding bugs, but that's not enough for production code. We've seen a lot of these problems; in fact, we have a scanner that looks for this stuff, which is why we know you shouldn't just take random libraries and go, oh, this is a great library for whatever. Centralizing that in a way where it can be reviewed, where it isn't just, oh, Swapnil over there has this awesome algorithm for sorting, but he's the only one who knows it: that's the kind of thing you get away from. When it's standardized, and people can look at it, you actually know it's secure, rather than pulling something from some random repo and going, oh, I'm sure this code is fine. And you laugh, but with all the lines of code in even a moderate library, who's going to read through 50,000 lines of library code? That central repository is very important for that aspect as well.

I would add to that, as a vendor, and you and I are both vendors here: even with an open source library, if you find a bug you still need somebody to fix it. Very few people, even if they can read the code, could actually fix it and then propagate that change; and with a well-managed open source library, there is no private fix unless you build your own copy of it. So somebody has to produce the fix and propagate it through all the gates, and there's a lot there. Having a vendor as a partner enables you to get those changes and make those things happen. But another piece of this that people forget is that when you're doing infrastructure as code well, you actually have artifacts that you can share and get help with. A big part of what we do when we support customers is listening and understanding what the customer has done and what their challenges are. When a customer can say, these are the scripts that were applied, this is the configuration that was applied, and they have access to those things, we can help them so much faster. The times we hit those heisenbugs, those hard-to-troubleshoot things, are when, and we just had this happen, there are two teams, and somebody comes to us saying, your system's broken, your system's broken, and we're saying, we think it's the networking; can you get them on the phone?
And it took a series of phone calls to get to the right person who understood the configuration changes. It would have been much easier if they had said, here's how our switches are configured, and we could have pointed to the line of code in that configuration. So with infrastructure as code, I talk about collaboration all the time, and it's usually fuzzy and not specific. What I love here is that we are being incredibly concrete: infrastructure as code creates collaborative environments within your teams and with other teams, because it creates a place where you can look at how things were done, share them, talk about them, and recreate them. That's what collaboration ultimately means.

Right, versus: how do you even know the state of something? How do you know how many machines were deployed or running at any one time? This is almost a declarative way of debugging and tracing things.

I know, from both of our perspectives, that if you're dealing with a service and somebody says your service is down or broken or not working right, the first battle is finding out what they did. And if the answer is, well, Bob, who's on vacation at the moment, runs a script that's on his desktop, and we think sometimes he tweaks it by hand...

That's exactly the behavior we argue against, but those things happen to us all the time. It's sadly a normal part of support. You have to have tools and processes that make that behavior less desirable, or better yet, harder than the infrastructure-as-code process.

Well, they're usually doing that to solve a problem, right? They're doing it because they don't have a tool. Nobody wants to write a script to monitor something, or to check whether something fell over and kick it to restart, or whatever they're trying to do. Nobody wants to do that. They do it because they don't have some other mechanism for it. So that's a gap right there. If it's a script in a shell on somebody's laptop, then it's, oh, sorry, I don't know Pavel's Linux password, so we're not going into prod tonight. That's probably not a great way to run things.

This is a good example to me. You're running a global infrastructure that's absolutely core to doing business, to keeping the internet up. That means you're going to have controls and governance, and operators are going to have reviews and checks and ways to do that. That's infrastructure as code. It's just like developers reviewing each other's code and having gates inside their CI/CD system; operations teams working in an infrastructure-as-code format have the same processes. Nobody makes a change without a review on it, especially to a production system. You have to build those checks in, and people have to be able to say, yes, it's really inconvenient sometimes that I can't make a production change without your review, but that's governance and compliance. The organizations that do that are the ones that pass audits and sleep better at night. The other ones are always worried that somebody is going to misconfigure a file and take down their data center, which has happened.

I completely, completely understand your point and agree with it, because especially for us, being PCI, SOC, and so on compliant, touching anything can have knock-on effects, right?
For PCI, if something's not encrypted on even one hop, the whole thing is no longer compliant. And especially with PCI DSS 4.0 coming out, there's a lot of new stuff there, including around libraries. So you're right: oh, I can't just make a change in production? Yes, you cannot, and you never will be able to without a bunch of people signing off. And yes, it's slower, but going back to your point about the complexity of a system, there are so many dependencies, especially with these platforms that people run their stuff on. They expect you to be encrypted, they expect you not to write things to disk, all of that, and one single mistake can be bad. So slower is sometimes better here, I think. Maybe in a game, sure, fail fast; but you can't fail fast if you're an airplane manufacturer.

I think people misrepresent what slower means for production. When we talk to our customers, a lot of them are still on what we call the break-fix merry-go-round: they think they're going to accomplish certain tasks and priorities, but instead something breaks and they spend their time fixing it. Most operations teams without this discipline chase from issue to issue, crisis to crisis, security patch to security patch. You brought up Heartbleed; that's a great example. We had customers go offline from new work for two or three months because they were busy updating their systems for that one security vulnerability. Slower, from an ops perspective, actually translates into more predictable; it translates into fewer emergencies, which actually speeds you up. It's a cadence and a predictability that's really important in these systems. It can feel slower because you're adding more steps and going through more process, but that reliable cadence, and these are very concrete experiences, means the teams that implement those processes dramatically improve their end velocity. They do things faster, they innovate more, they bring in new technologies faster, and they spend less time chasing things. It's remarkable, but there's a learning curve. You have to get over that hump first: put in the processes, put in the gates, train people on the gates. It does feel a little like one step backwards before you take two steps forward.

One more thing I hear as I listen to you folks is the importance of reusability of the code. Rob, we've done a dedicated session about that as well, so I'll ask about the importance of reusability, but I'll also ask this: companies like Pulumi have come up with the concept of a whole IaC library of reusable code. Can the same reusability ideas we apply to application code be brought into the IaC space? There are a couple of questions there, and you folks are having a great discussion, so let's talk about reusability as well.

This is really a core challenge, because every data center is a little bit different, even within a company's own systems. That innovation, or vendor drift and sprawl, does cause a lot of variation.
And again, we've spent a lot of time trying to solve this problem, because our ultimate goal is for everybody to be using the same, or 90% the same, automation code. One of my favorite analogies here is that to make this work, you have to recognize that data centers aren't layer cakes, where every layer of technology is cleanly stacked on the one below it. They're more like fruitcakes, where all the ingredients have been mixed together. And the challenge with reusability is that you actually have to be able to pull information from system to system and then share it. Doing that requires two things that are hard. One is that you have to be able to build an automation stream that has a standardized process with injection points, so that you can have variation in a controlled way. It's like a CI/CD pipeline; we call it an infrastructure pipeline. You have a standard sequence of operations and known points where you're allowed to inject customization. That's been really important in making this work. The other is acknowledging that you're going to have complexity in the software, so you need a way for people to add to existing automation to make it work in multiple scenarios. This is very counter to what a lot of people are used to. A lot of times people take a working script, strip out all the stuff they don't need, and end up with a copy of the script, and then we end up with hundreds of copies of scripts that each work in only one scenario. What we encourage customers to do, and help them do, and it takes some effort, is to step back and say, wait a second, this CentOS thing is very similar to this Debian thing; maybe I can have one piece of automation that supports both operating systems. That takes a degree of curation and work and collaboration. But the rewards are really high, because it means you don't have to keep writing the same things over and over, and there's less maintenance: the amount of code you have to maintain actually goes down over time if you make that investment. But like we were saying about going slower, it requires you to invest in the idea that I'm going to work in somebody else's code base because I want shared ownership. This is back to open source and its benefits: open source means shared ownership and collaboration, but it takes an investment to make happen. And one of the biggest challenges has been teaching people to make that investment.

Speaking to the benefits, that's absolutely right. One of the promises of cloud, aside from OpEx versus CapEx and not managing everything yourself, was portability: you could put it anywhere, you could just move it around, it would be great. Some of that materialized, but a lot of it didn't, largely because of the difference in environments that you mentioned, Rob. It's not going to be the same even across most cloud providers. But at the same time, you can't ignore the reality of this push towards distribution, whether it's for data localization, for performance, or just because companies, or their users, want the application close by for latency reasons. That distribution is coming. We see it, right?
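Rob's CentOS/Debian example is the crux of reuse: keep one shared workflow and isolate the per-distro variation behind a small, controlled lookup, instead of forking a copy of the script for each OS. A hedged sketch of that shape; the commands are printed rather than executed, and the distro-to-family mapping is illustrative:

```python
# One automation path for multiple OS families: the workflow is shared,
# and only the package-manager details vary, behind a single lookup.

PKG_TOOLING = {
    # family:  (cache refresh command, install command prefix)
    "debian": (["apt-get", "update"], ["apt-get", "install", "-y"]),
    "rhel":   (["dnf", "makecache"],  ["dnf", "install", "-y"]),
}

OS_FAMILY = {"debian": "debian", "ubuntu": "debian",
             "centos": "rhel", "rocky": "rhel"}

def install_packages(distro: str, packages: list[str]) -> list[list[str]]:
    """Return the commands a runner would execute for this distro."""
    family = OS_FAMILY.get(distro)
    if family is None:
        raise ValueError(f"unsupported distro: {distro}")
    refresh, install = PKG_TOOLING[family]
    return [refresh, install + packages]

if __name__ == "__main__":
    for distro in ("ubuntu", "centos"):
        for cmd in install_packages(distro, ["nginx", "chrony"]):
            print(f"{distro:8} -> {' '.join(cmd)}")
```

The injection-point idea Rob mentions is the same move one level up: the pipeline's sequence stays fixed, and variation is only allowed at named, controlled points like this lookup.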
South America is finally fed up with being served out of Miami and Atlanta and dealing with all that latency. There are a lot of parts of the world going, why do we have to go to US East? That's silly. So to do all that, if you're running these things, you really do have to, like Rob said, invest that extra effort to be able to move it. Because think about the benefit if you do. You go, okay, pick your favorite cloud provider: I don't like you anymore, I'm leaving, and you just move. Now granted, that's very hyperbolic; it's never going to be that easy. But if you have the majority of your stuff, the big parts of the workloads, built that way, all of a sudden it is easy, and you can move things around. And not wholesale, right? Remember what we said at the beginning: it's probably not going to be wholesale. You're probably going to move this one thing in this one region because you got a better rate or something. Having that investment Rob mentioned in portability, by having it as code and dealing with the "if Debian, if CentOS, if whatever," is worth it, because then you can just pick up and move. You're the master of your own destiny, to go back to a Seinfeldian quote. It's always hard to make trade-offs on what you work on, especially at the beginning, but especially as you grow, we've seen it really pay dividends.

Yesterday I was recording a show, and one of the topics was technical debt: one of the biggest items on the balance sheet. I want to ask what impact IaC has on technical debt. Does it reduce it or increase it?

It's interesting. In some other forums we've been discussing the potential for large language models and chatbots to create an explosion of technical debt, where people write a whole bunch of scripts they don't actually know how to maintain. So these concepts of technical debt are real. However, one of the things we reframed in that conversation is that a lot of the time, people say technical debt when they actually mean system maintenance burden. To be specific, technical debt means I built something knowing I would have to go back and fix it later. There is a lot of technical debt, but there's a lot more of people writing code thinking it's good and then not budgeting the maintenance to keep it up to date. In operations and systems, that budget is really important, because things break all the time: APIs change, somebody revs a system, there's a security bug. Automation code that just sits somewhere decays at an astoundingly high rate; you have to keep it current and keep using it. And that's not technical debt, it's system maintenance burden. With infrastructure as code, if you're testing it and rehearsing it and collaborating on it, you have the opportunity to find those issues faster, or in the ideal case, other people find them, share the fixes back with you, and the code improves. But this technical debt, this system maintenance cost, is very real, and in the automation and infrastructure space it has to be paid, or your systems will break.
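One inexpensive way to act on Rob's observation that idle automation decays: record when each piece of automation last passed a rehearsal run, and flag anything overdue, so the maintenance gets budgeted before an emergency forces it. A toy sketch; the names, dates, and thirty-day cadence are all invented for illustration:

```python
# Staleness check: treat automation as decayed unless it has been
# exercised recently. A CI job would update LAST_VERIFIED on each
# successful rehearsal run; everything here is illustrative.
from datetime import date, timedelta

MAX_AGE = timedelta(days=30)  # assumed rehearsal cadence, tune to taste

LAST_VERIFIED = {
    "provision-db":   date(2024, 1, 4),
    "rotate-certs":   date(2023, 9, 18),
    "rebuild-worker": date(2024, 1, 20),
}

def overdue(today: date) -> list[str]:
    return sorted(name for name, seen in LAST_VERIFIED.items()
                  if today - seen > MAX_AGE)

if __name__ == "__main__":
    for name in overdue(date(2024, 1, 25)):
        print(f"{name}: not rehearsed in over {MAX_AGE.days} days; "
              "budget maintenance before relying on it")
```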
So I do think infrastructure as code, especially as a way to collaborate around this, is a really big deal. Because if you have a team of people, or a team of companies, or a community helping you pay that system maintenance cost, that automation maintenance cost, it takes a huge burden off of you. They're going to find things and fix things and keep up with things, and you're only paying a small portion of the cost. That's a very big deal. And I would really encourage people who say, oh, we have so much technical debt in our code base, to frame it instead as system maintenance debt, because it's just like maintaining your house or your car: if you don't pay it, things break, and the repairs get much more expensive. I don't think people think enough about what they've built, from an infrastructure-as-code perspective, as having that maintenance requirement.

That's such a great distinction. Because technical debt, like you said, is: I know I'm going to have to fix this; I just put some tape on it for the moment because I need to get home. And too many times, the house analogy is perfect here, stuff is going to wear, stuff is going to break, and you need to work on it: bug fixes, whatever. Too many times it's, oh, I have all this technical debt. No, you have to operate it. You have to run your system. That is a big difference. And I'd argue one of the ways to reduce technical debt, without concentrating on any particular IaC technology, is reuse. Is it somebody's laptop where they run a shell script that copies some logs over and kicks a process? That is technical debt, or operational debt of a sort, or maybe both, along with bad design; that exists too. And IaC, with automation, can definitely help with that, if you do it right. Because just because it's IaC doesn't mean the tools fix it for you. You can make a big mess out of it. It gives you all the rope to hang yourself with, too, if you really want.

Yes. It's fascinating to me, and we see this in very real cases: a lot of times that expedient thing, oh, I just needed to rsync some logs and my laptop was handy, so I got it done, becomes the process. We see that a lot. The funny thing is that people do a really bad job estimating the cost of the do-it-right solution. The times when we sit back and do things right always feel like they're going to take forever, but it never takes as long as we think, and the maintainable solution pays its dividends back so much more quickly. Now I'm using financial analogies again, but I do think there's a do-it-right dividend, if you want the other side of technical debt. People just overestimate the cost of doing it right, over and over again. It's very frustrating to me how often we haven't done the right thing because we thought it would take too long, and then we went back, fixed the problem, and said, oh, that only took a week; we're done.

Yeah, you overestimate the difficulty of fixing it and underestimate the cost of not fixing it.
Now, as you're saying, there are so many ropes: ones you can climb and ones you can hang yourself with. One other thing is immutability, that which, by definition, you cannot change. Are there any caveats when we look at the immutability that comes with infrastructure as code?

Oh my goodness. That's a word that used to drive people crazy. I think a lot of operators couldn't even pronounce immutable, let alone understand the concept. It has come a long way, in part thanks to Docker and containers. There's a core concept here that I think is important to understand. Going back twenty years in ops, we used to argue about patching things, and it's very difficult to roll back. We always had this fantasy that we could make an error and roll it back. The reality is that in ops you only really move forward: you're always patching forward or rolling forward. And once you embrace the idea that you never move backwards in time, only forwards, then the things you use to move forward need to be consistent and repeatable and not changeable. That idea, that I can't change anything, I can only move forward, is the core idea behind immutability. We could talk all hour about different ways to execute immutability, and we find it incredibly powerful and much faster than patching and fixing. But it's this idea of always knowing where you're going, and I think that's something people should understand.

No, I agree. It's a somewhat different way of thinking, and it's not really surprising that when you change processes, you have to think a little differently. It is about going forward. Let's be honest: rolling back sometimes works. But for any of us who have been there at 2 a.m. at the end of the maintenance window, with the call of, all right guys, it didn't work, roll back... we all shudder when it gets to that point. We're not losing some big, awesome thing by giving up rollback; I've had worse luck than good luck with it. So good riddance; move forward, like Rob said.

One more thing I want to talk about: when we discuss these technologies, everybody looks at the next shiny object. Everybody wanted to get on the Docker bandwagon, then the Kubernetes bandwagon, and now everybody wants to get on the GenAI bandwagon. Looking at infrastructure as code: is it for everyone, or are there ideal use cases and use cases where you shouldn't touch it? Or should more or less everybody be doing some form of infrastructure as code?

I think there's definitely a tipping point with respect to size where it makes sense. For example, if I'm running a web server for my local high school, I wouldn't worry too much about instrumenting it heavily; likewise for my local animal shelter. However, the more complex it gets, the more you need this, for all the reasons we just talked about: supportability, traceability, the ability to look back and see what changed. And then also look at capacity: people who need a lot of machines, containers, whatever.
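A minimal sketch of the roll-forward idea Rob describes, before the conversation turns to who should adopt this: nothing is ever patched in place, every change is a new numbered deployment of an immutable artifact, and even a "rollback" is really a forward deployment of an older artifact. All names here are illustrative:

```python
# Roll-forward deployment sketch: the deployment history is append-only,
# and reverting means deploying an older immutable image as a new version.
from dataclasses import dataclass, field

@dataclass
class Service:
    history: list[str] = field(default_factory=list)  # append-only

    def deploy(self, image: str) -> str:
        """Every change is a new deployment; nothing is mutated in place."""
        self.history.append(image)
        return f"v{len(self.history)} -> {image}"

    def revert_to(self, version: int) -> str:
        """'Rollback' is really rolling *forward* to an older artifact."""
        return self.deploy(self.history[version - 1])

if __name__ == "__main__":
    svc = Service()
    print(svc.deploy("app:2024-01-10"))  # v1
    print(svc.deploy("app:2024-01-25"))  # v2, which turns out to be bad
    print(svc.revert_to(1))              # v3: forward deploy of v1's image
```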
So when you start looking at that: because, like you were saying earlier, Rob, manual means somebody clicking, whether in a CLI or a web UI, and that's not how production systems work, especially once you're in the dozens and hundreds of machines. The kind of folks we generally see are people with a big workload: a lot of different machines, probably spread out, where security and availability, the always-up kind of thing, really matter. So you get this picture of a lot of the FinServ folks and a lot of the SaaS customers in our world. When somebody signs up for a new account, all this provisioning has to happen, and it's going to be done in some automated way. Nobody gets an email saying, somebody just signed up for a new instance of SaaS X; quick, let me go stand up an EC2 instance or something. So those are the folks that generally have to use it. And then, increasingly, you see a lot of the gaming world, video gaming, not gambling gaming, though maybe them too, using infrastructure as code because they have very spiky workloads. A new patch comes out, and suddenly it's, oh crap, a new patch of The Witcher is out... well, that's not an online one; a new patch of some online game comes out, and they need new instances of matchmaking servers, because the second the new version is up, everybody is going to get on and try it. They need local matchmaking, local chat services, the STUN and TURN servers, all the plumbing that makes the online experience possible. Those are the folks we generally see using more of it. And it is growing; it's growing a lot. We have those conversations with our customers constantly: hey, how do we integrate, how would you suggest we do this, how do we do the CDN piece, how do we do the security piece?

We've definitely seen the expertise curve changing here. Without a doubt, our initial successes in infrastructure as code, and this surprises people because we do a lot of bare-metal work, were with brand-name cloud users: media companies, gaming companies, and banks. They're very sophisticated operators with very demanding needs, where the infrastructure-as-code capability lets them move much faster. When we help a bank, they call it repaving their data center: they rebuild their data center on a regular basis, partly because they have to as a legal requirement, and partly just to prove that they can do it. Once you get that down into the one-hour timeframe, it completely transforms how they think about running their data center. But I have a hope that with all this infrastructure as code, every day that goes by, those libraries, that shared code, that collaboration get baked more and more into the system. As all these advanced users work their logic deeper into the system, we make it easier and easier for everyone to use these complex systems. And so your high school, or your pet shelter, or your home, right?
I can see a path where infrastructure as code reaches the point where you can take standard infrastructure, plug it in, and it does the right thing, because we've built up this shared body of knowledge about how to run infrastructure and codified it as infrastructure as code, so it's portable and reusable and version-controlled. We're definitely on that path; it's hard to estimate how fast it comes, but we're on a path where the knowledge of all that advanced operations work gets into the code, and then it becomes accessible to people. Infrastructure as code is actually creating a pathway to make infrastructure more accessible, where more people can do this work because they don't have to know all the nuances of which server or which OS they're using; the automation works in all of those cases because of the community effort. That gets me really excited about what's coming, as these processes and techniques not only get adopted, but the knowledge gets cycled back into the communities.

Do you want to add anything to that, Pavel?

No, that's just a great point. Think about OpenSSL, like we talked about: those libraries didn't exist a while ago, and now we have elliptic-curve cryptography libraries everywhere, on our machines and phones and Raspberry Pis. So that would be amazing, right? What if the school could just click a button, and in code, the cloud provider gives them a web server or a community site? How awesome would that be, instead of getting swindled by some local web host or whatever. I think that's a great point.

Rob, Pavel, thank you so much for taking the time today, and thank you for a great discussion. I look forward to having you folks on the show again.

Thank you. Always a pleasure. Thank you both. Thank you.