Hi, this is your host Swapnil Bhartiya, and we are here at KubeCon in Chicago. Today we have two guests from Akamai: Billy Thompson, Manager of Solutions Engineering, and Stephen Rust, Lead Principal Software Engineer. Stephen, Billy, it's great to have you both on the show.

Happy to be here again. Thanks for having us.

First of all, what I want to hear from both of you is: what has the experience here at KubeCon been like so far?

There's been a really big crowd here this year and a lot of interesting sessions going on. I find the learning and the technology really reinvigorating for what we're doing back at the company. Seeing a lot of outside perspectives in the community, and talking to folks around the conference and the booths who are doing similar things to us, has been really good so far.

Personally, I'm getting very interested in the concept of pushing cloud native out to edge native, because those have some overlapping principles but are still different buckets at the moment. So I appreciate that they're both very well represented at this conference. It's giving me a lot of input, feedback, and ideas for where I'm trying to go with that.

What contrast did you see at this KubeCon versus the previous one?

Maybe not a contrast, but platform engineering and internal developer platforms have certainly been a common theme over multiple years. There's also a big security focus this year, and observability is big. And one thing is about the edge: in this community, edge actually means many different things. For Akamai, edge means compute located closer to your users. In cloud native, edge can often mean IoT devices, a more direct edge device, so it's typically a bit different from what the Akamai edge is.

You mentioned edge, and I want to talk about that quickly. When you look at edge and Kubernetes, there are a lot of lightweight Kubernetes distributions meant for the edge. I don't want to name any vendor, but SUSE and Rancher have theirs, K3s and all those things. From LKE's perspective, or Akamai's perspective, what do we have for that market?

I think Kubernetes at the edge is a super important part of our market. LKE is typically in what we call the core sites, the core data centers, and expanding that presence into the edge, moving toward a Kubernetes offering there, is something we're working on. And your deployment model, your application model, as long as you're writing to Kubernetes, is the same across the core and the distributed sites.
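To make that portability point concrete, here's a minimal sketch using the Python kubernetes client: the same Deployment object is applied, unchanged, to a core-region cluster and an edge cluster. The kubeconfig context names and the image are assumptions for illustration, not actual LKE naming.

```python
# A minimal sketch of "write to Kubernetes once, run it on core and edge
# sites alike". Context names and image are hypothetical.
from kubernetes import client, config

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="hello", image="nginx:1.25")]
            ),
        ),
    ),
)

# One core-region cluster, one edge cluster -- addressed only by kubeconfig
# context. The Deployment spec itself never changes between them.
for context in ("core-us-east", "edge-fra"):
    api = client.AppsV1Api(config.new_client_from_config(context=context))
    api.create_namespaced_deployment(namespace="default", body=deployment)
    print(f"applied hello-app to {context}")
```

The portability claim is exactly that the spec above carries no knowledge of which site it lands on; only the kubeconfig context differs.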
Last time, if you remember, we talked about the portability aspect. So when we look at the edge, and when we look at something as big and massive as Akamai, you folks have data centers all over the world, I think even in Antarctica or in the middle of the ocean. Talk a bit about the importance of portability in the context of Kubernetes, irrespective of where it's running.

This is a follow-up on edge, right, where I said I want to push cloud native further into edge native. The concept is still the same: you're developing an application with that abstraction from the infrastructure resources. So you still need that portability, you still need to use open source, and you still need to use interoperable, standardized tools and so on. But we're talking about a different kind of paradigm, where you can have edge-enabled cloud native applications that can take advantage of edge hosts that have resource constraints, that are ephemeral where necessary, and that can run on Akamai's edge or on any edge, right? So the concept is still the same with portability; we're just looking at a design pattern for how applications can take better advantage of the edge capabilities that are emerging.

Whether we look at the edge or at big data centers, Akamai has its roots in a lot of areas that the cloud native community is now trying to solve. Security can be one, and the CDN was more or less about bringing things closer to where users are. So, setting the technologies aside, when we look at Linode, which Akamai acquired, what are some of the pain points you see when you meet folks here who are leveraging Kubernetes, pain points that Akamai is solving to make things less painful for them?

Low latency, real-time applications, right? You have event-driven applications, you have streaming. A big one that we've been cracking open is data persistence, distributed data layers. Where you have distributed databases, you need eventual consistency really, really fast across all these various locations. That's the problem we've been solving, and we're continuing to explore it and get better and better at solving it. We're building up partner ecosystems, putting together different combinations of technology stacks and cloud native tooling, and demonstrating really low-latency eventual consistency from all of our locations, including literally doubling our data center footprint in just a few months to add to that experience. It's a big problem, and it's a problem that the central cloud providers have not been great at solving. That's where we fit in; that's where we shine.
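As a toy illustration of that eventual-consistency idea, here's a hedged sketch of the general last-write-wins pattern; this is not Akamai's data layer or any particular database, just the shape of the problem: each site accepts writes locally with no coordination, and replicas converge when they exchange state.

```python
# Toy last-write-wins replication: each site writes locally, replicas
# converge on the highest-timestamped value per key during anti-entropy.
import time

class Replica:
    def __init__(self, site: str):
        self.site = site
        self.store = {}  # key -> (timestamp_ns, value)

    def write(self, key, value):
        # Accept the write at the local site immediately, no coordination.
        self.store[key] = (time.time_ns(), value)

    def merge(self, other: "Replica"):
        # Anti-entropy pass: adopt any entry the peer saw more recently.
        for key, (ts, value) in other.store.items():
            if key not in self.store or self.store[key][0] < ts:
                self.store[key] = (ts, value)

fra, sin = Replica("fra"), Replica("sin")
fra.write("cart:42", ["book"])   # write lands at the nearest site, fast
sin.merge(fra)                   # replication later converges the replicas
print(sin.store["cart:42"][1])   # ['book']
```

The engineering problem described above is driving the window between write() and merge() down to near zero across dozens of real locations.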
What edge does Akamai have over the other players? Let's just focus on developers; we talk a lot about developer experience nowadays, where you're helping teams solve the whole problem of deploying and running.

I think we have a very unique position there, because one thing Linode always focused on was that developer experience, the self-service, and the simplicity: lowering the barrier to entry to go from zero to production with fewer screens to go through. It's kind of a running inside joke that it's so easy you can almost accidentally deploy a VM. Now bring something like Akamai into the equation, with the largest network in the world and a lot of enterprise customers that need a certain level of scale. So where we sit in the market is that we're taking the simplicity of Linode, keeping that simplicity model and that developer experience, but matching the scale that enterprises need, because that pain point is a pain point whether you're SMB, mid-market, or enterprise. It's even one of the things that has really been stunting multi-cloud adoption: if it's this much of a hurdle to do X, Y, and Z on one hyperscaler, who can imagine doing that times two? That's a terrible experience. But when we can give them an easier, more streamlined experience to get started with, it becomes easier to rethink how you're going to deploy, how you're going to leverage portability, and to work your way out of "I just learned the bells and whistles of this one platform, and it took us five years to get proficient at it." We're cutting all of that out of the picture and still giving that speed, that performance, the low latency, the distribution, and so on.

If I could just add to that: we've actually talked to AWS engineers and developers who use Linode for their POCs, their testing, and their personal development because of that simplicity and ease-of-use focus, whereas it would be much more complicated using AWS. So it starts with the developers, and then you can imagine expanding into offerings that are geared more toward the enterprise. You start with the developer, grow into larger customers, and then address those use cases on the same platform.

In the context of Kubernetes, how important is developer experience, and what is Akamai doing to enable it?

I think what's important about the developer experience is how easy it is to use. Take what I'll call the Kubernetes boom. Right now we're in this big ChatGPT AI boom, right? The Kubernetes boom was a few years back, and because it was the hot topic, what the cool kids were talking about, you had a lot of engineers whose leadership just pushed it down on them, and they didn't have a great experience with that. Combine that with the complexity of a large hyperscaler platform and all of the hoops you have to jump through just to get something set up, and then you're trying to learn something that is itself constantly maturing and evolving at a really rapid pace. So this goes back again to the simplicity model. When we can keep that there, keep it easy to use, and keep it in that same flavor, there's a reason why, for example, Nigel Poulton, who's a very renowned teacher and instructor for folks new to Kubernetes, would recommend our Kubernetes product over and over again: it's the easiest for them to get up and running with. Getting started is a hurdle for a lot of people, and that hasn't changed much. It has in some regards, in that Kubernetes has matured to the extent that it more and more just works, but again, now everything's going to the edge, and now we're pulling in a bunch of AI. So with that rapid evolution, our position is keeping the simplicity, but it's also our responsibility as a provider to give prescriptive guidance on how to do it. It's not just "we put this out here and you figure out how to use it." We put it out here, we talk with you, and we have that human element, which is another thing that sets us apart from many of our competitors. We're going to be part of your journey here; consider us a member of your development team. We'll get you where you're trying to go with Kubernetes and keep that consistent feedback loop so we can deliver a very seamless developer experience.

And just from a technology perspective: you talk about developer experience on Kubernetes, and that's historically not a great experience, right? Back in the days of Docker, that was an amazing experience for everybody. So how do you build services and tools around Kubernetes to improve that user experience, that developer experience? We can come up with new managed services, things like serverless as an example, to abstract Kubernetes in a sense. Maybe it still runs on Kubernetes in the backend, but as a developer, do you actually care that your application is orchestrated in a particular way? You really just want your application to run. You maybe don't want to run your own clusters. So can we provide managed services, like a serverless or container-as-a-service offering, and provide great tooling, a CLI experience, a user experience around that to improve things? I think that's an area of focus we're currently working on and looking to improve for the community.
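As a sketch of what that sort of abstraction could look like, and it's only a sketch, the deploy() helper below is hypothetical, not an actual Akamai or Linode API: the developer hands over an image and a port, and the Kubernetes objects are created behind the scenes.

```python
# Hypothetical container-as-a-service style helper: the caller never touches
# Deployment or Service objects directly.
from kubernetes import client, config

def deploy(name: str, image: str, port: int) -> None:
    """Run a container and expose it, hiding the Kubernetes plumbing."""
    config.load_kube_config()
    labels = {"app": name}
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default",
        body=client.V1Deployment(
            metadata=client.V1ObjectMeta(name=name),
            spec=client.V1DeploymentSpec(
                replicas=1,
                selector=client.V1LabelSelector(match_labels=labels),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels=labels),
                    spec=client.V1PodSpec(containers=[
                        client.V1Container(
                            name=name, image=image,
                            ports=[client.V1ContainerPort(container_port=port)],
                        )
                    ]),
                ),
            ),
        ),
    )
    client.CoreV1Api().create_namespaced_service(
        namespace="default",
        body=client.V1Service(
            metadata=client.V1ObjectMeta(name=name),
            spec=client.V1ServiceSpec(
                selector=labels,
                ports=[client.V1ServicePort(port=80, target_port=port)],
            ),
        ),
    )

deploy("hello-api", "ghcr.io/example/hello:latest", 8080)  # image is made up
```

Whether the backend is Kubernetes, as here, or something else entirely is exactly the detail such an offering would hide from the developer.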
Since we're talking about experience, I also want to ask: how different is the cloud native experience from the platform native experience?

That goes exactly to what I was just saying. With cloud native, typically you're interacting directly with Kubernetes; you're talking to all the components you're building together. A platform is a curated, developed, prescribed workflow, perhaps, something a provider can offer and build for you, so that you're not going directly to Kubernetes but instead have that workflow all documented and prescribed: guidelines and helpers to get you along the way through your application life cycle.

And I want to bring this back to the point I just made about prescriptive guidance. Something that's platform native is built to be used with all the platform-specific tools, Kubernetes being one piece of that but still wrapped around other services, and those providers are going to give guidance on how to do it that way. Versus really looking at it from a cloud-agnostic approach: if you care about the mission of portability, the success that comes from it, and the why of it, then it's our job to again provide that prescriptive guidance, here's cloud native in a way that's cloud agnostic.

When we look at security, of course, we have been talking about the whole DevSecOps movement. Depending on who you talk to, folks say, hey, security should be everybody's problem. But when something becomes everybody's problem, it's actually nobody's problem, because nobody is doing anything about it. In the cloud native world, where does the buck stop when it comes to security: with the cloud providers, and of course you have a background in security, or with the users? Talk about the security aspects of cloud native.

I think security is a shared responsibility between the users and the cloud providers. Certainly, as much as is in the provider's control should be secure and must be secure, right? But when it comes to a user's or a company's application, they have a responsibility as well to build their software and provide upgrades and security patches. The cloud provider can offer integrations to help, network policies and firewalls and mutual TLS helpers, anything we can do. But at the end of the day, it is a shared responsibility for everyone.
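As one small, concrete example of the user's side of that shared responsibility, here is a hedged sketch of the network-policy piece mentioned above, again via the Python kubernetes client; the namespace and label names are invented: deny all ingress by default, then explicitly re-open one path.

```python
# Micro-segmentation sketch: default-deny ingress in a namespace, then a
# single explicitly allowed caller. Names are illustrative only.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

# 1) Deny all ingress to every pod in "payments" (empty selector = all pods).
net.create_namespaced_network_policy(
    namespace="payments",
    body=client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-ingress"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),
            policy_types=["Ingress"],
        ),
    ),
)

# 2) Allow exactly one path: the checkout pods may reach the payments API.
net.create_namespaced_network_policy(
    namespace="payments",
    body=client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="allow-checkout"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "payments-api"}),
            ingress=[client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "checkout"})
                )]
            )],
        ),
    ),
)
```

The provider's half of the bargain is enforcing policies like these in the network layer; writing and maintaining them is the user's half.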
There are two cures for a bad security posture: one is a really embarrassing security incident, and the other is really good security awareness training. That doesn't change whether it's cloud native or whether we're still doing something on prem; I think security will be one of those recurring conversations over and over again. What I do like about the direction of everything going more distributed, when we talk about zero trust models and micro-segmentation, is that the more distributed you get, outside of a central environment, it almost opens the door and holds your hand as you walk further into a better posture in terms of micro-segmentation. You're reducing the attack surface at all these different endpoints, right? And the more ephemeral things are, the less data persists; you're separating the data that does need to get sent home from the data that can just stay there. So it almost invites a better security posture, even if that's not intentionally where you're trying to go.

When we talk about distributed, are we talking about distributed workloads, distributed environments, or distributed architectures? How different is distributed from decentralized? And what does distributed mean for data sovereignty, given that different countries and different states have regulations and don't want data to move? You can look at it from a holistic perspective, or you can take a myopic view. How do you folks look at it?

I think from the Akamai perspective, distributed is all about locations and having your application be as close to your user as possible. Of course, once you're there, where is your data located? If your data is far away, you have transfer costs and recovery problems. At the same time, your distributed locations are often resource-limited, and perhaps limited in data storage capacity. So, as Billy was saying, you have to really micro-segment your data as well, so that only the small portion you need in that location is there. And then think about things like recovery: if your node goes down, can you just move the workload to a completely different distributed location rather than doing an in-place recovery?

And I'm looking at it from the perspective of a geo-distributed application, because with "distributed" you can have a distributed architecture that's just a microservice architecture still living in one region, or that same service mesh replicated in a different region. But now take something like a microservice architecture: different services that do one job and do it well, or broken down further into modular functions that do one job and do it well, that can run in different locations as they're needed, completely abstracted from where they run. And going to the data sovereignty point, that makes it easier to fence data off and ensure sovereignty where it needs to apply. So I'm looking at it from a different point of view: usually when I hear distributed, I think of just a distributed application, which is why we have distributed tracing and things like Jaeger, but here I'm looking at it more from the geographically distributed edge perspective.
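Since Jaeger came up, here's a minimal, hedged sketch of wiring a service for distributed tracing with OpenTelemetry's Python SDK, exporting over OTLP to a Jaeger collector; the service name, site attribute, and endpoint are assumptions for illustration.

```python
# Minimal OpenTelemetry tracing setup; endpoint and names are assumptions.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Tag every span with the service and the site that served the request, so
# traces from many geographic locations can be told apart in one Jaeger UI.
provider = TracerProvider(
    resource=Resource.create({"service.name": "edge-api", "site": "fra"})
)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="jaeger-collector:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("handle-request") as span:
    span.set_attribute("request.path", "/checkout")
    # ... real request handling would go here ...
```

In the geo-distributed case the interesting part is this same setup running at every site, with the site attribute distinguishing where each span was produced.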
Since we're talking about distributed, edge, and cloud: if you look at the modern world, we kind of live in a data-driven world, and data is everywhere. I'm talking about gaming; I'm a heavy gamer, and almost every game is like 200 GB now. A lot of things are streamed. Next year, Apple will come out with the Vision Pro, and VR is big, which also means that data has to be closer to the user. And once again, you folks have an edge there, pun intended, in both senses. How do you look at some of these new emerging use cases, and at Akamai and Linode's global presence to cater to these new workloads?

That's already one of the bigger use cases we have with the world's largest distributed network, but today a lot of that data still has to call home; it still has to go to an origin. So what we're building out on our existing infrastructure, which we're well suited to do, is the ability to distribute that origin. That means moving the data, and the processing of it, actually closer to the end users. That is exactly where we're going, and because the infrastructure is in place, we have that edge going for us already. No pun intended.

One big reason Akamai is going into compute is that we had the CDN, we had the data there. Let's move the compute out there as well, to the distributed sites, to the edge, so that we can process that data locally, closer to where it is. So absolutely, addressing that is a huge interest of Akamai's.

What are the next exciting things we can expect from Akamai?

You can expect even more data centers; like I said, we literally doubled our footprint in just a few months. You can expect more improvements to the existing products, more storage capacity, improvements to our Kubernetes engine, and more cloud infrastructure primitives added to our suite.

And while we can't really compete on a service-by-service basis with somebody like AWS, which has hundreds of services, what you can expect is for us to build new services that leverage that distributed nature and the content distribution we have. Those are in development, as is targeting the larger enterprise. So a lot of interesting things are coming up.

And I'm looking forward to talking to you folks about those technologies. Billy, Stephen, thank you so much for taking the time today. This was a great discussion; I loved it.

Thank you. Thanks again. Thank you very much.