Hi, this is your host, Swapnil Bhartiya, and welcome to another episode of TFiR Newsroom. Today we have with us once again Pavel Despot, Senior Product Marketing Manager at Akamai. Pavel, it's great to have you back on the show.

Always nice to be here.

Yeah, and it's exciting to talk today, because you folks announced new cloud computing sites and new capabilities. So I would request you to give us a quick recap of the announcement.

Absolutely. For anyone who recalls, this was our follow-up to our February announcement, where we talked about how we were going to build out towards the vision of having distributed data centers, with infrastructure and storage services that are easy to consume and connected to our platform for all the benefits you would associate with Akamai, right? The security, the reliability, the routing, and so forth, and low latency. So this is really our first expansion announcement. It consists of, at the moment, three new data centers that are already live and in use by customers, along with two more that we're going to be rolling out later this month. For those that don't remember, we had 11 to start with; we'll be up to 16 at the end of the month, which is a really great accomplishment for us, along with a couple of new features and instance types to help people get the right instance type for their workload.

Can you tell us where these three new ones are?

If you look at where we might place them, a number of factors played into it. Customer demand, obviously; remember, we have this massive platform where we see demand and traffic. Obviously, there are certain places that are going to be easier and quicker to build, just by virtue of where they are. But there are also some strategic drivers. Number one, we announced Dallas because, of course, it's a worldwide hub. And if you look, we already have existing scrubbing centers for our DDoS solution in that same area, right?
So we're no stranger there. We announced Paris. Paris has one of the highest data center densities in the entire world, right? And it's a massively important location for data sovereignty across Europe. Chicago is number five in data center density, and a great low-latency backup for the New York, Philly, and DC areas. Those three are live right now. The other two that are in beta and will be announced later are Chennai and Seattle. With Seattle, of course, we want to look after the developer community. And Chennai, again, is a huge market; look at what we've talked about with the most recent IPL. Those numbers keep growing, in transactions, not just bandwidth, right? We also announced previously that we're going to have scrubbing centers both there and in Mumbai. So if you look at it, those are obvious places where we'd place compute as well. And that's the first bit. That gets us up to 16, with a plan of 23 by the end of 2023.

What is driving this demand, which is also leading to this increasing footprint of Akamai?

It's inevitable, because if you look at the growth of the market in general, take whichever estimate you would like from whichever analyst, ultimately there are new workloads and applications all over the world, right? Not just in North America, but literally everywhere in the world: new applications that require new compute, whether it's games, new video services, new e-commerce services. All these things are going to need compute. Now, is any one hyperscaler ever going to provide you all the localization, all the different features, for all these needs? No. Are they going to disappear? Absolutely not. If you have a multi-exabyte data lake that you're doing AI on, you will find certain providers do that, right?
Like, you move your workloads to Amazon; look at where all the large language models are happening right now. There are certain workloads at scale that they're going to keep. But on the other hand, and we're seeing this right now, there are some workloads where that centralization, and some of the baggage of all those extra services, is unnecessary. If I have a workload, and I have the code in a VM or a container or something, and I need to execute it (and we see a lot of these), I need to execute it in very localized places, for low latency, for localization. Well, it's an and, not an or, right? There will be some of both. Yeah, you're going to have your large language models, for example, centralized in one place. But once you've trained the models, you can deploy them in a more distributed way, right? You don't want to go all the way back to Ashburn for every query on that model, right? And that's just one example. Databases are another one, and one we're seeing a lot of: hey, I have my data here, but if I can distribute it to my clients, that's great, right? Keep it centralized, of course, but distribute it, just like we did with content in the CDN.

What are some of the driving factors where developers and teams do need a distributed architecture for distributed applications? As you said, sometimes they want to be closer to where the users are, but what are the other factors?

I think the problem you have in general is this. Let's say you wanted to distribute your inventory database across a number of different locations and replicate it, for resiliency or performance, a variety of reasons why you'd want to do that. You can roll your own right now, but it becomes very difficult, because what do you have to do? You have to manage the replication. You have to manage the networking, the connectivity. And it becomes an N-squared problem, right?
One replica is fine, but when you have two or three or four, which you might want, maybe not even for durability but just for local performance, it becomes very difficult, right? Because what do you do? You create a VPC in each region. You tie them together somehow, either with a tunnel or a transit gateway. Then, to your point about developers, how do they access it, right? How do you do load balancing? You have to worry about replication time. So you have all these problems, plus all the operational problems. The idea behind some of the things we're working on, for example for an e-commerce platform and inventory, is that with all these locations, we can deploy through a couple of our partners. We work with a few folks, both Macrometa and HarperDB, solutions that do just that. You click, you deploy, and you say: I want these regions. They deploy in Akamai cloud regions, and the replication is done automatically, pulling from a central database, right? It just slots in the middle. And back to the developers: you just have an endpoint, right? It's an endpoint that happens to be replicated in all these different regions, however many you've chosen, but that's abstracted away. So the VPC, the connectivity, the "oh, you're using this subnet" thing you always run into when you're trying to do your routing, all that's gone away. It's much like the idea we started with in the CDN, right? In the CDN, nobody worries if a hundred Akamai servers go away, because there are however many thousand more. That's where we want to go with data as well. And you need these distributed locations with this connectivity, right? Just dump it on the internet, and that's where the Akamai platform comes in.
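The "N-squared problem" Pavel describes can be made concrete: a fully meshed, self-managed replication setup needs a peering link (tunnel or gateway attachment) plus replication config for every pair of regions, so the operational burden grows quadratically with region count. A minimal Python sketch of that pair count (the region names are just illustrative):

```python
from itertools import combinations

def mesh_links(regions):
    """Peering links needed for a fully meshed replication setup.

    Each pair of regions needs its own VPC peering / tunnel plus
    replication configuration, so N regions mean N*(N-1)/2 links
    to build, monitor, and keep in sync.
    """
    return list(combinations(sorted(regions), 2))

# Four regions already mean six separate links to operate yourself.
links = mesh_links(["dallas", "paris", "chicago", "chennai"])
print(len(links))  # 6
```

Going from four regions to eight quadruples the link count (6 to 28), which is exactly why a managed distribution layer that hides this mesh behind one endpoint is attractive.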
So that's why it's really exciting to have these locations married with the platform, with the connectivity, and then start using some of our technology partners to bring it all together to solve those kinds of database and data distribution problems.

Can you also talk about the importance of distributed infrastructure, not just from the point of view of distributed workloads, but also because of the changing geopolitical situation? A lot of countries want data to remain in the region. Of course, not everybody has something like GDPR or one of those larger frameworks, but they do want it. So talk a bit about the importance of that, and how, once again, having the data centers in, as you mentioned, Chennai, Mumbai, Paris helps to keep it regional.

That's a very interesting thing that we actually take into account. On those kinds of geopolitical situations, our CTO recently gave an internal talk about how we plan the network, and when there are these kinds of conflicts, we plan backup connectivity and all that kind of stuff: what would happen if we lost certain links. So it plays into the platform, but specifically into compute, it plays into some of the decisions if you look at where we mentioned we're building out. We have Europe. What's coming later is Stockholm, and Milan, right? This is all on our blog, but it's going to be a lot of Europe. We have India, and there's a lot of investigation into the Middle East. Whether mandated by something like GDPR and potential other EC initiatives, yes, Europe is a no-brainer: Paris, obviously, right? Same with Stockholm, and we're going to have Frankfurt, Amsterdam, the usual kinds of spots. But increasingly, even in places where there's no official mandate, the market is asking for it.
We see our customers, whether they are based in the region or have customers in that region, saying: you know, nobody's putting a proverbial gun to my head, but I do want this if I can get it, because it's where the local market, the local influence, is going. So those spots are going to be covered, and why we're really proud about those 23 is that it's a big extension, and that's another differentiator, right? We will have infrastructure there. And while GDPR and privacy are very complex, infrastructure alone doesn't solve it, but it starts with infrastructure, right? If you don't have a server or storage in the country, then you don't have a foundation on which to build all those privacy capabilities. So as you'll see, this year and next year the rollout plan is to be in those places.

Are you seeing new kinds of workloads, new kinds of applications? Because traditionally we have seen all those LAMP stacks, and you folks also started offering GPU instances. But now we are talking a lot about generative AI and LLMs. So are you seeing any new kinds of workloads where you feel, hey, it's not just about reaching new locations, but also about attracting those workloads, catching those developers who want to run them on the Akamai cloud?

We've seen newer things, and probably the newest is that data distribution piece; that's the most interesting. The best analogy I've heard is taking the CDN model, where you took static content and pushed it out to a number of different locations for a variety of reasons, and doing the same thing with data, adding some compute there. So not just moving a database like a key-value store, which we already have; that's important, but that's not all of it.
If you can move the data out and slot in this distribution layer, kind of like the CDN did almost transparently to browsers, that becomes really interesting, and that's, I think, the biggest difference. Now, there are a bunch of challenges there, which we can certainly chat about, but once you do that, your big data lake becomes a lot more usably and securely accessible. Because, remember, we haven't even talked about security, but you just enable API protection in front of this and off you go, right? It takes care of your REST API as well. But specifically for the data distribution: you have some Dynamo or pick-your-poison database, wherever you have it; slot this layer in front of it and you have your distribution, with that edge portion. You have your distribution layer, and then the edge portion takes care of the usual security, just like it has forever, with your API and CDN. So that's, I think, the biggest change architecturally, right? Because now, in all the diagrams, when you go to all our reference architectures, there's this other layer that we have to try to explain. Once we sit down and have this kind of conversation, it starts making a bit more sense, but the point is simply: you have edge, you have centralized cloud for certain things, and now you're starting to have access to this more distributed, easier-to-configure middle tier, not to overuse that terminology. It becomes interesting, right? And then you have to talk about, yes, these are the kinds of things you can do, and that's where we're working with some partners and, knock on wood, some announcements. But that's, I think, the biggest new use case, and you have to get people's heads around it.
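The "slot this layer in front of it" idea can be sketched as a thin router that hides per-region replicas behind a single endpoint, so the application never sees placement or replication details. Everything here, the `NearestReplicaRouter` class, region names, and latency numbers, is a hypothetical illustration, not an Akamai or partner API:

```python
class NearestReplicaRouter:
    """Hypothetical distribution layer: one endpoint, many replicas.

    Clients call read() on a single endpoint; the router picks the
    lowest-latency replica, keeping replication and placement
    abstracted away from the application, CDN-style.
    """

    def __init__(self, replica_latency_ms):
        # e.g. {"paris": 12, "chicago": 95} -- measured latency from this client
        self.replica_latency_ms = replica_latency_ms

    def nearest(self):
        # Pick the region with the lowest latency for this client.
        return min(self.replica_latency_ms, key=self.replica_latency_ms.get)

    def read(self, replica_stores, key):
        # replica_stores: {region: {key: value}}, a stand-in for the
        # real replicated key-value stores behind the layer.
        return replica_stores[self.nearest()].get(key)

router = NearestReplicaRouter({"paris": 12, "chicago": 95, "chennai": 140})
stores = {region: {"sku-42": "in stock"} for region in ("paris", "chicago", "chennai")}
print(router.read(stores, "sku-42"))  # "in stock", served from the Paris replica
```

A real layer would also handle writes, replication lag, and failover, but the point of the sketch is the interface: the client holds one endpoint, and region selection happens behind it.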
Like, no, you still just write to your database; you just tell it where you want your data distributed in this layer and what security controls you want, and you do that programmatically. But don't worry about it, because the rest of it scales and routes automatically. But yeah, that's another story.

How are you seeing the evolution of Linode within Akamai, in terms of the kinds of workloads you're targeting? The cloud is the new reality, but people are doing different kinds of things, and they all need access to cloud, especially when you talk about distributed. Is it mostly individual developers and SMEs, or are you also looking at large-scale, mission-critical, more challenging workloads as well?

The way to think about it is that the combination of Akamai and Linode got us the ability, on the size axis, to go to much larger customers, right? That is not to say anything is going to change for individual developers. If you look at the launch, the Linode docs, everything was done the same, everything's in Cloud Manager; there are a couple of new instance types in there, but it also allows us to serve folks that need larger capacity, more IOPS, more bandwidth, more locations. Individual developers are welcome to, and hopefully will, take advantage of much of the same thing, like the reduced egress costs. But the capacity increases, yeah, those folks probably won't notice, because while they can now buy a lot more, and they'll notice the speed, they won't notice how many you can buy, right? They weren't in that market to begin with. So, to answer your question, the combination allows us to serve both markets now, whereas before it was harder for Linode alone to serve the higher-end market.
In terms of which markets, because again we're not trying to be everything to everyone: where you see us focused, if you look at our customer stories and the ones I've shared today, is media. We have a lot of great partners across the media spectrum, and that really includes gaming as well; think of the large platforms. SaaS providers are another: if you look at our high-tech customer base, we've been delivering their software for ages, and we've been protecting the SaaS versions of their applications. We're a great partner there. So those are the kinds of workloads, right? And those are also the kinds that have privacy requirements; SaaS providers need to put things in different locations. They need performance, same with gaming and OTT, right? I'm not going to name a service, but when your streaming service starts stuttering, you start wondering why you're paying however many dollars or pounds for it. So those, and e-commerce, are the ones immediately right up our alley, and where we're seeing the most uptake, across a variety of customer sizes, for this idea of distributed compute.

Let's talk about cloud itself. Are you seeing that the cloud itself is evolving?

First and foremost, I used to work in mobile, and I've seen a bunch of different technology waves come up. You have to realize we still have AM radio. Technologies stick around; IBM still makes mainframes, right? There are mainframes in the cloud. So the reality of all that long-tail stuff is that we will live in a diverse environment. I think we said it a little earlier: it will not be just one or two or three. Now, where this is clearly going is more distributed, right?
Not as a pendulum swing of mainframe, PC, cloud, distributed; it's really not surprising, because we've been trying to push compute out farther forever, right? We've been somewhat successful on phones, and with what we traditionally call the edge we've gotten some of it out there, right? But the state of the art, Moore's law, connectivity: all those things have prevented us from going out that far yet, but it's getting there. I mean, at first content was at the edge, and then we put WAFs and security policies there, right? And bot detection and all this AI-type analysis. And then we added some functions at the edge. It's clear that for some things, it's better to move out. And "out" in this case, yes, means edge, but it will also mean things like hybrid clouds, right? I know edge also has that connotation, but what we see is that a lot of the public internet, a lot of those applications, are going to take advantage of this distributed portion, because especially as scale grows, you're not going to be able to do it as cost-effectively centralized. I'll point to any large, or even medium-sized, traffic event, or even a medium-sized security event: try to absorb that centralized and the economics don't work. Same thing with compute demands. Demand is growing, and it's going to push distribution out, because that's one of the ways we solve this architecturally.

Earlier you were talking about data as well. I had a discussion with Justin from Akamai, and we were talking about small data and wide data, and it was not versus big data. When we look at those kinds of data, how is Akamai serving that market?

I think if you look at it, that's where the ultimate idea of these distributed sites comes in, and even where Ondat comes in. With that acquisition, we get containerized data storage.
Because again, if you have these sites, one of the challenges we talked about is: how do I connect them? Another is: how do I manage all the data across the sites? How do I sync it? How do I store it at rest? Which, again, is the boring infrastructure layer, like we said with data sovereignty, but it's still a necessary piece. Where Akamai specifically sees itself is that infrastructure, that network connectivity, that availability layer, on which you can build support for those kinds of use cases. AI and large language models: there's a lot of weight there, but by and large it all starts with that infrastructure and connectivity, and abstracting that away so you can get to the workloads instead of the plumbing.

Earlier you were also talking about making things easier for developers. The workloads keep growing, everybody's moving to cloud, and developers get overwhelmed with a lot of cloud complexity, Kubernetes. This cloud complexity is not going to go away; we have to make it easier, to help people deal with it. How is Akamai helping organizations, or their developers, focus on what actually adds real value to their business, versus worrying about all this plumbing? I mean, it's not unnecessary, but they should be able to focus on their business. How do you make it easier for them, so they can continue to move faster and stay secure, but don't waste a lot of their resources on things that do not add value to their businesses?

That one's really interesting, because Akamai comes more from a networking, almost telco, background, versus Linode from cloud, and we had different words for the same thing, right? Linode was working in the CNCF world; Akamai's world was the IETF and those kinds of standards.
So when we came together, I think that was a really good complement, because in practice, what that means, especially for developers: when we talk about our global load balancer, which will be out in beta soon, it will work by you putting in an IP or an FQDN. You don't need some weird target group; you don't need all these abstractions about this zone and that zone. And the point is that now, all of a sudden, you don't have to worry about learning our nomenclature, because we don't have any. It's a VPC; give it a hostname. If it's outside your VPC, we will route it. If it's inside your VPC, wherever you have your VPC defined, we will route it. That's the kind of thing we mean by simple: look at what it takes to set up a NodeBalancer right now versus some of the other types of solutions. The way that impacts developers is: if I have to go to this other environment, it's all right, okay, what's a target group, and what's this? And yes, developers get it; they read documentation, and there is documentation. But if it's just standard and open and I already know it, I can do it in my data center, I can do it in Linode, I can do it wherever, unless you go into some of the other very custom things, right? So I think that's one of the things we stand by, and you see it from both the Akamai side and the Linode side: open standards, open standards. We just call things by different names based on where we came from.

Pavel, thank you so much for taking time out today, and of course for talking about this announcement and the growing footprint of Akamai Linode. Thank you for all those insights, and I would love to chat with you again soon. Thank you.

Absolutely, appreciate it. Thanks.