Hi, this is Swapnil Bhartiya, and welcome to another episode of TFiR Newsroom. Today we have with us once again Pavel Despot, Senior Product Marketing Manager at Akamai, and we are going to talk about, you know, a big storm in a teacup: the blog Amazon Prime Video released about how moving to a monolith saved them a lot of cost versus using serverless. But there is a lot to read here. First of all, we should all appreciate that Amazon actually released that blog. As you said before this discussion, these are the discussions we need. People will always pick and choose, but the most important thing is to be transparent and to talk about technology in the terms of technology. So, when you saw this story, what was your analysis? What is going on there?

Yeah, I agree very much. This was a bigger topic in security a few years ago, where the issue was: hey, we should share breaches, we should share this information. It can certainly be sensitive, but I definitely applaud anyone who shares this kind of evolution, because there are lessons to be learned. Aside from the memes that came along with it, there are things here for all of us to learn. We've all been in that position where, as they described in the beginning, you started with a service and then all of a sudden you're asked to, oh, do it at some number of times greater scale.
So there are lessons in here. My first reaction was that it was a fairly typical situation: they had, and I'm guessing here, I don't know, this nice internal tool that was really helpful, and somebody said, hey, that's a great tool, we should do that for everything, and then you have this project on your hands. Then you're left with this big architectural decision. Again, I'm speculating, but I think we have all seen that kind of scenario play out.

And actually, when I look at it, I feel this is the actual scientific method. You try things, you implement them, and if they don't work, you don't just stick with them for the sake of "that's how we're doing it." You choose the right tool and the right approach for the right use case. And this use case is also not a generic one. It's specific to Amazon Prime Video's delivery: how to send content, how to optimize it, who is doing what. So it's a bit different from a lot of the use cases where microservices and serverless make a lot of sense. A lot of things Amazon does still use them, and they'll continue to. Once again, it comes down to the right approach and the right tool for the right job.

So, talking a bit more about this: once again, this is a specific use case where they saw this value in a monolith versus serverless.

To your point, media in particular is a bit of an edge case for a lot of things, just given the volume, the distribution, and the real-time nature of it. Right, we don't see a lot of it.
But when we're watching a live sports event, there is a lot, speaking from a company that dabbles in CDNs a lot, that has to happen for the bits from that camera to make it to our phones in, you know, five to ten seconds. That's usually where a lot of the hand-waving around latency that we see comes in. So this is a very specialized use case.

Now, if we take a look at this particular example and see what lessons we can draw: media is special. One of the things that makes it specialized is the volume. Another is how quickly you need to react, especially if it's a real-time stream. In their case it was an analysis workflow, so it's something you don't really want to delay. You really don't want to batch it out and have it done hours later. So if we look at those aspects of a media workflow and then ask in hindsight, hey, was this the right choice, that's where you see, again in hindsight, why some of those challenges happened.

First of all, they already talked in the article about how they were doing a few slick things to minimize the amount of data. So before they even went at the problem of analysis, they were really trying to minimize their input data set to fit it within the constructs of, in this case, Step Functions and serverless. And then, as that scaled, of course, the data just became bigger and bigger.
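To make the "minimize the input data set" idea concrete, here is a minimal sketch of one common trick: analyzing only a sample of frames rather than every frame of a stream. This is purely illustrative; the function name and the sampling rate are assumptions, not a description of Prime Video's actual pipeline.

```python
def sample_frames(frame_count, every_nth=30):
    """Yield the indices of frames selected for analysis.

    Skipping the frames in between shrinks the input data set
    before any analysis work runs at all.
    """
    return range(0, frame_count, every_nth)

# A one-hour stream at 30 fps has 108,000 frames; sampling one
# frame per second cuts the analysis input by a factor of 30.
frames = list(sample_frames(frame_count=108_000, every_nth=30))
print(len(frames))  # 3600 frames instead of 108,000
```

The trade-off, of course, is that a sampled analysis can miss short-lived defects, which is part of why this kind of pre-filtering only goes so far as the workload scales.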
So what stood out to me is: yeah, the start of it was perfect, but as you stream out to thousands of streams, which these days are largely high-definition streams in the area of three, four, five-plus megabits, you really have a big compute problem. And like they said, you saw it bumping up against the limits, and they started doing interesting things with transfers to S3 to stay within those limits. But I guess the lesson to take away is: if all of a sudden you keep breaking the torque wrench you're working with, maybe you need a bigger torque wrench, or some other kind of tool entirely.

Talking a bit more about that: once again, a bigger wrench, or a totally different tool?

The bigger wrench piece is kind of what they did, right, if you extend the analogy to the monolith, instead of parsing it out. And just like with any architectural pattern, there are places where it applies, and you can turn the dial up or down on distribution and decomposition as much as you like. The idea for cloud: yeah, of course, the cloud is a perfect spot, because what we generally call cloud regions and availability zones have a ton of resources, right? You can have specialized GPUs, which in this use case are potentially more cost-efficient. There's a lot of computing power and a lot of compute specialization that, again, makes perfect sense for this use case, and that's where the cloud fits.

The other side is where serverless should be used, and again, it wasn't applicable in this part of the use case. But serverless, if you think about it, is stateless and relatively small, right? Well, not entirely stateless, there are distributed KVs, but you're not building an AI model in serverless, or you probably shouldn't be, or else maybe you'll see something like this.
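The scale problem described here is easy to see with back-of-the-envelope math: at the bitrates mentioned, roughly 3 to 5 Mbps per high-definition stream, thousands of concurrent streams add up quickly. A rough sketch, where the stream count is an illustrative assumption rather than a figure from the article:

```python
MBPS_PER_STREAM = 5   # upper end of the 3-5 Mbps range mentioned above
STREAMS = 1_000       # "thousands of streams" -- illustrative number

total_mbps = MBPS_PER_STREAM * STREAMS       # aggregate bitrate
gb_per_hour = total_mbps / 8 / 1_000 * 3600  # Mbps -> GB/s -> GB/hour

print(f"{total_mbps:,} Mbps aggregate")            # 5,000 Mbps
print(f"{gb_per_hour:,.0f} GB of video per hour")  # 2,250 GB of video per hour
```

Moving multiple terabytes of intermediate data per hour through per-invocation payload limits is exactly the kind of pressure that leads to the S3 workarounds, and eventually the rearchitecture, that the article describes.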
What it's good for is low latency, quick response, being distributed, not decomposed but distributed, so you can put it, meaning the compute workload, in a bunch of different places. If you're going to centralize it, great, use serverless for things like admin jobs, where you don't need to chunk up a bunch of data. But that's kind of the opposite. The cloud is still great, and will continue to be great, for these massive workloads. Quick response, quick instantiation, small workloads, very distributed: that's where serverless still has a place to play. I think in this case, looking back, a lot of those attributes weren't there, so it didn't make the best sense.

This is also a very good point of discussion. Whenever new technologies come up, companies get excited about them, the hype cycle: hey, yes, we should embrace this, we should embrace that. But this brings us to a very important point, which we also discussed last time when we were talking about cost and other things: don't start with the technology that's in the market. Start with what you're trying to do, and then find the right approach to solve that problem. This is, once again, a very good lesson in that.

I definitely think the way to look at it is: look at your application. So first of all, absolutely agreed on knowing what you're trying to accomplish. Number one, that should always be your guiding star, because if you're not sure, you don't know when you got there, and you don't know where "there" is. So that goes without saying. But in terms of the things to think about for workload placement: do you think about how much capacity you'll need, not just right now? And I know, to your point, whether it's a new technology or a new product, everyone's always excited to get that MVP out.
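The attributes contrasted above can be distilled into a rough decision sketch. This is just a heuristic pulled from the conversation, not any vendor's guidance, and the attribute names are assumptions chosen for illustration:

```python
def suggest_placement(stateless, latency_sensitive, small_payload, needs_distribution):
    """Rough heuristic from the discussion: serverless fits stateless,
    quick-response, small, widely distributed workloads; big stateful
    compute (like bulk video analysis) fits a centralized cloud region."""
    if stateless and latency_sensitive and small_payload and needs_distribution:
        return "serverless (distributed)"
    return "cloud region (centralized compute)"

# The Prime Video analysis workflow: large data, heavy compute, centralized.
print(suggest_placement(stateless=False, latency_sensitive=True,
                        small_payload=False, needs_distribution=False))
# -> cloud region (centralized compute)
```

Real placement decisions weigh many more factors (cost, data gravity, compliance), but checking a workload against even this short list makes the "right tool for the right job" point concrete.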
Hey, look what we prototyped, look how quickly we got it out, look at all the awesome features. Especially today, there are a lot of components you can bring together to get something out extremely quickly, especially when you have talented people, and it's extremely compelling. That is great, and you should do that. But there is a point where you need to roll into production, and the way you architect things should be a little bit different. If you opted for easy, like, okay, I'm managing my logs and my observability with functions, let's say, great. But keep in mind the architecture you start with. And I know this is hard, because it hardly ever changes, and the hardest thing is what these folks did, again, applause to them: they changed it. But keep in mind that you will probably have to change it, especially if you're successful, right? And this case is a perfect example: people started using it, and, oh, all right, we didn't expect that, not this many. So that's number one.

And then, number two: what we have to think about today is not just regions but also distribution. I think we have another dimension in our architecture. Before, we had to think about, all right, do I want it in this region, or in X, Y, Z regions? Now it's: how much do I want to distribute it? Meaning, is there a part of my workload that needs to be in hundreds of locations? That's not going to be 90 percent of your app, right? But for those particular workloads,
that's when you use your serverless; that's when you think about it that way. Just realize that now, all of a sudden, there is this other dimension, which makes our lives a lot easier: it's not just choose your provider, but choose how distributed you want to be. And then you get into the whole edge world. Do I want to go with, say, a Wavelength type of deployment because I need that for my manufacturing, or a hyperscaler edge, things like EdgeWorkers and such? So think about that dimension as well.

One of the biggest lessons we have learned is that companies also need to be more transparent. Even if they are making these kinds of changes, which actually go against their own, you know, philosophy, or against the whole market, they should be transparent, and they should be brave enough to come out and talk about it. That could also be a very good lesson. What do you have to say about that?

Absolutely agreed. I think, number one, there's no shame in it; decisions are made in context. We've all been in those meetings, whether architectural or otherwise. We're in the situation, with mostly reasonable people in that very meeting, the one I'm sure you're all thinking of as I say this, and you go: all right, I may not have agreed with it, but there was context around it. And even if there wasn't, keep in mind that a technology decision may have been made because there wasn't a reasonable option at the time. So the idea of saying, hey, we have to rip this out, shouldn't be scary, unless you don't have a reason for it. Going back to "oh, I want to try this new technology": that's not a reason. "I want to rip this out and go to a monolith because this is how we get to the next level of scale we now need to address": perfectly reasonable. "Hey, I heard this other technology is really cool and I wanted it on my resume": that should never pass the sniff test of any architecture or IT team, right?
So make changes; don't be afraid of them. If they're made for the right reasons, they will pay off despite the short-term pain. And the transparency, it's huge.

Thank you so much for joining me on such short notice to discuss this topic today, and as usual, I'd love to have you back on the show.

Thank you. Thank you.