Welcome back to theCUBE's coverage of AWS re:Invent 2021. This is theCUBE. We go out to the events and extract the signal from the noise. We're here at a live event, a hybrid event, with two sets, plus two remote studios prior to the event and over 100 interviews. Really excited to have George Elissaios here, Director of Product Management for EC2 Edge, a really interesting topic at AWS. George, great to see you. Thanks for coming on.

Yeah, great to be here. Thanks for having me.

Everybody's talking about edge, IoT, EC2. What's the scope of your portfolio, your responsibility?

Yeah, well, our vision here at AWS is to bring the power of the AWS platform wherever customers need it. AWS, wherever our customers want it, is our long-term vision, and we have a bunch of products in this space that help us do that, whatever the customer's use case is. So we have things like Wavelength, which I know we've talked about before here on theCUBE, where we bring full AWS services to the edge of the 5G network, 5G edge computing in partnership with telcos worldwide. Our partnership with Verizon in the US has been flourishing. We're up to, I think, 15 or more Wavelength Zones right now in many of the major cities in the US, but also in Japan and Korea, and in Europe with Vodafone. So that's one of the portfolio offerings, and it helps you as an AWS customer if you want the best latency to mobile devices, whether they're sensors or mobile phones or what have you. But we're also filling out that edge portfolio with Local Zones. Earlier today in Werner's keynote, we announced that we're going to launch another 30 Local Zones in 20 new countries, everywhere from South America, Africa, Asia, Australia, and Europe. A lot of expansion there, and we're very excited about that. Local Zones are a similar offering, but they basically bring you closer to customers in metropolitan areas over the internet.
Wavelength's a big feature, George. I want to touch on it, because latency comes up a lot in edge conversations, low-latency issues, whether it's cars or factories. You gave a demo yesterday to the press corps in the press room, I was there, where you had someone from the opera in San Francisco and someone in person here in Vegas, with 13 milliseconds going back and forth, demoing in real time the benefit of low latency across distance. I mean, it wasn't next door; it was San Francisco. This is the purpose of what edge is about. Can you explain that demo, why it was important, what you were trying to show, and what it means for the edge?

So there are multiple use cases. One of them is human collaboration, right? We've spent the last two years of our lives on teleconferences, trying to talk over each other and desperately unmuting ourselves. Existing solutions work generally for most of the things we do, but when it comes to music collaboration, where milliseconds matter, it's a lot harder with existing solutions to get artists to collaborate when they're hundreds of miles away. Last night we saw a really inspiring demo, I think, of how two top-tier musicians, one located in San Francisco and one located in Vegas, can collaborate on opera, which is one of the most precise art forms in the music world. There are no beats in opera to synchronize to, so you really need to play off each other, right?
So we provided a latency between them of less than 30 milliseconds, which, if you're thinking about audio and the speed of sound, translates to being on the same stage. That was very inspiring, but there are also a lot of machine-to-machine communication use cases where even lower latencies matter, down to single-digit milliseconds, when it comes to, for example, vehicles or robots and things like that. With our products, we're enabling customers to drive down that latency, but also the jitter, which is the variation of latency. Especially in human communication, jitter is almost more important than latency itself. Your mind can adapt to latency, and you can start predicting what's going to happen, but if I keep changing that on you, it becomes even harder.

Well, this is what I want to get to, because you've got outcomes from applications like this opera example; that's an application, I guess. Working backwards from the application is one thing, but now people are really trying to figure out what the edge is. So I have to ask you: what is AWS's edge? Is it Outposts? Wavelength? What do people buy to make the edge work?

Well, for us it's providing a breadth of services that our customers can use individually or combine. A really good example is Dish Wireless. As I'm sure you know, we're building with Dish the first mobile network in the world, a 5G mobile network, fully on the cloud. That combines Outposts and Local Zones to distribute the 5G network nationwide, and different parts of the application live in different edges, right? The Local Zone, the Outpost, and the region itself. I talked about how Local Zones are going to be in 45 cities in the world in total. We're already in 15 in the US, and we're going to do another 30.
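A quick aside on the earlier point that jitter can matter more than latency, and that 30 ms of audio delay is "like being on the same stage": both claims are easy to make concrete. A minimal Python sketch, where the round-trip-time samples are made-up illustrations, not measurements from the demo:

```python
import statistics

SPEED_OF_SOUND_M_S = 343  # approximate speed of sound in air at room temperature

def latency_and_jitter(rtt_ms):
    """Summarize round-trip-time samples (ms): mean is the latency,
    standard deviation is one common way to quantify jitter."""
    return statistics.mean(rtt_ms), statistics.stdev(rtt_ms)

def acoustic_equivalent_m(one_way_ms):
    """Distance at which sound in air takes the same one-way time."""
    return SPEED_OF_SOUND_M_S * (one_way_ms / 1000)

# Two hypothetical links with the SAME average latency:
stable   = [29, 30, 31, 30, 30]   # low jitter: musicians can adapt to it
unstable = [10, 50, 20, 45, 25]   # high jitter: much harder to play against

print(latency_and_jitter(stable))    # ~30 ms latency, well under 1 ms jitter
print(latency_and_jitter(unstable))  # ~30 ms latency, ~17 ms jitter
print(acoustic_equivalent_m(30))     # ~10 m: roughly sharing a stage
```

Both example links average 30 ms, but only the stable one is playable; and 30 ms of sound travel is about ten meters, which is why the sub-30 ms figure maps to "same stage" distances.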
But customers might still come and say, why are you not in Costa Rica? Well, we have Outposts in Costa Rica, so you could build your own offering there, or build on top of Outposts while you distribute the rest of your workload across existing AWS offerings. So to answer your question, John, there is no single answer. It's per use case and per workload that customers are going to combine these, or choose among them.

So let's go to Local Zones. Explain what a Local Zone is, real quick. I know we covered it a bit last year in the virtual event, but Local Zones are now part of the nomenclature of the AWS language. We know what a region is, right? Regions are regions. What's a Local Zone? We knew regions and we knew Availability Zones, and we were just getting comfortable with those. You've got Availability Zones, now you've got Local Zones. Take us through the topology, if you will, of how to think about this.

Right. A Local Zone is a fully managed AWS infrastructure deployment, so it's owned, managed, and operated by AWS, and because of that it offers you the same elasticity and security and all the goodies of the cloud, but it's positioned closer to your end customers or to your own deployment, in a local urban, metropolitan, or industrial center near you. If you think about the US, for example, we have a few regions on the east coast and the west coast, but now we're basically extending those regions, and we're bringing more and more services to 15 cities. So if you're in Miami, there's a Local Zone there. If you're in LA, there are actually two Local Zones. That enables customers to run two different types of workloads. One is these distributed cloud, or distributed edge, workloads that we've been hearing more and more about. Think of gaming, for example. We have customers like Supercell that need to be closer to the gamers wherever they are.
So they're going to use a bunch of Local Zones to deploy. We also have hyper-local use cases, where we're talking, for example, about Netflix enabling their creative artists in LA to connect locally and get latencies as low as a single millisecond. So a Local Zone is like an Availability Zone, but closer to you. It offers the same scalability, the same elasticity, the same security, and the same services as the AWS cloud, and it connects back to the regions to offer you the full breadth of the platform.

So just to clarify, the edge strategy essentially is to bring the cloud, the primitives, the APIs, to where the customers are, in instances where they either can't move or won't move their resources into the cloud, or where there's no connectivity.

Right. We have a bunch of use cases where customers need to be there, either because of regulation or because of data gravity: data is being generated in a specific place and you need to process it locally. And we have customers in the distributed use case. But you're pointing out a very important thing, a common factor across all these offerings: it is the cloud. It's not a copycat of the cloud. It's the same APIs, the same services that you already know and use, et cetera. So extending the cloud, rather than copying it around, is our vision. And connectivity obviously needs to be there; we're offering AWS Private 5G, which we talked about yesterday.

Now, a premise that we've had is that a lot of edge use cases will be driven by AI inferencing. So first of all, is that a reasonable premise? We think it's growing very quickly and has huge potential. And if that's a correct premise, what does the compute look like for that type of work?
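To make the "same APIs" point above concrete: targeting a Local Zone is just a placement choice in the standard EC2 launch request, not a separate product API. A minimal sketch; the RTT numbers are hypothetical, and the request dict mirrors the shape of EC2 `RunInstances` parameters rather than calling AWS:

```python
def nearest_zone(rtt_ms_by_zone):
    """Pick the zone with the lowest measured round-trip time."""
    return min(rtt_ms_by_zone, key=rtt_ms_by_zone.get)

def run_instances_request(zone, instance_type="t3.medium"):
    """Build launch parameters in the shape EC2 RunInstances expects;
    only the Placement differs from an ordinary in-region launch."""
    return {
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "Placement": {"AvailabilityZone": zone},
    }

measured = {  # hypothetical client-side RTT measurements, ms
    "us-west-2a": 68,         # an Availability Zone in the parent region
    "us-west-2-lax-1a": 4,    # Los Angeles Local Zone
    "us-east-1-mia-1a": 95,   # Miami Local Zone
}

zone = nearest_zone(measured)
print(zone)                          # us-west-2-lax-1a
print(run_instances_request(zone))
```

The interesting part is what isn't there: no edge-specific SDK, just a different zone name in the same call you'd use in the region.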
It's a correct premise, and that's why we think the model we're offering is so powerful: you have the edge and the cloud fully cooperating and connected together. The edge is a more limited resource than the full cloud in the AWS region. So when you're doing inferencing, what you really want to do is train your models back in the region, where you get the most scalability and the best prices, the full scale of AWS. But the latency-sensitive parts of your application, you push to the edge. So the actual real-time inferencing, not the training of the models, you push to the edge. If your connectivity is 5G, you can push that into a Wavelength Zone. If your connectivity is wired, you can push it into a Local Zone. If you really need it to be in your data center, you can push it onto your Outposts. So you can see how we're building out for all of those use cases.

But in those instances, I'm interested in what the compute looks like. I presume it's got to be low power, low cost, super high performance, all of those things that are good for data-driven workloads.

Right. The powerful thing here is that it's the same compute you know and love in the cloud: the same EC2 instance types, the EBS volumes, S3 for storage, RDS for your databases, EMR clusters. You can use the same services, and the compute is the same, all the way down from the hardware up to the services.

And is the promise to customers that eventually they get all of those services? It's not all of them today, right? I mean, you've got Outposts today, and it continues to grow.

It's continuing to grow, yeah.

So conceptually, as many services as you could possibly push to the edge, you intend to do so?

We're pushing services according to customer requests, but there's also a nuance here.
The nuance is that you push down the services that are truly latency-sensitive, right? You don't need to push everything down to the edge when you're talking about latency requirements.

What's an example of what you wouldn't push down?

Management tools, right? When you're doing monitoring and management, you don't need those to be at the edge; you can do and scale that from the region. Or batch processing: it doesn't have to be at the edge because, by definition, it's not an online, latency-sensitive service. So we're keeping those, AWS Batch for example, in the region, because that's where customers really use them. But things like EC2, EBS, and EMR, we're pushing to the edge, because those are more latency-sensitive.

We've got two minutes left. I want to get the Outposts update. I remember when Outposts launched; it was a really seminal moment for re:Invent: hybrid. Andy Jassy said hybrid, after saying he'd never say hybrid. Now hybrid has kind of translated into all-cloud operations, and you've got Local Zones. A lot's changed at Amazon Web Services since Outposts launched: Local Zones, 5G, Dish. So what's the status of Outposts? Are you guys happy with it? What has it morphed into? Is it still the same game? What is Outposts today, vis-a-vis what people may think it is or isn't?

Yeah, we've been focusing on what we've talked about: building out the number of services that customers request, but also being in more and more places. I think we're now in more than 60 countries with Outposts. We've seen very good adoption, and we've seen very good feedback.
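The placement logic George walks through, train and batch in the region, real-time inference at whichever edge matches your connectivity, can be sketched as a simple decision function. The workload fields and tier names below are illustrative assumptions, not an AWS API:

```python
def place(workload):
    """Return where a workload runs, per the edge/region split described:
    only truly latency-sensitive work goes to the edge."""
    if not workload["latency_sensitive"]:
        return "region"            # e.g. model training, batch, monitoring
    if workload.get("must_stay_on_prem"):
        return "outposts"          # your own data center
    if workload["connectivity"] == "5g":
        return "wavelength_zone"   # edge of the 5G network
    return "local_zone"            # wired users in a metro area

print(place({"name": "model-training", "latency_sensitive": False}))
# region
print(place({"name": "inference", "latency_sensitive": True, "connectivity": "5g"}))
# wavelength_zone
print(place({"name": "inference", "latency_sensitive": True, "connectivity": "wired"}))
# local_zone
```

The first check is the nuance itself: a workload that isn't latency-sensitive never reaches the edge branches at all.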
Half of my EBCs have been on Outposts. But this year, I think one of the most exciting announcements was the Outposts servers, the smaller form factors that enable additional use cases, like retail, for example, or even building out 5G networks, where one of our partners, Mavenir, is moving the 5G core, the smarts of the network that does all the routing, onto Outposts servers, so you can distribute those all over the place. So we're keeping up the innovation, we're keeping up the expansion, and we've been getting very good customer feedback.

Full steam ahead.

Full steam ahead, plus 10%.

All right. Thank you so much, George. Really appreciate it. We're seeing the cloud expand; the definition is growing.

Kind of like the universe, John.

Dave Vellante for John Furrier. You're watching theCUBE at AWS re:Invent, the global leader in high-tech coverage. We'll be right back.