Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. Welcome back inside the Sands as we continue our coverage here, live on theCUBE, of AWS re:Invent 2019: absolutely jam-packed aisles, great educational sessions, and one of the featured presenters now joins us as well. Dave Vellante, John Walls with Paul Savill, who's the SVP of Core Network and Technology Solutions at CenturyLink. Paul, good to see you again. Yeah, nice to see you, John. So you just finished up, right? We'll get into that in just a little bit. First off, just give me your impression of what's going on here and the energy and the vibe that you're getting. Yeah, I think it's fantastic. I mean, it's very high energy here. You know, there are a lot of new things emerging in terms of the applications that we're seeing, the use cases for the cloud. And of course, exciting stuff happening around edge compute with AWS's Outposts launch. All right, so we'll jump into the edge. I mean, everybody has a different idea, right? So yours, I mean, how do you define the edge, at least how do you see it? Yeah, we have a very simple definition of the edge. It's putting compute very close to the point of interaction. And the interaction can be with humans, or the interaction can be with devices or other electronics that need to be controlled or that need to communicate. But the point is getting that compute as close as possible, from a performance standpoint, to where it's needed. Okay, good. So we heard that a lot from Andy Jassy yesterday, right? Yeah. Bringing the compute to the data. I mean, with all due respect, he was talking about it like it was a new concept, right? We've been hearing this now for quite some time. So talk more about how you see the edge evolving. I mean, look, I give a lot of credit to Amazon, because they used to not talk about hybrid.
I predict that within a couple of years we're all talking about multi-cloud, guarantee it, right? Because that's what customers are doing. So they respond to customers. At the same time, I like their edge strategy because it's all about developers. It's about infrastructure as code at the edge. But you guys are about moving that data, or not necessarily bringing the compute to it. So how do you see the edge evolving? Yeah, so the reason this whole trend is happening is because of the new technologies that are enabling a whole new set of applications out there. Things like what's going on with artificial intelligence and machine learning, virtual reality, robotics control: those things are basically driving this need to place compute as close as possible to that point of interaction. The problem is that when you do that, costs go up. And that's the conundrum we've been in, because when compute gets housed at the customer premise, you know, in a home, in a business, in an enterprise, that's the most expensive real estate there is. And you can't get the economies of scale there. The only other choice to date has been the public cloud. And that can be hundreds or thousands of miles away. And these new applications that require really tight control and interaction can't operate in that kind of environment. And yet it's too expensive to run those applications at the very edge, at the premise itself. So that's why this middle ground is emerging: placing compute nearby, where it can serve many locations and can be housed more cost-effectively. So you've got the speed-of-light problem. So you deal with that latency by making the compute proximate to the data. But it doesn't have to be, like, right next to it. Correct. But what are we talking distance-wise? Does that have to be synchronous distance, or?
Yeah, when we think of the distance, we think about it in terms of milliseconds of delay from the edge device, the thing that needs to interact with the compute or the application. And from the customers we talk to, we have not seen any applications that really need tighter than five milliseconds of delay. Now, that's one way. So if we get into that range, placing compute within five milliseconds of the edge interaction, the device it needs to interact with, that is enough to meet the tightest requirements we've seen around robotics control, video analytics and other applications. Okay, so I could ship code to the data, but the problem is, if it needs to be real-time, there's still too much latency. And that's the problem that you're solving. That's right, okay. So that's what you were talking about: why milliseconds matter. That's right. So give me some examples, if you will, then, about why five matters more than ten, or five matters more than eight or twenty or whatever. I mean, because we're talking about such an infinitesimal difference, but yet it does matter in some respects. It does, because, so I'll give you an example of robotics, robotics control. You know, that is one of the things that requires the tightest latency, because it depends upon the robotics itself. If it's a machining tool that's working on a lathe, then that doesn't require as tight a response time to the controller as, say, a scanning device that is real-time pushing things around very fast and doing an optical read to make the decision about where it pushes the device next. That type of interaction and control requires much tighter latency performance, and that's why you start to see these ranges. But as I said, we're not seeing anything below that kind of five-millisecond range.
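As a rough sanity check on those numbers, a quick back-of-the-envelope calculation shows how far away a five-millisecond, one-way budget lets the compute sit. This sketch assumes the common rule of thumb that light in optical fiber travels at about two-thirds of its vacuum speed; real networks add switching and queuing delay on top, so practical distances are shorter.

```python
# Back-of-the-envelope: how far away can edge compute sit for a given
# one-way latency budget? Assumes signals in fiber propagate at roughly
# 2/3 the speed of light (a rule of thumb, not a measured network value).

SPEED_OF_LIGHT_KM_S = 300_000                    # ~3e5 km/s in vacuum
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3   # ~200,000 km/s in fiber

def max_distance_km(one_way_budget_ms: float) -> float:
    """Upper bound on fiber path length for a one-way latency budget."""
    return FIBER_SPEED_KM_S * (one_way_budget_ms / 1000.0)

for budget_ms in (1, 5, 10):
    print(f"{budget_ms} ms one-way -> at most ~{max_distance_km(budget_ms):,.0f} km of fiber")
# 5 ms one-way works out to at most ~1,000 km of fiber
```

So a five-millisecond budget keeps compute within roughly a thousand kilometers of fiber path at best, and in practice much closer once real equipment delays are counted, which is why a nearby metro site can work where a distant public cloud region cannot.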
So the other thing that's changing, and help me understand this, okay, so you're moving the compute closer to the data, which increases cost, and I want to understand how you're addressing that. Well, maybe one of the ways you address it is that you're bringing the cloud model, the operating model, to the data. So patches, security patches, maintenance, things like that are reduced. Is that how you're addressing cost? Yeah, that is part of it, and that's why AWS Outposts is very interesting, because it is really a complete instance of AWS in a much smaller form factor that you can deploy very close to that point of interaction, close to the customer premise. And that enables customers to leverage pretty much the full power of AWS in engaging with those devices, coding to those devices and dropping those applications close to them. Now you lose the multi-tenant aspect, is that right? Or not necessarily? Yeah, our understanding of Outposts is that it's a single-tenant device coming out of the gate, but ultimately it's going to be a multi-tenant device. Yeah, okay, so near-term, it's easier to manage, but it's multi-instance, I guess. And then over time, maybe you can share that resource. It's still not going to... The interesting thing is that even though it's a single-tenant device, there are still many great use cases, because even a single-tenant device in one market could serve multiple enterprise locations. So it still has that kind of sense of scale, because as long as it's one enterprise, it can serve many locations off of that one device. Okay, so you don't get the massive economies of scale, but you're opening up use cases that never existed before, right? That's right.
But what do you do when the data itself scales? Edge devices are creating that much more data, so all of a sudden speed becomes a little more challenging, because you're taking in a lot more information and trying to process it in different ways. Apps are feeding off of that. So all of a sudden, you have a much more complex challenge, because it's not static, right? This is a very dynamic environment. That's right, yeah. And there's a very big trend happening now, which is that data is being created at the edge and it's staying at the edge, for a whole number of reasons. You know, in the old world, you would pretty much collect data and ship it off to the centralized data center or to the public cloud to be housed there. And today, that's where 80% of data resides. But there's a big shift happening where that data now needs to reside at the deep edge, because it needs to have that fast interaction with something it's working with, or because of government regulations that are now coming in with much stricter tolerances: you have to know exactly where your data is, and it can't cross state lines, it can't get out of a certain security zone. Things like that are forcing companies now to keep that massive amount of data in a very well-understood, known, localized position. So you've got to act on it in real time. Yes. Some of it will go back to the cloud, but do you see folks persist the data at the edge, or not so much? Persistent data? Do people want to store it at the edge as well? Yeah, people want to store it at the edge where it's going to have a lot of interaction. So if you're running a chemical plant, you may not need access to a lot of data outside that chemical plant, but you're intensively analyzing that data in the chemical plant, and you don't want to ship it off someplace central a thousand miles away to be accessed from there. It needs to be acted on locally.
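The chemical-plant scenario above boils down to a simple pattern: act on the raw readings locally, keep them stored at the edge, and ship only a compact summary back to the central cloud. This is purely an illustrative sketch of that pattern; the function and field names are made up for the example, not any CenturyLink or AWS API.

```python
from statistics import mean

def process_at_edge(raw_readings, alarm_threshold):
    """Analyze raw plant data locally; return the locally stored data
    and the compact summary that is all that crosses the WAN."""
    alarms = [r for r in raw_readings if r > alarm_threshold]  # real-time, local action
    local_store = list(raw_readings)   # raw data stays on premises
    summary = {                        # only this small dict goes to the cloud
        "count": len(raw_readings),
        "mean": mean(raw_readings),
        "alarms": len(alarms),
    }
    return local_store, summary

# Hypothetical temperature readings from plant sensors
readings = [98.5, 99.0, 104.75, 97.75]
stored, to_cloud = process_at_edge(readings, alarm_threshold=100.0)
print(to_cloud)  # {'count': 4, 'mean': 100.0, 'alarms': 1}
```

The point of the sketch is the asymmetry: the full reading set persists at the edge where the latency-sensitive work happens, while only a few bytes of aggregate cross the wide-area network.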
And that's why this movement toward edge compute is really building and becoming stronger. Talk about your tech. You know, what's the real value of what you do? You're obviously reducing latencies, you've got to secure all this stuff, but maybe double-click on it. Yeah, so CenturyLink brings a number of tools to help in this whole space. So first of all, the network that we provide can tie it all together, from the enterprise location to the edge location where compute can be housed, all the way back to the public cloud core. We have a network that spans the entire U.S., fiber all over the place, and we can use those low-latency fiber-optic connections to chain those areas together in the most optimal fashion to get the kind of performance you need to handle these distributed compute environments. We also bring compute technology itself. We have our own variety of edge compute, where we can build custom edge compute solutions for customers that meet their very specific requirements and can be dedicated to them. We can incorporate AWS's compute technology as well. And we have IT services and skilled people, thousands of employees focused on this space, who build these solutions together for customers: tying together the public cloud resources, the edge compute resources, the network resources, the wireless connectivity capabilities needed on the customer premise, and the management solutions to tie it all together in that very mixed environment. We were just in a session with Teresa Carlson, who runs Public Sector for AWS. I was telling her that I sat in a session where Marty Walsh, the mayor of Boston, talked about this big smart city initiative he's got going on. I know that's one of the use cases you're working on. Maybe you talk about that a little bit, and maybe some of the other interesting use cases that you're seeing. Yeah, that's right. Smart cities are a big use case, and we are actually actively working on a number of them.
I would say that the smart city use cases tend to move very slowly, because you're talking about municipalities and a long decision-making cycle. I'll tell you, though, what we've seen. There's a 50-year plan he put forth. Yeah, that's right. But the use cases where we're really seeing the most traction, interestingly, robotics is a really big one, and video analytics is another big one. So we're actually deploying edge solutions right now in those scenarios. The robotics one is a great one, because those robotic devices need to be controlled within a really tight millisecond tolerance, but the compute needs to be housed in a much more reliable, economical location. The video analytics piece is a really interesting one that we're seeing very big demand for, because retailers have now reached the point with the technology where they can do things like figure out through video analytics whether somebody is acting suspiciously in the store. And we're hearing that they think they can now cut thievery out of retail locations dramatically by using video analytics. And when you're talking about the bottom line of a company, that makes for big savings. So those are two very good use cases we're seeing that are real today. You know, one of the other things you were talking about earlier was the disappearance of the compute divide. That's right. So where do we go? I mean, what? Yeah, well, I'll tell you what, I like to say that in the old days, you know, if you've been around long enough, like I know you're old enough, because I was a kid watching you on TV when I was a little kid. He knows that. He told us that. Way back. When we got out of college. How does that make you feel? Really old. When we got out of college, John, everything was in a mainframe, right?
Essentially, when you went to work, everything was housed centrally. Then we went to distributed, a client-server model where everybody was working on desktops, a lot of the compute was on the desktops and very little went back to a mainframe. Then we made the shift to the cloud, where we put as much in the centralized location as we can, so we shifted way back to centralized. That's the compute divide I'm talking about: that big pendulum swing from centralized to decentralized and back to centralized. Now we're actually moving to a new world where that pendulum swing, that compute divide, is disappearing, because compute isn't most economically housed in any one location. It's everywhere. It's going to be at the IoT edge. It's going to be at the premise. It's going to be in market locations. It's going to be in the public cloud core. It's going to be all around us. And that's what I mean by the disappearance of the compute divide. And you know, I want to come back on that. You talk about a pendulum. A lot of people talk about the pendulum swings, mainframe and distributed, and a lot of people say the pendulum is swinging back, but you just described it differently. It's a ubiquitous matrix now, where compute is everywhere. That's where you hear this term fog computing. The idea of the fog: now it's not the cloud that you can see off in the distance, it's just everywhere around you. And that's how we can start to think about compute. I think I first heard that term, I don't know, eight years ago. I'm like, what the heck is this? It was ahead of its time. But now it's really starting to show. I mean, this is sort of a new expansion of what we know as cloud. It's sort of redefining it. Yes, exactly. And then edge 5G, that's another big piece of it. Amazon's obviously excited about that with the thing they call Wavelength, right? What do you see for 5G? How's that going to affect this whole equation?
Yeah, I think 5G is going to have a number of edge applications, and it's primarily going to be around the mobile space. You know, the advantage of it is that it increases bandwidth and it supports mobility. And it allows for a little bit higher resilience, because carriers can take part of the spectrum and carve it out and dedicate it for particular applications. But I'll tell you, you know, 5G gets a lot of attention in terms of how edge compute is going to roll out, but we're not seeing that at all. Edge compute is available today, and we're providing those edge compute solutions through our fiber-optic networks. What we're seeing is that every enterprise we're talking to wants fiber into their enterprise location, because once you have fiber there, that's going to be the most secure, reliable and scalable solution. Fiber can effectively scale as big as any customer could ever consume the bandwidth. And they know that once they get fiber into their location, they're good for the future, because they can totally scale with that. And that's how we're deploying edge solutions today. Well, Paul, I know you've got a plane to catch and you've got to go, but after that age comment, we're going to keep you here for another hour. Thanks again. It's great to see you again. Always good to see you. Thanks, thank you. All right, Paul, hang on, hang on. We're about to say goodbye to Paul. Now we will. Our re:Invent 2019 coverage continues right here on theCUBE. All right.