From around the globe, it's theCUBE, presenting CUBE on Cloud, brought to you by SiliconANGLE. Welcome back to the live segment of theCUBE on Cloud. I'm Dave Vellante with my co-host, John Furrier. John Roese is here. He's the global CTO at Dell Technologies. John, great to see you as always. Really appreciate it. Absolutely, good to be here. Hey, so we're going to talk edge. You know, the edge, it's estimated, is a multi-trillion dollar opportunity, but it's highly fragmented and very complex. I mean, it comprises everything from autonomous vehicles, windmills, even retail stores, to outer space. And so it brings in a lot of really gnarly technical issues that we want to pick your brain on. But let me start with just, what to you is edge? How do you think about it? Yeah, I mean, I've been saying for a while that edge is when you reconstitute IT back out in the real world. You know, for 10 years, we've been sucking IT out of the real world, taking it out of factories. You know, nobody has an email server under their desk anymore. And that was because we could put it in data centers and public clouds. And you know, that's been a good journey. And then we realized, wait a minute, all the data actually is being created out in the real world. And a lot of the actions that have to come from that data have to happen in real time in the real world. And so we realized we actually had to reconstitute an IT capacity out near where the data is created, consumed and utilized. And you know, that turns out to be smart cities, smart factories; we're dealing with military organizations which are saying, how do you put edges into war-fighting theaters or first-responder environments? It's really anywhere that data exists that needs to be processed and understood and acted on that isn't in a data center. And so it's kind of one of these things: defining edge is easier by defining what it isn't.
It's anywhere that you're going to have IT capacity that isn't aggregated into a public or private cloud data center. And that seems to be the answer: follow the data. And so the big issue of course is latency. People are saying, well, for some applications or some use cases, like autonomous vehicles, you have to make the decision locally. Others you can send back and process centrally. Is there some kind of magic algorithm that technical people use to figure out what the right approach is? Yeah, the good news is math still works. And we spent a lot of time thinking about why you build an edge. Not all things belong at the edge; let's just get that out of the way. And so we started thinking, well, what does belong at the edge? And it turns out there are four things. First, if you need real-time responsiveness in the full closed loop of processing data, you might want to put it at an edge. But then you have to define real-time, and real-time varies. Real-time might be one millisecond. It might be 30 milliseconds. It might be 50 milliseconds. And it turns out that if it's 50 milliseconds, you probably can do that in a co-located data center pretty far away from those devices. One millisecond, you'd better be doing it on the device itself. And so the latency around real-time processing matters. And the other reasons, interestingly enough, to do edge actually don't have to do with real-time processing. The second is that there's so much data being created at the edge that if you just flow it all the way across the internet, you'll overwhelm the internet. So we have a need to pre-process and post-process data and control the flow across the world. The third one is the IT/OT boundary that we all know. That was the IoT thing that we were dealing with for a long time.
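The latency thresholds John walks through can be sketched as a simple placement heuristic. The tier names and exact cutoffs below are illustrative assumptions for this sketch, not Dell's actual taxonomy:

```python
def place_workload(latency_budget_ms: float) -> str:
    """Pick a placement tier for a closed-loop workload based on its
    end-to-end latency budget.

    Thresholds follow the examples in the discussion: ~1 ms forces
    on-device processing, while ~50 ms can be served from a co-located
    data center pretty far away from the devices.
    """
    if latency_budget_ms <= 1:
        return "on-device"          # e.g. an autonomous-vehicle control loop
    elif latency_budget_ms <= 30:
        return "near-edge"          # edge node close to the sensors
    elif latency_budget_ms <= 50:
        return "colo-data-center"   # co-located facility, farther away
    else:
        return "cloud"              # latency-tolerant: send it upstream

print(place_workload(0.5))   # on-device
print(place_workload(45))    # colo-data-center
```

The point of the heuristic is that latency is only the first of the four reasons; data volume, the IT/OT boundary, and security would each add their own tests in a real placement decision.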
And the fourth, which is the fascinating one, is that it's actually a place where you might want to inject your security boundaries, because security tends to be a huge problem in connected things. They're kind of dumb and kind of simple and kind of exposed, and if you protect them from the other end of the internet, the surface area you're protecting is enormous. So there's a big shift that basically moves security functions to the edge. I think Gartner made up a term for it called SASE, the secure access service edge. But these are the four big ones. We've actually tested that for probably about a year with customers, and it seems to hold. If it's one of those four things, you might want to think about an edge; if it isn't, it probably doesn't belong at an edge. John, I want to get your thoughts on that point. The security thing's huge. We talked about that last time at Dell Tech World when we did an interview with you on theCUBE. But now look at what's happened over the past few months. We've been doing a lot of investigative reporting here at SiliconANGLE on the notion of misinformation. Not just fake news, which everyone talks about with the election, but misinformation as a vulnerability, because you now have edge devices that need to be secured, and I can send misinformation to those devices. So, you know, fake news could be fake data. Say, hey, Tesla, drive off the road, or do this or the other thing. So you've got to have the vulnerabilities looked at, and it could be everything. Data is one of them. Latency, security, is there a chip on the device? Could you share your vision on how you see that being handled? Because that's a huge problem. Yeah, this is a big deal, because what you're describing is the fact that if data is everything, the flow of data ultimately turns into the flow of information, then knowledge and wisdom and action.
And if you pollute the data, if you can compromise it at the most rudimentary levels by, I don't know, putting bad data into a sensor or tricking the sensor, which lots of people can do, or simulating a sensor, you can actually distort things like AI algorithms. You can introduce bias into them, and that's a real problem. The solution isn't making the sensors smarter. There's this weird catch-22 when you sensorize the world: you have a finite amount of power and budget, and making sensors fatter and more complex is actually the wrong direction. So edges have materialized from that security dimension as an interesting augment to those connected things. And so imagine a world where your sensor is creating data, and maybe you have hundreds or thousands of sensors that are flowing into an edge compute layer. And the edge compute layer isn't just aggregating it, it's putting context on it. It's metadata that it's adding to the system, saying, hey, that particular stream of telemetry came from this device, and I'm watching that device, and I can score it and understand whether it's been compromised, whether it's trustworthy, or whether it's a risky device. And as that all flows into the metadata world, you get an overall understanding of not just the data itself but where did it come from? Is it likely to be trustworthy? Should you score it higher or lower in your neural net to basically weight your algorithm? These kinds of things are really sophisticated and powerful tools to protect against this kind of injection of false information at the sensor. But you could never do that at a sensor. You have to do it in a place that has more compute capacity and is more able to enrich the data and enhance it.
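The enrichment step John describes, where an edge node wraps raw telemetry in provenance and trust metadata before it flows upstream, might look something like this. The field names, risk table, and scoring rule are illustrative assumptions for the sketch, not any actual Dell product behavior:

```python
import time

# Hypothetical per-device risk assessments maintained by the edge node,
# e.g. from attestation checks or behavioral monitoring.
DEVICE_RISK = {
    "sensor-17": 0.1,   # well-behaved, recently attested
    "sensor-42": 0.9,   # anomalous behavior observed
}

def enrich(device_id: str, reading: float) -> dict:
    """Wrap a raw sensor reading in provenance metadata and a trust
    score, so downstream consumers (e.g. model training) can
    down-weight or reject risky sources."""
    risk = DEVICE_RISK.get(device_id, 0.5)    # unknown devices get middling risk
    return {
        "device_id": device_id,
        "value": reading,
        "ingested_at": time.time(),
        "trust_score": round(1.0 - risk, 2),  # 1.0 = fully trusted
        "quarantine": risk > 0.8,             # flag for exclusion upstream
    }

record = enrich("sensor-42", 21.7)
print(record["trust_score"], record["quarantine"])
```

The sensor itself stays dumb; all of the scoring logic lives on the edge node, which has the compute capacity (and the hardware root of trust) that a $5 part never will.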
So that's why we think edges are important in that fourth characteristic: they aren't the security system of the sensor itself, but they're the way to make sure that there's integrity in the sensorized world before it reaches the internet, before it reaches the cloud data centers. So access to that metadata is critical, and it's got to be near real time, if not real time, right? Yeah, absolutely. And the important thing is, well, I'll tell you this: if you haven't figured it out by looking at cybersecurity issues, compromising the authoritative metadata is a really good compromise. If you can get that, you can manipulate things at a scale you've never imagined. Well, in this case, the metadata is actually authoritatively controlled by the edge node. The edge node is processing it and determining whether or not it's trustworthy. Those edge nodes are not $5 parts; they're servers, they're higher-end systems, and you can inject a lot more sophisticated security technology into them. You can have a hardware root of trust, you can have more advanced PKI in it, you can have AI engines watching its behavior. And again, you'd never do that at a sensor. But if you do it at the first step into the overall data pipeline, which is really where the edge is materializing, you can do much more sophisticated things to the data, and you can also protect that thing at a level you'd never be able to protect a smart light bulb or a thermostat in your house. Yes, so John, give us the playbook on how you see the evolution of this market. Also, these are key foundational things as distributed networks and IoT turn into industrial IoT and vice versa. As software becomes critical, what is the programming model to build the modern applications? This is something I know you've thought about; I've talked to Michael Dell about this on theCUBE, and everyone in your company as well as everyone else. It's software-defined everything these days, right?
So what is the software framework? How do people code on this? What's the application-aware viewpoint on this? Yeah, and this is, unfortunately, a very complex area that's got a lot of dimensions to it. Let me walk you through a couple of them in terms of what the software framework for the edge is. The first is that we have to separate edge platforms from the actual edge workload. Today, too many of the edge dialogues are about this amorphous blob of code running on an appliance, and we call that an edge. And the reality is that thing is actually doing two things. It's a platform of compute out in the real world, and it's some kind of extension of the cloud data pipeline or the cloud operating model, instantiated in software, probably as containerized code sitting on that edge platform. Our first principle about the software world is we have to separate those two things. You do not build your edge platform commingled with the thing that runs on it. That's like building your app into the OS; you keep those separate, the way you keep user space and the kernel separate. We have to start to enforce that discipline in the software model at the edge as a first principle. The second is we have to recognize that edges are probably best implemented in ways that don't require a lot of human intervention. Humans are bad when it comes to really complex distributed systems. And so what we're finding is that most of the code being pushed into production benefits from using things like Kubernetes or container orchestration, or even functional frameworks like the serverless FaaS-type models, because those low-code architectures generally are interfaced with via APIs through CI/CD pipelines, without a lot of human touch. And it turns out those actually work reasonably well, because when you look at edges in production, the code actually doesn't change very often. They kind of do singular things relatively well over a period of time.
And if you can make that a fully automated function by basically taking all the human intervention away from it, and if you can program it through low-code interfaces or through automated interfaces, you take a lot of the risk out of the human intervention piece of this type of environment. We all know that most of the errors and conditions that break things are not because the technology fails; it's because a human being touches it. So in the software paradigm, we're big fans of more modern software paradigms that have a lot less touch from human beings and a lot more automation being applied to the edge. The last thing I'll leave you with, though, is we do have a problem with some of the edge software architectures today, because what happened early in the IoT world is people invented kind of new edge software platforms, and we were involved in these: EdgeX Foundry, MobiledgeX, Akraino. And those were very important, because they gave you a set of functions and capabilities at the edge that you kind of needed in the early days. Our long-term vision, though, for edge software is that it really needs to be the same code base we're using in data centers and public clouds. It needs to be the same cloud stack, the same orchestration level, the same automation level, because what you're really doing at the edge is not something bespoke. You're taking a piece of your data pipeline and pushing it to the edge, and the other pieces are living in private data centers and public clouds, and you'd like them all operated in the same framework. So we're big believers in pushing Kubernetes orchestration all the way to the edge, pushing the same FaaS layer all the way to the edge. Don't create a bespoke world at the edge; make it an extension of the multi-cloud software framework. Even though the underlying hardware might change, the microprocessor or GPU or whatever it is, yeah.
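John's "same stack everywhere" principle implies writing the workload against a platform-neutral contract, so the identical code runs at the edge and in the cloud. A minimal FaaS-style sketch of that separation; the handler signature and the stand-in runner are illustrative assumptions, not any specific framework's API:

```python
# Workload: pure, platform-agnostic logic. It knows nothing about where
# it runs (edge node, private data center, public cloud).
def handler(event: dict) -> dict:
    reading = event["value"]
    return {"alert": reading > 100.0, "value": reading}

# Platform: a stand-in for the orchestration layer (e.g. Kubernetes plus
# a FaaS runtime) that invokes the same handler at every tier.
def run_platform(site: str, events: list) -> list:
    print(f"[{site}] invoking handler for {len(events)} events")
    return [handler(e) for e in events]

# The same containerized workload, operated in the same framework,
# at two different tiers of the data pipeline.
edge_results = run_platform("edge", [{"value": 120.0}])
cloud_results = run_platform("cloud", [{"value": 20.0}])
```

The design point is that only `run_platform` (the platform) would differ between tiers; `handler` (the workload) ships unchanged, which is what keeps the edge from becoming a bespoke world.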
By the way, that's a really good reason to use these modern frameworks, because it's the world of heterogeneous compute, where it's not always x86 underneath. Programming down at the OS level in traditional languages has an awful lot of hardware dependencies, and we need to separate that, because we're going to have a lot of ARM, a lot of accelerators, a lot of GPUs, a lot of other stuff out there. And so the software has to be modern and able to support heterogeneous compute, which a lot of these new frameworks do quite well. Well, John, thanks so much for coming on and spending some time with us; you're always a great guest. I really appreciate it. Yeah, great to be here. Great stuff. We have a technical edge room ongoing, Dave. This is going to be a great topic. It's a clubhouse room for us. We'll do a technical edge session every time. Really valuable. Thanks again, John. John Roese, appreciate it. Okay, so now we're going to move to the second part of our technical edge discussion. Chris Wolf is here. He leads the advanced architecture group at VMware, and that really means Chris looks at, I think, three years out; that's kind of his time horizon. So, advanced architecture. Really excited to have you here, Chris. Can you hear us okay? Sure can. Great to see you again. Great to see you. Thanks for coming on. I really appreciate it. Awesome. So we're talking about the edge. We're talking about the things that you see. We set it up as a multi-trillion dollar opportunity. It's defined all over the place. We joke, it could be a windmill. It could be a retail store. It could be something in outer space. It's however it's defined: a factory, a military installation, et cetera. How do you look at the edge, and how do you think about the technical evolution? Yeah, I think it was interesting listening to John, and I would say we're very well aligned there.
We also would see the edge as really the place where data is created, processed and/or consumed. And I think what's interesting here is that you have a number of challenges, and edges are different. So like John was talking about Kubernetes, there are multiple different Kubernetes open source projects trying to address these different edge use cases, whether it's K3s or KubeEdge or OpenYurt or SuperEdge, and I mean, the list goes on and on. And the reason you see this crowd of projects comes down to multiple things. You have a platform that's not really designed to support edge computing; Kubernetes was designed for data center infrastructure first. And then you have these different environments, where you have some edge sites that have connectivity to the cloud, and you have some edge sites that simply don't, right? Whether it's an oil rig or a cruise ship, you have all these different use cases. So what we're seeing is you can't just say, this is our edge platform, go consume it, because it won't work. You actually have to have multiple flavors of your edge platform and decide what you should bring to market first. Chris, I want to ask you, and it's great to have you on. We've had many chats on theCUBE back when we would actually go to events and be on the ground, but I really appreciate you coming in to our first virtual editorial event. We'll be doing more of these; this is our software we're putting to work, kind of a clubhouse model. We get these talks going and make them really valuable. But this one is important, because one of the things that's come up all day, and we kind of introduced it earlier and keep coming back to, is the standardization and openness of how open source is going to extend out this interoperability kind of vibe. And then the second theme is, and we were kind of talking about the OSI stack, you can throw back to the old days, talking about Kubernetes as a nice layer.
But then also, what is going to be the programming model for modern applications, with the edge being obviously a key part of it? What's your take on that vision? Because that's a complex area. Certainly a lot of software still to be written, and some software that needs to be written today as well. So what's your view on how you program for the edge? Yeah, it's a great question, John. And I would say with COVID, we have seen some examples of organizations that have been successful because they had already built an edge for the expectation of change. So when you have a truly software-defined edge, you can make some of these rapid pivots quite quickly. An example was Vanderbilt University, which had to put 1,000 hospital beds in a parking garage, and they needed dynamic network and security to be able to accommodate that. We had a lab testing company that had to roll out 400 testing sites in a matter of weeks. So first and foremost, thinking about edge agility as being defined by the speed of software, how quickly I can push updates, how quickly I can transform my application posture or my security posture in light of these types of events, is super important. Now, if we walk that back to your point on open source, we see open source as really the key enabler for driving edge innovation and driving an ISV ecosystem around that edge innovation. We mentioned Kubernetes, but there are other really important projects where we're already seeing strong traction at the edge. Projects such as EdgeX Foundry are seeing significant growth in China. And at its core, EdgeX Foundry was about giving you a PaaS for some of your IoT apps and services. Another one that's quite interesting is the open source FATE project in the Linux Foundation.
And FATE is really addressing ML at the edge through a federated ML model, which we think is going to be the long-term dominant model for localized machine learning training as we continue to see massive scale-out to these edge sites. Right, so I wonder if you could pick up on that. I mean, in thinking about AI inferencing at the edge, how do you see that evolving? Maybe we could double-click on the architecture that you guys see progressing. Yeah, right now we're doing some really good work, as I mentioned, with the FATE project; we're one of the key contributors to the project today. We see that you need to expand the breadth of contributors to these types of projects, for starters. What we've seen is that sometimes the early momentum starts in China, because there is a lot of edge innovation there, and then it starts to be pulled a bit further west. So when you look at federated learning, we do believe that the emergence of 5G doesn't really help you to centralize data; it really creates more opportunity to put more data in more places. So that's the first challenge that you have. But then when you look at federated learning in general, I'd say there are two challenges that we still have to overcome. Organizations that have very sophisticated data science practices are really well versed here, and I'd say they're at the forefront of some of these innovations, but that's 1% of enterprises today. We have to start looking at solutions for the 99% of enterprises. And I'd say even VMware partners, such as Microsoft with Azure Cognitive Services, as an example, have been addressing ML for that 99%. I see that as a positive development. When you look at the open source community, it's one thing to build a platform, right? We love to talk about platforms; that's the easy part. But it's the apps that run on that platform and the services that run on that platform that drive adoption.
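The federated model Chris describes, as in FATE, keeps training data local to each edge site and ships only model updates upstream, where a coordinator aggregates them. A toy sketch of one federated averaging round on a linear model; this is a generic illustration of the pattern, not FATE's actual API:

```python
# Each edge site trains locally and ships only model weights upstream;
# raw data never leaves the site. A coordinator averages the weights.

def local_update(weights, local_data, lr=0.01):
    """One pass of gradient descent on a linear model y = w0 + w1*x,
    using only this site's private data."""
    w0, w1 = weights
    for x, y in local_data:
        err = (w0 + w1 * x) - y
        w0 -= lr * err
        w1 -= lr * err * x
    return [w0, w1]

def federated_average(site_weights):
    """Coordinator step: average each parameter across sites."""
    n = len(site_weights)
    return [sum(w[i] for w in site_weights) / n for i in range(2)]

# Two edge sites with private data; one federated round.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0)]
global_w = [0.0, 0.0]
updates = [local_update(global_w, d) for d in (site_a, site_b)]
global_w = federated_average(updates)
print(global_w)
```

In a real deployment the aggregation would typically be weighted by site data volume and protected cryptographically, but the data-stays-local structure is the essential property for the edge.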
So the work that we're incubating in the VMware CTO office is not just about building platforms; it's about building the applications that are needed by, say, that 99% of enterprises to drive that adoption. So if you carry that through, I infer from that, Chris, that the developers are ultimately going to kind of win the edge, or define the edge. How do you see that from their perspective? Yeah, the way I like to look at this, I like to call it pragmatic DevOps, where the winning formula is actually giving the developer the core services that they need using the native tools and the native APIs that they prefer. And that is predominantly open source, along with some cloud services as they start to come to the edge as well. But then beyond that, there's no reason that IT operations can't have the tools that they prefer to use as well. So we see this coming together of two worlds, where IT operations has to think far differently about edge computing. It's not enough to assume that IT has full control of all of these different devices and sensors and things that exist at the edge; it doesn't happen. Oftentimes it's the lines of business that are directly deploying these types of infrastructure solutions, or application services is a better phrase, and connecting them to the networks at the edge. So what does this mean? From an IT operations perspective, we need to have dynamic discovery capabilities and more policy and automation that allow the developers to have the velocity they want but still keep that consistency of security, agility, networking, and all of the other hard stuff that somebody has to solve. And then you can have the best of both worlds here. So if Amazon turned the data center into an API, and then the traditional vendors caught up, or are catching up, trying to do the same on-prem, is the edge one big API? Is it coming from the cloud? Is it coming from the on-prem world? How do you see that evolving? Yes.
That's the question, and the bets are on. But it doesn't have to be exclusively one way or the other. The VMware perspective is that we can have a consistent platform for open source and a consistent platform for cloud services. And I think the key here is this: if you look at the partnerships we've been driving, we've onboarded Amazon RDS onto our platform. We announced a tech preview of Azure Arc SQL Database as a service on our platform as well, in addition to everything we're doing with open source. So the way we're looking at this is, you don't want to make a bet on an edge appliance with one cloud provider, because what happens if you have a business partner that says, I'm aligned to Google, or I'm aligned to AWS, or I want to use this open source? Our philosophy is to virtualize the edge so that software can dictate an organization's velocity at the end of the day. Yeah, so Chris, come on. You were an analyst at Gartner; you know us. Everything's a zero-sum game, but life is not like that, right? I mean, there's so much of an incremental opportunity, especially at the edge. I mean, the numbers are mind-boggling when you look at it. I agree wholeheartedly. And I think you're seeing a maturity in the vendor landscape too, where we know we can't solve all the problems ourselves, and nobody can. So we have to partner, and, to your earlier point on APIs, we have to build external interfaces into our platforms to make it very easy for customers to have choice around ISV vendors, partners, and so on. So Chris, I've got to ask you, since you run the advanced technology group and are in charge of what's going on there, will there be a shift in focus, more shifts at the edge, with Pat Gelsinger going over to Intel? All kidding aside, Pat's leaving is big news around VMware. I saw some of your tweets; you laid out a nice tribute to Pat. That's going to be cool.
Pat's going to be at Intel, maybe doing some more advanced stuff there. I think Pat's staying on the VMware board, and to me, really, think about it: Pat was part of the team that brought us the x86, right? And to come back to Intel as the CEO, it's really the perfect bookend to his career. So we're really sad to see him go, but we can't blame him. Of course, it's a nice chapter for Pat, so we totally understand that. And prior to Pat going to Intel, we announced major partnerships with NVIDIA last year, and we've been doing a lot of work with ARM. So to us, again, we see all of this as opportunity. And a lot of the advanced development projects we're running right now in the CTO office are about expanding that ecosystem in terms of how vendors can participate, whether you're running an application on ARM, whether it's running on x86 or whatever it's running on, and what comes next, including a variety of hardware accelerators. So is that really irrelevant to you? I mean, you heard John Roese talk about that; it's all containerized. As a technologist, is it truly irrelevant what processor is underneath and what underlying hardware architecture is there? Is that a myth? No, it's not. And it's funny, right? Because we always want to say these things, like, well, it's just a commodity. But it's not, because then we'd be asking hardware vendors to pack up their balls and go home, since there'd be nothing left to do. And we're actually seeing quite the opposite, where there's this emergence and variety of so many hardware accelerators. So even from an innovation perspective for us, we're looking at ways to increase the velocity by which organizations can take advantage of these different specialized hardware components, because that's going to continue to increase. But the real key is to make it seamless, so that an application can take advantage of these benefits without having to go out and buy all of this different hardware on a per-application basis.
But if you do make bets, you can optimize for that architecture, true or not? I mean, our estimate is that the number of wafers coming out for ARM-based platforms is 10x that of x86. And so it appears that, from a cost standpoint, there are some real hard decisions to make, or maybe they're easy decisions, I don't know. But you have to make bets, do you not, as a technologist, and try to optimize for one of those architectures, even though you have to hedge those bets? Yeah, we do. It really boils down to use cases and seeing what you need for a particular use case. Like you mentioned ARM: there's a lot of ARM out at the edge and on smaller form-factor devices, not so much in the traditional enterprise data center today. So our bets and a lot of the focus there have been on those types of devices. And again, it's really about timing, right? The customer demand versus when we need to make a particular move from an innovation perspective. That's my final question for you as we wrap up our day here with a great CUBE on Cloud day. What are the most important stories in the cloud tech world, edge and/or cloud, that you think people should be paying attention to, that will matter most to them over the next few years? Wow, that's a huge question. How much time do we have? Not enough, okay. Is it architectural things they've got to focus on? I mean, a lot of people are looking at this COVID thing and saying, I've got to come out with a growth strategy. Obviously there are some clear, obvious things to see. Cloud native. Yeah, let me break it down this way. I think the most important thing people have to focus on is deciding, when they build their architectures, what is the reliance on native cloud services, the more proprietary services, versus open source technologies such as Kubernetes and the ISV ecosystem around Kubernetes.
One is an investment in flexibility and control over your management and your intellectual property, where maybe I'm building this application in the cloud today, but tomorrow I have to run it out at the edge, or I do an acquisition that I just wasn't expecting, or I simply don't know. Sure, I hope that COVID doesn't come around again, or something like it, right? As we get past this and navigate this today, architecting for the expectation of change is really important, and so is having flexibility around your intellectual property, including the flexibility to deploy and run on different clouds, especially as you build up your different partnerships. That's really key. So build the discipline to say, you know what? This is database as a service. It's never going to define who I am as a business. It's something I have to do as an IT organization, so I'm consuming it from the cloud. But this part of the application stack defines who I am as a business, so my app dev team is building it with Kubernetes, and I'm going to maintain more flexibility around that intellectual property. The strategic discipline to operate this way just hasn't gotten there yet among many of our enterprise customers, but I think that's going to be a key inflection point as we start to see these hybrid architectures continue to mature. Hey, Chris, great stuff. Man, really appreciate you coming on theCUBE and participating in theCUBE on Cloud. Thank you for your perspectives. Great. Thank you very much; always a pleasure. All right, great to see you. And thank you everybody for watching. This ends theCUBE on Cloud with Dave Vellante and John Furrier. All these sessions are going to be available on demand. All the write-ups will hit siliconangle.com, so check that out; we'll have links up there. I really appreciate you attending our first virtual editorial event. Again, this is Dave Vellante, for John Furrier and the entire CUBE and CUBE on Cloud team, CUBE 365. Thanks for watching.