One of my favorite things to do at the end of Summit is to spend some time talking a little bit about where we're going in the future and figuring out what the future looks like. And so today I'm going to bring up John Roese, the global CTO for Dell EMC, who is also the chairman of the Cloud Foundry Foundation and has been part of this since the very beginning. So it's always fun to sit down with John and talk a little bit about where he sees the future going. So come on up on stage, John. Okay, now I get the music. Now you get the music, yes. Just for context, my brother was the production manager for Blondie for ten years. So I've been to more Blondie shows than probably anybody except maybe the band. See, I knew you would get the joke, and at least somebody gets my jokes. See, good joke, Abby, this is good, you're improving. I heard you had a few duds, but that was great. I did. Nobody likes my Facebook jokes at all. Okay. We'll work on it. We'll work on it. But you know, I will say that we had you on stage at Summit North America last year, and our conversation was actually one of the highest-rated keynotes, because it turns out a lot of people are really interested in where the future is going. And I can think of no one better to sit down with and talk a little bit about the future beyond just cloud and really what we're immersed in here: the future of all of this new cool tech, and you are sitting at the apex of that across the board. You talked this week about a lot of different trends, but what are some of your favorite trends that you're seeing right now? Yeah, yeah. Standard disclaimer: anything predicting the future is largely going to be incorrect, at least in terms of the date. So just take that with a grain of salt. But the reality is, I mean, Cloud Foundry is an instantiation of something that we predicted well before Cloud Foundry existed.
We knew that code would get recomposed in interesting ways. We knew that we had to have cloud portability. That was predicted before it happened, and most people said it's kind of obvious, it's a necessary element. Where we are today, I mean, obviously things change, but there are big ones that we're paying attention to, and these are provocative. Probably the biggest one is that for the last, not quite a decade, we've been under the working hypothesis that the future is about centralization of compute, storage, IT. Everything is going to aggregate into large clumps, whether they be large public clouds or large data centers, and everything else would disappear. We would suck it all into these large entities and get the benefits of scale and efficiency and letting somebody else do some of the work. And magic. And magic would happen. And to be perfectly honest, we've been living that, and that's been a good trend. It's allowed people to focus on, I don't know, building great applications and not necessarily dealing with all of the complexity of infrastructure modernization. However, first of all, it didn't suck everything in. There's plenty of stuff that isn't in those entities. But the more important trend that we're seeing, and this is only maybe a year old, is a trend toward decentralization. This idea that we are going to fragment, or actually spread, the IT infrastructure back out away from these central entities. The entities are going to be very important; they're gravitational centers for ecosystems. But because of speed-of-light issues, because of the need to move to real time, because of the sheer volume of data that's being created, we're starting to see new trends like edge computing materialize, which are not just about randomly putting compute at the edge. It's about extending the compute framework so that code that should run, or data that should be processed or handled, close to the entity that uses it is part of the overall IT framework.
And I describe that as countercyclical. We've been in a cycle where everything was moving toward centralization, and now we're seeing that reverse. This year at Mobile World Congress, for instance, the big theme (I go there just to figure out what the big theme in telecom is) was every mobile operator, every industrial company, every automotive company saying: I can't build my future without a distributed compute framework, a distributed IT topology, because I need access to resources close to where the actual activity and users are, as part of the overall architecture. Not replacing the large aggregated clouds, but as an extension, another layer, or maybe several other layers in that environment. What does that mean for us in the Cloud Foundry ecosystem? Well, we've benefited from the centralization model, because when you think about deploying code, you just kind of deploy it into an infrastructure. That infrastructure is relatively well managed, it's relatively aggregated, it's got almost infinite capacity. When you move to the edge, when you start thinking about those other layers, they're not infinite. They don't have infinite power budgets, they don't have people there, they don't have all the capabilities there. And so we're going to have to be very selective about what we run there, how we run it, and what capabilities that edge provides. And when we build code, which functions should live in the center, because they're non-real-time, or can deal with the latency, or the data they need is there, and which functions maybe belong close to the edge, so we can get the performance and real-time characteristics we need. And so one of the goals for Cloud Foundry is that we don't put that burden on the developer. That is a great function to happen in the platform. Developers should write code, they should ask for functions to have a certain behavior, and the infrastructure should adapt.
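As an aside, the kind of platform-side placement decision being described could be sketched roughly as follows. This is purely illustrative: the class, field names, and the 50 ms threshold are hypothetical, not anything from Cloud Foundry itself. The point is that the developer declares the behavior a function needs, and the platform, not the developer, picks the tier.

```python
# Illustrative sketch: the platform places a function by the behavior the
# developer declares, instead of the developer choosing infrastructure.
# All names and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class FunctionSpec:
    name: str
    max_latency_ms: float   # latency the function can tolerate
    data_locality: str      # where the data it needs lives: "edge" or "center"

def place(spec: FunctionSpec) -> str:
    """Real-time functions, or ones whose data lives at the edge, go to the
    edge tier; everything else stays in the aggregated center."""
    if spec.max_latency_ms < 50 or spec.data_locality == "edge":
        return "edge"
    return "center"

print(place(FunctionSpec("collision-avoidance", 10, "edge")))   # -> edge
print(place(FunctionSpec("fleet-analytics", 5000, "center")))   # -> center
```

The design choice mirrors what's said above: the developer asks for a behavior (latency, data locality), and the placement logic lives in the platform.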
But up until now, we didn't have to deal with this distributed infrastructure. And I guarantee you, five years from now, we'll be living in a highly diffuse and distributed environment, and code needs to be properly placed along with the data that it uses to be effective. Do you see services as pushing us in that direction as we extend and abstract away further? Yeah, absolutely. Because one of the things we talked about at the board meeting this week, and it was definitely in the keynotes, is this need for better service expression and service management. And it's not that we don't do it reasonably well today, but the idea is that a service acts as an interface between two different parts of the world. Well, if you only have one topology and one infrastructure, service expression isn't as important, because the code might be right next to each other, might be part of the same composite application. But when you are spreading functions out, so that the edge is a collection of code doing X, and it now needs to reach into a collection of other parts of the topology to deliver a complete experience, the best way to do that is a service expression. But how do you do that over great distance? How do you do that over new classes of devices? So I actually think that for us as a community, and this is more near-term, the next couple of years are going to be dominated by how we collectively, across the open source ecosystem and the new software frameworks, build a next generation of service framework. In Europe at the Cloud Foundry Summit, I suggested that we need to start working better across the boundaries: Open Service Broker, Istio, the different teams all have pieces of the puzzle.
Let's solve it collectively, because without that, not only do we create a difficult experience in the current topologies, we also put up a barrier to distributing IT environments at all, since we'd lack the functionality a distributed architecture needs for its different parts to talk to each other, which is in fact the service expression you need. And we inadvertently become a walled garden. Exactly, which by the way would be mutually exclusive with that future I just described. Which we want to avoid, and I think that actually has been a central theme here: interoperability and extensibility of the platform with other technologies, and obviously the Open Service Broker API is going to play a key role in the future of Cloud Foundry. You talked a little bit about edge computing, but I also know that something of high fascination to you is the work going on in the automotive industry, which has gotten super exciting. Yes. Here's an interesting thing: how many people know how big the autonomous vehicle, smart infrastructure, smart mobility market is predicted to be? Anybody? I'll give you a multiple-choice question: is it, let's say, a billion dollars, 10 billion dollars, or roughly the same size as the entire IT spend of China in 10 years? The answer is the third one, okay? And the reason for that is... Which is a big number. Which is a very big number. In fact, I just saw a statistic about semiconductor use for accelerated workloads, meaning non-generalized compute. If you're going to put in accelerators, whether they be GPUs or IPUs or TPUs, pick your favorite flavor, FPGAs, how much of that accelerated semiconductor ecosystem is going to be used by different parts of the IT framework? Well, first of all, about 50% of AI/ML workloads in about five to ten years are going to run on accelerators. The second thing is that when people model it now, they have three buckets.
They have the traditional enterprise, they have the cloud providers... actually, four buckets: the traditional enterprise, the cloud providers, China, and the automotive industry. It's that big. And the interesting thing is there are really very few, if any, truly autonomous vehicles today. So we're talking about this thing that could be as big as China, but we haven't actually done most of it yet; most of it is still in front of us. But what's fascinating for me is that when you start to look at it, it demonstrates a potential scaling problem we have to work through. So, for instance, I'll give you some data. One of the automotive manufacturers we're working with has modeled the amount of data that will land from the sensors in their autonomous vehicle fleet in the mid-2020s. And this is LIDARs and sensor cameras, pick your favorite. There's a lot of data that's going to come out of these cars, about 40 million vehicles in that fleet. It works out that if you roll it up, 7,200 exabytes is the data set you have to process over. That's 7.2 zettabytes. For those of you who don't know what that is, go Google it. It's a huge number. There are no clouds today that are anywhere close to that. So then you start thinking about the consequences. You have 7.2 zettabytes landed and you need to reason across it to do intelligent mapping, smart mobility, collision avoidance. You're not doing it on every car, but that's the data set. How do you operationalize that? And so one of the things we looked at, because obviously we have an interest in the storage of data, is how many people it would take to manage 7,200 exabytes using best-in-class metrics today. And it works out that there's a number we use: about a person per petabyte. You need one storage administrator for about a petabyte. It might be two petabytes, but it's not exabytes.
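That back-of-envelope math can be checked in a few lines, taking the figures exactly as stated in the conversation (decimal units, roughly one administrator per petabyte):

```python
# Sanity check of the storage-administration figure from the conversation.
# Assumption stated above: roughly one storage admin per petabyte.
DATASET_EXABYTES = 7_200        # projected fleet data set, mid-2020s
PB_PER_EXABYTE = 1_000          # decimal units: 1 EB = 1,000 PB
ADMINS_PER_PETABYTE = 1         # "about a person per petabyte"

petabytes = DATASET_EXABYTES * PB_PER_EXABYTE
admins_needed = petabytes * ADMINS_PER_PETABYTE
print(f"{petabytes:,} PB -> about {admins_needed / 1e6:.1f} million administrators")
# -> 7,200,000 PB -> about 7.2 million administrators
```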
And interestingly enough, if you do that math, you need about 7 million people to manage that storage environment. Clearly, that's not affordable and it's not going to work. I don't think we have 7 million storage administrators in the world, by any stretch. Or it's a job opportunity. One or the other. Well, I'm pretty sure we don't want to solve this with human capacity. So what it tells you, and the reason we're so interested (we just joined the Automotive Edge Computing Consortium, and we're spending a lot of time there), is not that automotive is the only industry, but that it's one that demonstrates a very real future: if we don't solve for scale with automation, with artificial intelligence, with new architectures, we will simply not achieve that future. And so use it as a litmus test and ask: how do you rethink the operations of persisting data, of managing workloads, of delivering containerized code, of dealing with a distributed topology? By the way, there are four tiers in an automotive environment: the car, the edge, the private data centers, and the public clouds, all working together. Once you look at that environment, you understand what problem we have to go solve. Now, the good news is we have about 10 years to solve it. But for me, it's: look at that environment and use it as the gold standard. If we as an industry are able to address that market, then almost any other market I've found is likely to be solvable based on the innovations that come out of that ecosystem. That's really cool. But it's also a really good way to bring it back home to the actual scale we're talking about. I mean, that's just the automotive industry. If we scale that to, say, the retail industry or the healthcare industry, and start looking at all of the edge and IoT devices that are going to be deployed, this becomes a big scale issue.
Well, it becomes a big scale issue, but one of the things we learned, again, from the automotive industry is that our assumptions about what's driving that scale might be false. So, for instance, how many of you think that the reason we're investing in and building out this IT infrastructure for the automotive industry is because we're trying to automate the driving of your car? It sounds like a reasonable thing. In fact, that's actually not the reason we're doing it; that's an interesting consequence. The real reason, in most cases, is that we actually believe that by bringing intelligence into the mobility ecosystem, we can start to do things we've never even thought about doing before. Like, for instance, imagine a city that is reconfigurable overnight. Toyota, for instance, in two years at the Olympics is going to show these electrified platforms that aren't cars, they aren't buses, they're offices. They're where you do work. And because they're autonomous and electrified, what happens is that in the evening, or whenever it's necessary, driven by data and analytics, the infrastructure reconfigures: the pizza parlor is in a different place, the doctor's office is close to you. It depends on what's going on. And so what you're doing is not just making cars safer or more autonomous. You're actually bringing this concept of fluidity and agility and mobility into the very infrastructure of cities and our environments. And that is a much bigger problem to solve. So then you jump into something like healthcare and you say, well, are we doing this so we can make radiology cheaper? Are we doing this so we can improve patient outcomes in a general sense? Or are we going to discover things that are entirely different, that we've never thought about, by actually investing in these types of technologies? Or extending healthcare to different regions around the country or the world? Absolutely.
Or completely reinventing healthcare, and doing it in a way where there aren't human beings involved. Today we think about telemedicine: how can I have a doctor reach people without having to physically be there? That's a great first step. How about if we didn't even have the doctor involved? How about if the outcome was just handled by technology? Well, that is a huge coding problem. It's a huge data problem. It's a huge infrastructure problem. Realistically, that's probably the goal we're shooting for, because having a human being in the middle of it is probably a bad way to scale this to a population that's probably going to be 10 billion people by that time. Yeah, it's an interesting problem. And I feel like the work you're doing, really focusing on the storage area in particular, is where we're going to run into a lot of the roadblocks. It's going to be less of an application scaling issue and more of a: where's the data? How do you get to it? And is it able to scale to hold all of this information? Yeah, and it's not just the storage piece that could be a bottleneck. We're all familiar with the term data gravity. There's actually a phrase now, data anti-gravity: how do you develop technologies that reverse that trend? We should really lean more into that one. Yeah, that one's probably a good idea. But the bottom line is we keep creating more data, simply because we have more sensors and more sources of data that are interesting, and we've figured out over the last five years or so that you don't actually know what your data is useful for until you try to do something with it. So collecting it is a good idea. Unfortunately, when you get into these distributed topologies, you run into bottlenecks. For instance, here's an interesting statistic. In the edge computing model, one of the assumptions is that edge computing will live close to the cell sites. It will be in the radio access network, near the base station.
But when you actually look at the amount of power available in a cell site to run extraneous compute (which is a good way to describe it), it's a few hundred watts in most cases. Which immediately says: that's a hard design rule. I can't put petabytes of storage there; the energy of the drives alone would eat that up. I can't put GPUs there at scale. So if I want acceleration, I have to come up with a novel way to do it. I have to think about power efficiency in a way I never had to contemplate in an at-scale data center. And at the end of the day, what you're really doing is trying to manage an efficient infrastructure that's going to have an almost infinite amount of compute requirements and an almost infinite amount of storage requirements in aggregate, but now you have to figure out where to properly place it, and how to efficiently deliver those MIPS and bits in a way that can actually work in this at-scale environment we've never dealt with before. One of the hypotheses I had several years ago, when ARM really first started coming out, is that that would be where a lot of the compute power would move to, highly distributed. No, no. I mean, ARM is fine, it's perfectly good technology. What I think is the more disruptive thing on the compute side is acceleration. What we've realized, and by the way, all of you are again beneficiaries of this: just like large clouds make it easy to build Cloud Foundry applications because you don't have to think about scaling infrastructure, you've also benefited from homogeneous compute. Most of you don't think about how the code is going to execute, or what piece of silicon it's on, because we have a de facto standard, x86. You kind of assume it's available. But when you start to look at trying to scale the number of MIPS to what we need to achieve, we've already hit a point where Moore's law is not going to solve our problems.
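The cell-site power constraint mentioned above can be made concrete with a quick feasibility check. The drive capacity and power figures below are my own illustrative assumptions, not numbers from the talk, but any reasonable values make the same point: even a single petabyte of disk blows a few-hundred-watt budget.

```python
# Rough feasibility check: does ~1 PB of disk fit in a cell-site power budget?
# Drive figures are illustrative assumptions, not data from the conversation.
SITE_BUDGET_WATTS = 300      # "a few hundred watts" per the conversation
DRIVE_CAPACITY_TB = 16       # assumed large HDD
DRIVE_POWER_WATTS = 8        # assumed active draw per drive

drives = 1_000 // DRIVE_CAPACITY_TB + 1   # drives needed for ~1 PB (1,000 TB)
watts = drives * DRIVE_POWER_WATTS
print(f"{drives} drives, ~{watts} W vs a {SITE_BUDGET_WATTS} W budget")
# -> 63 drives, ~504 W vs a 300 W budget
```

So storage alone exceeds the entire site budget before any compute, networking, or cooling is counted, which is exactly why petabytes can't live at the cell site.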
We have power issues, so we have to find other creative ways to get much more capacity and processing into the infrastructure within the same power budget. And the tool that we have started to use more aggressively, and will use very aggressively in the future, is specialized acceleration. GPUs are being used as accelerators in certain workloads. By the way, they're really not very good accelerators for what they're being used for; they're just faster than a general-purpose processor. But there's a next wave of accelerators that are already in some of the public clouds and are coming out of the semiconductor space and the startup ecosystem. And Intel has an effort around Nervana, building chipsets that could result in two, three, four orders of magnitude higher performance when processing very specific things like AI and ML workloads. And what we are likely to see going forward, almost 100% certain, is an era where this nice, comfortable, homogeneous compute layer disappears. We'll still have homogeneous compute as a general-purpose pool, but alongside it we'll have diverse accelerators that are optimized for the function you're trying to execute. Now, as a coder, think about that. I'm not talking about the entire application running on the accelerator. I'm saying this particular subroutine or function benefits from running on this particular piece of silicon that is optimized to do that graph processing function, or that vector processing function, or something we haven't even thought of. How do you route it to the right place? How does the platform enable the consumption of these without forcing the coder to understand the infrastructure?
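One illustrative way a platform could hide that routing is a simple registry that maps a kind of work to a class of silicon, with general-purpose compute as the fallback. Everything here (the workload names, the accelerator labels) is hypothetical; it's a sketch of the idea, not any real scheduler.

```python
# Hypothetical sketch: route a workload to the accelerator class suited to it,
# so the coder names the kind of work, never the silicon.
ACCELERATOR_FOR = {
    "matrix-multiply": "tpu",         # dense linear algebra
    "graph-traversal": "graph-asic",  # graph processing
    "vector-scan": "fpga",            # streaming vector operations
}

def dispatch(workload_kind: str) -> str:
    # Fall back to the general-purpose pool when no accelerator matches.
    return ACCELERATOR_FOR.get(workload_kind, "x86")

print(dispatch("matrix-multiply"))  # -> tpu
print(dispatch("report-render"))    # -> x86
```

The interesting part of the real problem is everything this sketch omits: discovering what accelerators a given site actually has, and deciding whether the speedup is worth moving the data to them.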
So again, much like distributed topologies, we're moving back to heterogeneous compute, which means we're going to have to have platforms that sit in the middle, between our developers and the infrastructures they use, that make it easy to take advantage of having the data and compute in the right place, and having the compute run on the right type of engine. And the reason you need to do that on the compute side is that if I can give you a three-order-of-magnitude improvement in performance or MIPS or power consumption, that may be the only way you achieve the scale you need to accomplish the task in some of these new models. It sounds terrifying. We have a lot of work to do. It sounds like we have a lot of work to do. There are a lot of layers of abstraction in there that are going to get us to this. But what I'm also hearing is that we're going to become much more reliant on automation. Oh yeah, you can't do any of this without it. I mean, three or four years ago, when we were demonstrating early Cloud Foundry, what was the demo? cf push, cf scale. And it got people excited because people realized how hard it was to do those two tasks, and here you could do each in a single command-line execution and the platform took care of it. So we have been at the center of building out highly automated, efficient infrastructure to solve problems that really got in the way of people being productive and of the speed of delivering code. So I think we're in a very good place to do some of these other things. It's just that there are a lot of them, and they get us out of our comfort zone, because certain assumptions we have about what's underneath us, how the infrastructures are built, how compute is delivered, and where things run are all going to change as we move into these new architectures. Yeah, absolutely. It's a bold new world. Yeah, well, keep us all employed.
So, you know, it's a good thing. You're in the right place. It's good. We all have jobs in the future. I think that's a step in the right direction. Yes. Well, John, it's always a pleasure to have you join us here at Summit, and I appreciate you hanging out and sharing some of your thoughts with us. Great. Thanks, Abby. Thanks. Thank you. Thank you.