Hi everybody, welcome back to SuperCloud 2. I'm Dave Vellante with my co-host, John Furrier. We're here at our tricked-out Palo Alto studio. We're going live wall to wall all day, and we're inserting a number of pre-recorded interviews, folks like Walmart; we just heard from Nir Zuk of Palo Alto Networks. And I'm really pleased to welcome in David Flynn. David Flynn, you may know as one of the people behind Fusion-io, which completely changed the way people think about storing data and accessing data. David Flynn is now the founder and CEO of a company called Hammerspace. David, good to see you. Thanks for coming on. Good to see you too. Dr. Nelu Mihai is the CEO and founder of Cloud of Clouds. He's actually built a SuperCloud. We're going to get into that. Nelu, thanks for coming on. Thank you, happy new year. Yeah, happy new year. So I'm going to start right off with a little debate that's going on in the community, if you guys would bring up this slide. Bob Muglia, earlier today, gave a definition of SuperCloud. He felt like we had to tighten ours up a little, and he said a SuperCloud is a platform, underscoring platform, that provides programmatically consistent services hosted on heterogeneous cloud providers. Now, Nelu, we have this shared doc and you've been in there. You responded and said, well, hold on. SuperCloud really needs to be an architecture, or else we're going to have a stovepipe of stovepipes, really. And then you went on with more detail: what's the information model? What's the execution model? How are users going to interact with SuperCloud? So I'll start with you. Why architecture? The inference is that with a platform, the platform provider is responsible for the architecture. Why does that not work in your view? No, it's a very interesting question. So whenever you think about a platform, what's the connotation? You think about an operating system. I mean, I don't know whether it's true or not, but there is this connotation of monolithic.
On the other hand, if you look at what's the problem right now with hyper-clouds from the customer perspective, they're very complex. There is a heterogeneous world where every single one of these hyper-clouds has its own architecture. You need rocket scientists to build cloud applications. There is always this contradiction between cost and performance; they fight each other. And I'm quoting here a former friend of mine from Bell Labs who works at AWS, who used to say, cloud is cheap as long as you don't use it too much. So clearly we need something that, from the principle point of view, plays the role of an operating system that sits on top of these heterogeneous hyper-clouds. And there's nothing wrong with having these proprietary hyper-clouds; think about processors, think about operating systems, and so on and so forth. But in order to build a system that is simple enough, I think we need to go deeper and understand. So the counter-argument to that, David, is that you never get there. You need a proprietary system to get to market sooner to solve today's problem. I don't know where you stand on this platform-versus-architecture question; I haven't asked you, but. I think there are aspects of both, for sure. I mean, it needs to be an architecture in the sense that it's broad-based and open and so forth. But you could call it a platform as long as people can instantiate it themselves on their own infrastructure, as long as it's something that can be deployed as software-defined. You don't want the concept of platform to mean a monolith combining hardware and software. So it really depends on what you're focused on when you say platform. I'd say, as long as it's a software-defined thing that can literally run anywhere. Because I really think what we're talking about here is the original concept of cloud computing: the ability to run anything anywhere, without having to care about the physical infrastructure.
And what we have today is not that. The cloud today is a big mainframe in the sky that just happens to be large enough that, once you select a region, you generally have enough resources. But nowadays you don't even necessarily have enough resources in one region, and then you're kind of stuck. So we haven't really gotten to that utility model of computing. And you're also asked to rewrite your application, to abandon the conveniences of high-performance file access; you've got to rewrite it to use object storage. We have to get away from that. Okay, okay. I want to just draw on that, because I like that point about there not being enough availability. But on the developer cloud, the original AWS premise was targeting developers, because at that time you had to provision a Sun box and get a Cisco DSU/CSU; now you just get on the cloud. But I think you're coming up against the scale question, because right now scale is huge: enterprise grade versus cloud for developers. That's right. Because, I mean, look at Amazon and Azure; they've got compute, they've got storage, they've got queuing and some other stuff. If you're doing a startup, you throw your app up there, localhost to cloud, no big deal. It's the scale thing that gets me. And you can tell by the fact that in regions that are under high demand, right, like London or LA, at least with the clients we work with in the media and entertainment space, it costs twice as much for the exact same cloud instances that do the exact same amount of work as somewhere out in rural Canada. So why is there such a cost differential? It has to do with supply and demand, and the fact that the clouds don't really give you the ability to run anything anywhere. Even within the same cloud vendor, you're stuck in a specific region. And that was never the original promise, right? We turned it into that, but the original promise was: get rid of the heavy lifting of IT, not having to run your own. Yeah, exactly.
And then it became, wow, okay, I can run anywhere. And then, you know, it's like Web 2.0. People say, why SuperCloud? You and I talked about this. Why do you need a name for SuperCloud? It's like Web 2.0. It's what cloud was supposed to be. It's what cloud was supposed to be, exactly, right? Cloud was supposed to be: run anything anywhere. At least that's what we took it as. But you're right. Originally it was just, oh, you don't have to run your own infrastructure; you can choose somebody else's infrastructure. And you did that, but you're still bound to that. And people said, I want more. All right. But how do we go from here? That's actually a very good point, because indeed, when the first hyper-clouds were designed, the focus was really on customers. I think SuperCloud is an opportunity to design things the right way, also keeping in mind computer science rigor, and we should take advantage of that. Because in fact, if cloud had been designed properly from the beginning, we probably wouldn't have needed SuperCloud. You wouldn't have been asked to rewrite your application. That's correct. To use REST interfaces to your storage. Revisionist history is always a good one. But look, cloud is great. I mean, your point is cloud is a good thing. Don't hold it back. It is a very good one. Let it go as it is. Yeah, let that thing continue to grow. Don't impose restrictions on the cloud. Just refactor what you need to for scale or enterprise grade or availability. Is that true, or is that the problem you're solving? Well, yeah. What the cloud is doing is absolutely necessary. What the public cloud vendors are doing is absolutely necessary. But what's been missing is how to provide a consistent interface, especially to persistent data, and have it be available across different regions and across different clouds. Because data is a highly localized thing.
In the current architecture, it only exists as rendered by the storage system you put it in, whether that's a legacy thing like a NetApp or an Isilon, or even a cloud data service. It's localized to the specific region of the cloud in which you put it. We have to delocalize data and provide a consistent interface to it across all sites: high-performance, local access, but to global data. So Walmart earlier today described their, we call it super cloud, they call it the Walmart Cloud Native Platform. And they use this triplet model. They have AWS and Azure, no, no, sorry, no AWS. They have Azure and GCP, and then on-prem, where all the VMs live. When you probe, it turns out that it's only the stateless stuff in the cloud; all the stateful stuff stays put. Let's just admit it: there is no such thing as stateless, because even the application binaries and libraries are state. Well, I'm happy that we're getting to that. Okay. Because actually, I have a lot of debates about this. Yes. If you think about it, no software running on a von Neumann machine is stateless. Exactly. This is something that was... And that's data that needs to be distributed and provided consistently across all the clouds. It's nonsense. So it's an illusion. Okay. Well then why all the talk about stateless? Well, you see, people make the confusion between state and persistent state. Okay? Persistent state is one thing; state is a different thing. So, but anyway, I want to go back to your point, because there is a lot of debate here. People are talking about data. Some people are talking about logic. Some people are talking about networking. In my opinion, it's this triplet of data, logic, and connectivity that has equal importance. And actually, depending on the application, you can have the center of gravity moving towards data, or moving towards what I call execution units, or workloads, or connectivity can actually be the most important part of it. So some people are saying, move the logic towards the data.
Some other people, and you are saying, actually, no, you have to build a distributed data mesh. What I'm saying is that you have to consider all three of these variables, this whole vector, in order to decide, based on the application, what's most important, because sometimes... So the application chooses. That's correct. Well, that's what operating systems were in the past: principally the thing that runs and manages the jobs, the job scheduler, and the thing that provides your persistent data. Okay, so we finally got an operating system into the equation. Thank you. I have a PhD in operating systems. What we're talking about is an operating system. So forget platform or architecture; it's an operating environment. Let's use that as the general term. I think that's a better definition. All right, let's go with that, it's a deal. I want to ask you a question, because I believe it's an operating system. I think it's going to be a reset, refactored. You wrote to me that the model of SuperCloud has to be open and theoretical; it has to satisfy the rigors of computer science and customer requirements. So, unique to today, if the OS is going to be refactored, it may or may not be Red Hat or somebody else that builds this new OS. Obviously, the requirements come from customers, too. But what's the computer science that's needed? Where are we? What's missing? Where's the science in this shift? It's not your standard OS. It's not like an OS. I would call it an operating environment. If you think about it and make analogies: what do you need when you design a distributed system? Well, you need an information model. You need to figure out how the data is located and distributed. You need a model for the execution units, and you need a way to describe the interactions between all these objects. And it is my opinion that we need to go deeper and formalize these operations in order to make a step forward when we design SuperCloud, and to design something that is better than the current hyper-clouds.
And actually, when we design something better, we make the system more efficient, and it's going to be better from the performance point of view. But we need to add some math to all this customer-centric focus. And I really admire AWS and their executive team for focusing on the customer. But now it's time to go back and see: if we apply some computer science, if we try to formalize and build a theoretical model of cloud, can we build a system that is better than the existing ones? So, David, how do you see the operating system, or operating environment, of a decentralized cloud? Well, I think it's layered. I mean, we have operating systems that can run systems quite efficiently; Linux has sort of won in the data center. But we're talking about a layer on top of that, and I think we're seeing the emergence of that. For example, on the job scheduling side of things, Kubernetes makes a really good example. You break the workload into the most granular units of compute, the containerized microservice, and then you use a declarative model to state what is needed and give the system the degrees of freedom to choose how to instantiate it. Because the thing about these distributed systems is that the complexity explodes. Running a piece of hardware, running a single server, is not a problem, even with all the many cores and everything like that. It's when you start adding in the networking and making it so that you have many of them, and then when it's going across whole different data centers. At that level, the way you solve this is not manually and not procedurally. You have to change the language so it's intent-based, a declarative model, where what you're stating is what is intended, and you're leaving it to more advanced techniques, like machine learning, to decide how to instantiate that service across the cluster, which is what Kubernetes does, or how to instantiate the data across the diverse storage infrastructure, which is what we do.
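The intent-based, declarative model described here, state what you want and let the system decide how, can be sketched in a few lines. This is a hypothetical illustration, not Kubernetes or Hammerspace code; the `Intent` type, the region names, and the per-region prices are all made up for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    """What the user declares: the what, never the how."""
    service: str
    replicas: int
    allowed_regions: tuple  # regions the system *may* use

# Illustrative prices, echoing the point that the same instance can cost
# twice as much in a hot region like London as in rural Canada.
PRICE = {"london": 4.0, "us-east": 2.0, "ca-rural": 1.0}

def reconcile(intent, running):
    """Compare declared intent against observed state and emit the actions
    needed to converge. The placement choice (cheapest allowed region first)
    belongs to the system; a real scheduler might use ML or an optimizer.
    Scale-down of excess replicas is omitted for brevity."""
    # Terminate replicas running outside the declared constraint.
    actions = [("terminate", r) for r in running if r not in intent.allowed_regions]
    kept = [r for r in running if r in intent.allowed_regions]
    # Launch replicas until the declared count is met, cheapest region first.
    need = intent.replicas - len(kept)
    ranked = sorted(intent.allowed_regions, key=lambda r: PRICE.get(r, float("inf")))
    for i in range(max(need, 0)):
        actions.append(("launch", ranked[i % len(ranked)]))
    return actions

# The caller never says which region or in what order; it only states intent.
plan = reconcile(Intent("render", 3, ("us-east", "ca-rural")), ["london"])
```

The call produces a plan that terminates the out-of-policy London replica and launches three replicas spread across the cheapest allowed regions, the essence of "state what is intended and leave the instantiation to the system."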
So, that's a very good point, because what has been neglected with hyper-clouds is really optimization and automation. But in order to be able to do both of these things, you need, and I'm going back to this because I'm stubborn, you need to have a mathematical model, a theoretical model. Because what does automation mean? It means that we have to put machines to do the work instead of us, and machines work with what? Formulae. Algorithms. They don't work with services. So I think SuperCloud is an opportunity to underscore the importance of optimization and automation in hyper-clouds. And actually, by doing that, there's also an interesting connotation: we are also contributing to saving our planet, because if you think about it, right now we're consuming a lot of energy on these hyper-clouds, and also on all these AI applications, and I think we can do better and build the same kind of applications using less energy. So, yeah, great point. Love that call-out. Dave and I always joke about the old days, because we're old; we talk about old history. OS/2 versus DOS, okay? Okay, OSes. OS/2 was clearly better, the first threaded OS, but DOS never went away. So, how does legacy play into this conversation? Because I love the theoretical conversation; I think it's totally an OS, I see it that way myself. What's the blocker? Is there a legacy that drags it back? Is the anchor dragging from legacy? Is there a DOS-versus-OS/2 moment? Is there an opportunity to flip the script? I think that's a perfect example of why we need to support the existing interfaces. Operating systems, real operating systems like Linux, understand how to present data. It's called a file system, block devices, things that plumb in there. And by going to a REST interface in S3 and telling people they have to rewrite their applications, you can't even consume your application binaries that way. The OS doesn't know how to pull in that sort of thing.
So, to get to cloud, to get to the ability to host massive numbers of tenants within a centralized infrastructure, we abandoned these lower-level interfaces to the OS, and we have to go back to that. The reason why DOS ultimately won is that it had the momentum of the install base. We're seeing the same thing here. Whatever it is, it has to be a real file system and not a dumbed-down file system. Nelu, what's your reaction? Because you're on the theoretical bandwagon; let's get your reaction. No, I think it's good. You made a good analogy between OS/2 and DOS, but I'll go even further. If you think about it, the evolution of the operating system didn't stop the evolution of the underlying microprocessors, hardware, and so on and so forth. On the contrary, it was a catalyst for that, because everybody could develop their own hardware without worrying that the applications on top of the operating system would have to be modified. The same thing is going to happen with SuperCloud. You're going to have the AWSes, you're going to have Azure and GCP continue to evolve in their own proprietary ways, but we create the right interface on top of it. The open, this is why open is important. That's correct. Because some time ago, everybody, remember, venture capitalists were saying AWS owns the world, nobody else is going to come in. Now you see what Oracle is doing, and then you're going to see other players. It's funny: Amazon's trying to be more like Microsoft, Microsoft's trying to be more like Amazon and Google, and Oracle's just trying to say they have cloud. That's correct. So my point is that you're going to see a multiplication of these hyper-clouds and cloud technologies. So the system has to be open in order to accommodate what exists and what is going to come. So legacy is an opportunity, not a blocker, in your mind. And you see the same thing? That's correct.
I think we should allow them to continue to be their own thing, actually, but maybe you're going to find a way to connect to them. Amazon's the processor, and they're on the 8088. That's correct. You're saying, let people try to put it to work. That's a good analogy. And at performance levels, you say, good luck, right? Well, yeah, we have to be able to take traditional applications, high-performance applications, those that consume file systems and persistent data. Those things have to be able to run anywhere. You need to be able to put them onto more elastic infrastructure. So we have to actually get cloud to where it lives up to its billing. And that's what you're solving for with Hammerspace. That's what we're solving for. Give me the bumper sticker. We're solving for how you take massive quantities of unstructured file data, and at the end of the day, all data ultimately is unstructured data, and have that persistent data available across any data center, within any cloud, within any region, on-prem, at the edge, and have not just the same APIs but the exact same data sets, and not sucked over a straw remotely, but with extreme high-performance, local access. So how do you have local access to globally shared, distributed data? That's what we're doing. We are orchestrating data globally across all different forms of storage infrastructure, so you have consistent access at the highest performance levels, at the lowest level, innately built into the OS, the way it knows how to consume it. So are you going into all the clouds and natively building in there? So this is software that can run on cloud instances and provide high-performance file within the cloud. It can take file data that's on-prem. Again, it's software; it can run on virtual or physical servers, and it abstracts the data from the existing storage infrastructure and makes the data visible, consumable, and orchestratable across any of it. And what's the elevator pitch for Cloud of Clouds?
Well, Cloud of Clouds creates a theoretical model of cloud, and it describes every single object in the cloud, whether it's data, execution units, or connectivity, with one single class of very simple objects. And I can give you more detail on all that. And the problem that solves is what? The problem it solves is that it creates this mathematical model that is necessary in order to do other interesting things, such as optimization, using SAT engines, using automation, applying ML, for instance, or deep learning, to automate all of this cloud. If you think about the industrial field, we know how to manage and automate huge plants. Why wouldn't we do the same thing in the cloud? It's the same thing you can do. That's what you mean by theoretical model. That's correct. Write out the architecture, almost the bones of a skeleton or something. That's correct. And then on top of it, you can actually build a platform. You can create your services. You put numbers to it, you kind of index it. You quantify this thing and you apply mathematics. It's really about, and I can disclose this, it's really about describing the cloud as a knowledge graph, where every single object in the graph, a node, an edge, is a vector. And then once you have this model, you can apply field theory and linear algebra to do operations on these vectors. And this creates a very interesting opportunity to let the math do the thing for us. So what happens with a hyperscaler like AWS in your model? So in my model, actually, they should be happy with this, or they can be. I'm very happy with that. Well, they'd be happy with you. We create an interface to every single hyper-cloud. Actually, we don't need to interface with the thousands of APIs; you know, we have the 80-20 rule, and we map these APIs into this graph. And then every single operation that is done in this graph is done from the beginning in an optimized manner, and it's also automation-ready. That's going to be great.
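The "cloud as a knowledge graph of vectors" idea can be illustrated with a toy sketch. The real Cloud of Clouds model isn't public, so everything here, the three-feature vectors, the region names, and the cosine-similarity scoring, is an assumption standing in for the field theory and linear algebra mentioned above:

```python
import math

# Hypothetical feature vectors in the order [cpu, storage, bandwidth],
# normalized to [0, 1]. A workload and a region are the same kind of object:
# a vector, so placement reduces to linear algebra instead of hand-tuning.
WORKLOAD = {"video-transcode": [0.9, 0.3, 0.8]}
REGIONS = {
    "london":   [0.4, 0.9, 0.9],
    "us-east":  [0.8, 0.7, 0.6],
    "ca-rural": [0.9, 0.5, 0.3],
}

def cosine(a, b):
    """Cosine similarity between two vectors: how well the shapes match."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_region(workload_vec, regions):
    """Pick the region whose capability vector best matches the workload.
    This is the 'machines work with formulae and algorithms' point: once
    objects are vectors, an optimizer can make the placement decision."""
    return max(regions, key=lambda name: cosine(workload_vec, regions[name]))

choice = best_region(WORKLOAD["video-transcode"], REGIONS)
```

With these made-up numbers the CPU-and-bandwidth-heavy transcode job lands on `us-east`; the point is not the specific answer but that the decision came out of a formula operating on a uniform object model, which is what makes it automatable.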
David, I want to just go back to you before we close, real quick. You've had a lot of experience, multiple ventures, on the front end. You talk to a lot of customers who've been innovating. Where is the classic enterprise? Because you used to sell and invent product around old-school enterprise storage. You know that trajectory. Storage is still critical to store the data. Where's the classic enterprise-grade mindset right now? Those customers that were buying, that are buying storage, they're in the cloud, they're lifting and shifting. They haven't yet put the throttle down on DevOps. When they look at this SuperCloud thing, are they like a deer in the headlights, or are they getting it? What does the classic enterprise look like? You're seeing people at different stages of adoption. Some folks are trying to get to the cloud. Some folks are trying to repatriate from the cloud, because they've realized it's better to own than to rent when you use a lot of it. And so people are at very different stages of the journey. But the one thing that's constant is that there's always change. And the change here has to do with being able to change the location where you're doing your computing: being able to support traditional workloads in the cloud, being able to run things at the edge, and being able to rationalize where the data ought to exist. And with a declarative model, intent-based, business-objective-based, to be able to swipe a mouse and have the data get redistributed and positioned across different vendors, across different clouds. We're seeing that as really top of mind right now, because everybody's at some point on this journey, trying to go somewhere, and it involves taking their data with them. Guys, great conversation. Thanks so much for coming on. For John Furrier, I'm Dave Vellante. Stay tuned. We've got a great analyst power panel coming right up. More from Palo Alto, SuperCloud 2, right back.