Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're having a CUBE conversation in our Palo Alto studios, getting a short little break between the madness of the conference season, which is fully upon us. And we're excited to have a longtime industry veteran, Brian Pawlowski, the CTO of DriveScale, join us and talk about some of the crazy developments that continue to happen in this world that just advances and advances. Brian, great to see you.

Good morning, Jeff. It's great to be here. I'm still trying to get used to the time zone after a long, long trip in Europe, but I'm glad to be here. I'm glad we finally were able to schedule this.

Yes, it's never easy. One of the secrets of our business is that everyone is actually all together at conferences; it's hard to get them together when there's not that catalyst of a conference to bring everybody together. So give us the 101 on DriveScale.

So DriveScale — let me start with what composable infrastructure is. DriveScale provides a product for orchestrating disaggregated components on a high-performance fabric, to let you spin up essentially your own private cloud, your own clusters, for these modern scale-out applications. And I just said a bunch of gobbledygook — what does that mean? The DriveScale software is essentially an orchestration package that provides the ability to take compute nodes and storage nodes on a high-performance fabric and securely form multi-tenant architectures, much like you would in a cloud. When we think of application deployment, we think of 100 nodes or 500 nodes. The applications we're looking at are the things people are using for big data, machine learning, or AI, or these scale-out databases — things like Vertica, or Aerospike, a DRAM- and SSD-based database. And this is an alternative to the standard way of deploying applications, in a very static fashion, onto fixed physical resources or onto network storage from the likes of Network Appliance — sorry, NetApp — or Dell EMC. It's the modern applications we're after, the big data applications for analytics.

Right, so it's software that basically manages the orchestration of hardware — compute, storage, network — so you can deploy big data and analytics applications.

Yes, and it's actually focused on the orchestration part. The typical way the applications we're in pursuit of right now are deployed is on 500 physical bare metal nodes from pick-your-vendor of compute and storage, all bundled together and then laid out in a physical deployment on the network. What we do is suggest that you disaggregate: separate out pure compute, with no disk at all, put storage into another layer, have the fabric in between, and we inventory it all. And much like vCenter does for virtualization — software deployment of applications — we do software deployment of scale-out applications onto a scale-out cluster.

Right. So you talk about using industry-standard servers and industry-standard storage. Does the system accommodate different types of compute and TPUs, different types of storage, whether it's high-performance disk or flash? How does it accommodate those things? And if I'm trying to set up my big stack of hardware to then deploy your software and get it configured, what are some of the things I should be thinking about?

That's actually a great question. I'm going to try to hit three points. Absolutely.
In fact, a core part of our orchestration layer is to essentially generalize the compute, storage, and networking components of your data center and do rule-based, constraint-based selection when creating a cluster. From your perspective, creating a cluster means saying: I want a hundred nodes, I'm going to run this application on it, and I need this environment for that application — and the application thinks it's running on local bare metal. So you say a hundred nodes, eight cores each minimum, and 64 gig of memory minimum. It'll go out, look at the inventory, and do a best match of the components there. You could have different products out there — we are compute agnostic and storage agnostic, so you can mix and match. We will basically do a best-fit match of all of your available resources, propose back to you in a couple of seconds the cluster you want, and then you just hit go and it forms the cluster in a couple of seconds.

A virtual cluster within that inventory of assets.

A virtual cluster, yes, out of the inventory of assets — except that from the perspective of the application, it looks like a physical cluster. This is the critical part of what we do. Somebody told me it's like we have an extension cord between the storage and the compute nodes. They used that analogy yesterday and I said I was going to reuse it, so if they're listening to this: hey, I stole your analogy. We basically provide a long extension cord to direct-attached storage, except we've separated out the storage from the compute. What's really cool about that — the second point of what you said — is that you can mix and match. The mix and match matters because you're refreshing your compute and your storage on three-to-five-year cycles, separately. In the old-style model of combining compute and storage, in what I'd call a captive DAS scenario, you're forced to refresh both the compute and the persistent storage at the same time. It just becomes an unmanageable position to be in. Separating out the components gives you a lot of flexibility: mixing and matching different types of components, doing rolling upgrades of the compute separately from the storage, and also having different storage tiers. The biggest tiers today are SSD storage and spinning disk, so you can provide spinning disk, SSDs, or a mixture of both for a hybrid deployment for an application, without having to worry at purchase time about configuring your box that way. We just do it on the fly.

Right, so then obviously I can run multiple applications against that big stack of assets, and it's going to go ahead and parse out the pieces that I need for each application.

We didn't even practice this beforehand — that was a great one too. So a key part of this is providing a secure multi-tenant environment, which is the phrase I use because it's the common phrase. Our target customer is running multiple applications. In 2010, when somebody was deploying big data, they were deploying Hadoop. And then, quick — what were the other things then? Nothing. It was Hadoop. Today it's ten applications, all scale-out, all with different requirements in their reference architectures for the ratio of compute to storage.
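To make the constraint-based, best-fit selection Brian describes above a little more concrete, here is a minimal Python sketch of how an orchestrator might match a cluster request (node count, per-node minimum cores and memory, desired storage tier) against an inventory of disaggregated compute nodes and drives. All class and function names here are hypothetical illustrations, not DriveScale's actual API.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical inventory model -- illustrative only.

@dataclass
class ComputeNode:
    node_id: str
    cores: int
    memory_gb: int

@dataclass
class Drive:
    drive_id: str
    media: str          # "ssd" or "hdd"
    capacity_tb: float

@dataclass
class ClusterRequest:
    nodes: int           # e.g. 100 nodes
    min_cores: int       # e.g. 8 cores minimum per node
    min_memory_gb: int   # e.g. 64 GB minimum per node
    drives_per_node: int
    media: str           # desired storage tier

def compose_cluster(req: ClusterRequest,
                    compute_pool: List[ComputeNode],
                    drive_pool: List[Drive]) -> List[Tuple[ComputeNode, List[Drive]]]:
    """Best-fit match: pick the smallest nodes that still satisfy the per-node
    minimums, so larger nodes stay free for bigger requests."""
    eligible = [n for n in compute_pool
                if n.cores >= req.min_cores and n.memory_gb >= req.min_memory_gb]
    # Sort by "slack" above the minimums so the tightest fits are chosen first.
    eligible.sort(key=lambda n: (n.cores - req.min_cores,
                                 n.memory_gb - req.min_memory_gb))
    drives = [d for d in drive_pool if d.media == req.media]

    if len(eligible) < req.nodes or len(drives) < req.nodes * req.drives_per_node:
        raise RuntimeError("not enough free inventory to satisfy this request")

    proposal = []
    for i, node in enumerate(eligible[:req.nodes]):
        start = i * req.drives_per_node
        proposal.append((node, drives[start:start + req.drives_per_node]))
    return proposal
```

The sketch only covers the selection step; in the product as Brian describes it, the proposal comes back in a couple of seconds and the operator hits go, at which point the drives are actually attached to the compute nodes over the fabric.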
So our orchestration layer basically allows you to provision separate virtual physical clusters in a secure, multi-tenant way — cryptographically secure — and you can encrypt the data too if you want. You can turn on over-the-wire encryption and data-at-rest encryption; think GDPR and things like that. But the different clusters cannot interfere with each other's workloads. And because you're on a fully switched Ethernet fabric, they don't interfere with each other's performance either. That secure multi-tenant part is critical for the orchestration and management of multiple scale-out clusters.

So in theory, if I'm doing this well, I can continually add capacity. I can upgrade my drives to SSDs, I can put in new CPUs as new great things come out, into my big — not my cloud, but my big bucket of resources — and then, using your software, continue to deploy those against applications as is most appropriate.

Can we switch seats? Let me ask the questions.

No, because I just keep adding to the capacity and it gets deployed based on what's optimal.

That's a great summary, because the basic problem we're trying to solve — this is the lesson from VMware, right? First it was: we had unused CPU resources, let's get those unused CPU cycles back. No CPU cycle shall go unused, right?

I thought they needed to keep 50 percent overhead just to make sure they didn't bump against the ceiling.

That's a little detail. That's a little detail. Anyway, the secondary effect was way more important. Once people decoupled their applications from physical purchase decisions and from rolling out physical hardware, and they stopped caring about any particular piece of hardware, they found that the simplified management — the one-button-push software deployment of applications — was a critical enabler for business operations and business agility. So we're trying to do what VMware did for those captive legacy application deployments, but for what has historically been bare metal big data application deployment. Seriously, in 2010, 2012, after virtualization took over the data center, the IT manager had his cup of coffee and he's leaning back — man, this is great, I have nothing else to worry about. And then a guy comes into his office from his cube, and the manager goes, what do you want? And he goes, well, I'd like you to deploy 500 bare metal nodes to run this thing called Hadoop. And the manager goes, I'll just give you 500 virtualized instances. And he goes, nope, not good enough, I want bare metal. So people started going back to bare metal, and since then it's gotten worse. What we're trying to do is restore the balance in the universe, right? And do for these scale-out clusters what virtualization did for the legacy applications. Does that make a little bit of sense?

Yeah. And is it heading in the other direction too, right — towards the atomic? You're trying to break the units of compute and storage down to the base, so you've got a unified baseline that you can buy more on volume than on a particular feature set in a particular CPU or a particular characteristic of a particular type of storage. This way you're doing it in software, and leveraging a whole bunch of it to satisfy, as you said, the minimum requirements for that particular application.

Yeah, absolutely.
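As a loose illustration of the "clusters cannot interfere with each other" guarantee Brian describes, here is a minimal Python sketch of the kind of check an orchestrator might run before activating a layout. The ClusterSpec fields and the check_isolation function are hypothetical stand-ins, not DriveScale's interface; the per-cluster encryption flags simply mirror the over-the-wire and at-rest options mentioned above.

```python
from dataclasses import dataclass
from typing import List, Set

# Illustrative multi-tenancy check -- hypothetical names.

@dataclass
class ClusterSpec:
    tenant: str
    node_ids: List[str]
    drive_ids: List[str]
    encrypt_in_flight: bool = False   # over-the-wire encryption
    encrypt_at_rest: bool = False     # data-at-rest encryption (think GDPR)

def check_isolation(clusters: List[ClusterSpec]) -> None:
    """Refuse any layout where two clusters share a compute node or a drive."""
    used_nodes: Set[str] = set()
    used_drives: Set[str] = set()
    for c in clusters:
        overlap = (used_nodes & set(c.node_ids)) | (used_drives & set(c.drive_ids))
        if overlap:
            raise ValueError(f"resources already allocated elsewhere: {sorted(overlap)}")
        used_nodes.update(c.node_ids)
        used_drives.update(c.drive_ids)
```

The design point is simply that each disaggregated resource belongs to at most one cluster at a time; performance isolation between clusters then comes from the switched Ethernet fabric rather than from software scheduling.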
And I think what's kind of critical about the timing of all this is that virtualization drove very much a model of commoditization of CPUs. Once VMware hit, people weren't deploying applications on particular platforms; they were deploying applications on a virtualized hardware model, and that was how applications were thought about from then on. A lot of these scale-out applications — all of them, really — are designed to be hardware agnostic. They want to run on bare metal because that's what they're designed for. When you deploy a scale-out application on bare metal — Apache Spark, say — it uses all of the CPU on the machine. You don't need virtualization, because it will use all the CPU, all the bandwidth, and all the disks underneath it. What we're doing is separating things out to provide lifecycle management between the two, but also to allow you to change the configurations dynamically over time. But this word atomic — the disaggregation part is the first step toward composability. You want to break it out, and I'll go ahead and say that the enterprise storage vendors got it right at one point; they did something good. When they broke captive storage out onto the network and provided the separation of compute and storage, before virtualization, that was a step toward gaining control and a sane management approach to what are essentially very different technologies evolving at very different speeds. And your comment — so what if you want to replace spinning disks with SSDs? That's easily done in a composable infrastructure, because it's a virtual function. You're just using software — software-defined data center — except now for the set of applications that slid past what was being done in the virtualized infrastructure and the network storage infrastructure.

And this really supports the trend that we see, which is the new age of: no, don't tell me what infrastructure I have so I can build an app and try to make it fit. It's really app first, and the infrastructure has to support the app. As a developer, and as a competitive business trying to get apps out to satisfy my marketplace, I'm now just assuming the infrastructure is going to support whatever I build — and this is how you enable that.

Right. And very importantly, the people that are writing all of these apps — the TensorFlow apps, and by the way, there are so many Apache things: Apache Kafka, Apache Spark...

Yes.

The Hadoops of the world, the NoSQL databases — Cassandra, Vertica, things like that.

MongoDB.

MongoDB, right. Let's just keep rolling these things off our tongues.

They're all CUBE alumni, so we've had them all on.

Oh, this is great, this is great. But they're all brilliant technologists, right, and they have defined applications that are so, so good at what they do. But they didn't all get together beforehand and say, hey, by the way, how can we work together to make sure that when this is all deployed, operating in pipelines and in parallel, it all just plays well together from the IT management perspective? They each solved their particular problems, and when it was just one application being deployed, no harm, no foul, right? When it's ten applications being deployed, and all of a sudden the line item for big data applications starts creeping past five, six percent and approaching ten percent, people start to get a little bit nervous about the operational cost, the management cost, the deployability,
and the lifecycle management I talked about — refreshes, tech refreshes, expansion. All these things that, when it's a small thing over there in a corner, okay, I'll just ignore it for a while.

Yeah.

Do you remember the old Adventure games? I'm dating myself. The Adventure game, when you water a plant: "Water, please, water, please," says the plant, and it's pitiful, so you give it water, and then it goes, "Water, water, give me water."

I'll have to look that one up. Okay. All right, so before I let you go — you've been at this for a while, you've seen a lot of iterations. As you look forward over the next little while, what do you see as some of the next big movements, the next big developments, as the IT evolution continues and every company is now an IT company or a software company?

So let's just say this is a great time. Why did I join DriveScale? A couple of reasons. This is a great time for composable infrastructure. Why is composable infrastructure important now? It does solve a lot of problems. You can deploy legacy applications on it and such, but those don't have any pain points per se. They're running in their virtualized infrastructure over here, on the enterprise storage over there. IBM still sells mainframes, right? So there's still stuff running on those boxes.

Yes, there is.

Just let it be. Just let it run — that just came up in Europe. Just let it run; there's no pain point there. It's these increasingly deployed scale-out applications. You know, in 2004, when the clock-speed wall was hit, everything went multi-core, and then parallel applications became the norm, and then it became scale-out applications for the Facebooks of the world, the Googles of the world, et cetera, for their applications. That scale-out is becoming the norm going forward for application architecture and application deployment. The more data you process, the more scale-out you need. And composable infrastructure is a critical part of getting that under control, and of giving you the flexibility and manageability to actually make sense of that deployment in the IT center at large. The second thing I want to mention is that flash has emerged, and that's driven something called NVMe over Fabrics — essentially a high-performance fabric interconnect that provides essentially local latency to remote resources. That is part of the composable infrastructure story today: you're accessing solid-state storage over the fabric at basically the speed of local access. All these things are coming together, driving a set of applications that are becoming both increasingly important and increasingly expensive to deploy. And composable infrastructure allows you to get a handle on controlling those costs and making it all a lot more manageable.

Yeah. It's a great summary, and clearly the amount of data that's going to be coming into these things is only going up, up, up. So, great conversation, Brian. Again, we've still got to go meet at Tehran later, so we'll make that happen.

Yes — great restaurant, Palo Alto.

Thanks for stopping by; I really appreciate the conversation.

And if you need to buy DriveScale, I'm your guy.

All right, he's Brian, I'm Jeff. Thank you for watching this CUBE conversation from our Palo Alto studios. Thanks for watching — we'll see you at a conference soon, I'm sure. See you next time.