From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. Hi, I'm Peter Burris and welcome again to another CUBE Conversation from our studios here in beautiful Palo Alto, California. With every CUBE Conversation, we want to bring smart people together and talk about something that's relevant and pertinent to the industry. Today we're going to be talking about the emergence of new classes of cloud provider who may not be the absolute biggest, but are nonetheless crucial to the overall ecosystem, because they're going to deliver new classes of cloud services to an expanding array of enterprise customers that need them. To have that conversation, and to explore some of the solutions that that class of cloud service provider is going to require, we've got Joel Diedrich with us today. Joel is the Vice President and General Manager of Network Storage Software at Toshiba Memory America. Joel, welcome to theCUBE. Thanks very much. So let's start with, who are you? My name's Joel Diedrich. I'm managing a new group at Toshiba Memory America involved in building software that will help our customers create a cloud infrastructure that's much more like those of the Googles and Amazons of the world, but without the enormous teams that are required if you're building it all yourself. Now, Toshiba is normally associated with a lot of hardware. How does software play into this? Well, you know, flash is changing rapidly, more rapidly than maybe the average guy on the street realizes, and one way to think about it is that inside an SSD there's a processor that's not too far short of the average Xeon in compute power, and it's busy. So there's a lot more work going on in there than you might think.
We're really bringing that up a level and doing that same sort of management across groups of SSDs to provide a network storage service that's simple to use and simple to understand, but under the hood we're pedaling pretty fast, just as we are today in the SSD. So the problem that I articulated up front was the idea that as we get greater specialization in enterprise needs from cloud, there are going to be greater numbers of different classes of cloud service provider, whether that's SaaS, or distinguished by location, by different security requirements, or whatever else it might be. What is the specific issue that this emerging class of cloud service provider faces as they try to deliver really high-quality services to these new, more specialized end users? Well, let me first define terms. Cloud service provider can mean many things. In addition to someone who sells infrastructure as a service or platform as a service, we can also think about companies that deliver a service to consumers through their phone and have a data center backing that, because of the special requirements of those applications. So we're serving that panoply of customers. They face a couple of issues that are a result of the trajectory of flash and of storage of late, and one of those is that we as flash manufacturers have a bit of an innovator's dilemma. That's a term we use here in the Valley that I think most people know. Our products are too good. They're too big, they're too fast, too expensive to be a good match for a single compute node. And so you want to share them. And so the game here is, can we find a way to share this really performant, million-IOPS dragon across multiple computers without losing that performance? So that's step one: how do we share this precious resource?
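The sharing problem Joel describes can be sketched in a few lines. This is a hedged illustration, not Toshiba's actual software: one physical drive offers far more capacity and IOPS than any single compute node needs, so a disaggregation layer carves it into per-node slices. The sizes, IOPS figures, and class names here are all invented for the example.

```python
# A minimal sketch of carving one shared SSD into per-node slices.
# Not a real driver: it only tracks capacity and IOPS budgets.

from dataclasses import dataclass

@dataclass
class Slice:
    node: str
    capacity_gb: int
    iops: int

class SharedSsd:
    """One physical drive, e.g. 15 TB and ~1M IOPS, shared across nodes."""
    def __init__(self, capacity_gb: int, iops: int):
        self.free_gb = capacity_gb
        self.free_iops = iops
        self.slices: list[Slice] = []

    def carve(self, node: str, capacity_gb: int, iops: int) -> Slice:
        # Refuse to hand out more than the drive physically has.
        if capacity_gb > self.free_gb or iops > self.free_iops:
            raise ValueError("drive oversubscribed")
        self.free_gb -= capacity_gb
        self.free_iops -= iops
        s = Slice(node, capacity_gb, iops)
        self.slices.append(s)
        return s

# Four nodes each take a modest slice of the "million-IOPS dragon",
# and plenty of capacity and performance remain for others.
drive = SharedSsd(capacity_gb=15_000, iops=1_000_000)
for n in range(4):
    drive.carve(f"node-{n}", capacity_gb=2_000, iops=150_000)
print(drive.free_gb, drive.free_iops)   # 7000 400000
```

The point of the sketch is the mismatch Joel names: no single node would use the whole drive, so the interesting engineering is in the layer that shares it without losing performance.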
Behind that is an even bigger one that takes a little longer to explain, and that is, how do we optimize the use of all the resources in the data center in the same way that the Googles and Amazons do, by moving work around between machines in a very fluid and very rapid way? To do that, you have to have the storage visible from everywhere and you have to be able to run any instance anywhere. That's a tall order, and we don't solve the whole problem, but we're a necessary step. And the step we provide is, we'll take the storage out of the individual compute nodes and serve it back to you over your network, but we won't lose the performance that you're used to having with it locally attached. Okay, so let's talk about the technical elements required to do this. Describe it from the SSD, from the flash node, up. I presume it's NVMe. So NVMe, I'm not sure all of our listeners today really know how big a deal that is. You know, there have been two block storage command sets, the fundamental commands that you give to a block storage device, in my professional lifetime. SCSI was invented in 1986, back when high-performance storage was two hard drives attached to a ribbon cable in your PC, and it's lasted up until now. If you go to a random data center and take a random storage wire, it's still going to be transporting the SCSI command set. NVMe came out in 2012, 26 years later: the first genuinely new command set. And there's an alphabet soup of transports; the interfaces and formats that you can use to transport SCSI around would fill pages, and we sort of tune them out, and we should. We're now embarking on that same journey again, except with a command set that's ideal for flash. We've given up on, or left behind, the need to be backward compatible with hard disks, and we've said, let's build a command set and an interface that are optimal for this new medium, and then let's transport that around.
So NVMe over Fabrics is the first transport for the NVMe command set, and what we're doing is building software that will allow you to take a conventional x86 compute node with a lot of NVMe drives, wrap our software around it, and present it out to your compute infrastructure so that it looks like locally attached SSDs, at the same performance as locally attached SSDs, which is the big trick, but now you get to share them optimally. We do a lot of other optimal things inside the box, but they ultimately don't matter to customers. What customers see is: I get to have the exact size and performance of flash that I need, at every node, for exactly the time I need it. So say I'm a CTO at one of these emerging cloud companies, and I know I'm not going to be adding a million machines a year. Maybe I'm going to be adding 10,000, maybe 50,000 or 100,000. So I can't afford the engineering staff required to build my own soup-to-nuts set of software. Can't roll it all yourself. Okay, so how does this fit into that? This is the assembly kit for the lowest layer of that. We take the problem of turning raw SSDs into a block storage service and solve it for you. We have a very sharp line there. We're not trying to be a filer, and we're not trying to be EMC here. It's a very simple but fast and rugged storage service box. It interfaces to your provisioning system, to your orchestration system, to your telemetry systems, and no two of those are alike. So there's a fair amount of customization still involved, but we stand ready to do that. I mean, that's the alternative: you could try to tie this all together yourself. Yeah, or Toshiba does it, yes. So that's the problem we're solving: we're enabling the optimum use of flash, and maybe more subtly but more importantly in the end, we're allowing you to disaggregate it so that you no longer have storage pinned to a compute node, and that enables a lot of other things that we've talked about in the past.
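The target-and-initiator shape of that arrangement can be modeled in a few lines. This is a hedged toy model, not an NVMe-oF driver: the NQN string, device-path format, and class names are invented for illustration, but the structure mirrors what Joel describes, a storage box exports namespaces over the fabric, and each compute node that connects sees devices that look locally attached.

```python
# Toy model of the NVMe-oF arrangement: a target exports namespaces,
# an initiator connects and maps them to local-looking device names.
# Illustrative only; no actual I/O or fabric protocol involved.

class FabricTarget:
    """The storage box: an x86 node full of NVMe drives, exporting namespaces."""
    def __init__(self, nqn: str):
        self.nqn = nqn
        self.namespaces: dict[int, int] = {}   # nsid -> size in GB

    def add_namespace(self, nsid: int, size_gb: int) -> None:
        self.namespaces[nsid] = size_gb

class Initiator:
    """A compute node: connecting maps remote namespaces to local device names."""
    def __init__(self):
        self.devices: dict[str, tuple[str, int]] = {}

    def connect(self, target: FabricTarget) -> list[str]:
        names = []
        for i, (nsid, size) in enumerate(sorted(target.namespaces.items()), 1):
            dev = f"/dev/nvme{i}n{nsid}"        # looks like a local drive to the host
            self.devices[dev] = (target.nqn, size)
            names.append(dev)
        return names

target = FabricTarget("nqn.2019-01.example:storage-box-0")   # hypothetical NQN
target.add_namespace(1, 500)
host = Initiator()
print(host.connect(target))   # ['/dev/nvme1n1']
```

The design point the sketch captures is Joel's "sharp line": the box only hands out block namespaces; everything above that, filesystems, databases, orchestration, belongs to the customer's stack.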
That's a big feature of the cloud operating model: the idea that any application can address any resource and any resource can serve any application, and you don't end up with dramatic or significant barriers in the infrastructure as to how you provision those instances and operate them. Absolutely. I mean, the example that we see all the time in the service providers that are providing some service through your phone is that they all have a time-of-day rush or a Christmas rush, some sort of peaks to their workloads. And how do they handle the demand peaks? Well, today they buy enough compute hardware to handle the peak, and the rest of the year it sits idle. And the peak can be 300% of the average pretty easily; imagine the traffic to a shopping site on Black Friday versus the rest of the year. If the customer gets frustrated and goes away, they don't come back, and so you have data centers' worth of machines doing nothing. And then over on the other side of the house you have the machine learning crew, who can use infinite compute resource but don't have a time demand. It just runs 24/7, and they can't get enough machines, and they're arguing for more budget, and yet we've got hundreds of thousands of machines doing nothing. I mean, that's a pretty big piece of waste right there. You're just saying that the ML guys can't use the retail resources, and the retail guys can't use the ML resources, and what you're trying to do is make it easier for both sides to utilize the resources that are available on both sides. Exactly so, exactly so. And that requires more than one thing, but one of the things it requires is that any given instance's storage can't be pinned to some compute node, otherwise you can't move that instance.
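The waste Joel describes is easy to put numbers on. A back-of-envelope sketch, with fleet sizes that are purely illustrative, assuming the peak is 3x the typical demand, as in his "300%" example:

```python
# Back-of-envelope arithmetic for a fleet sized to a 3x demand peak.
# All figures are illustrative, not from the conversation.

peak_machines = 3_000          # fleet sized for the Black Friday peak
avg_demand = 1_000             # typical off-peak need (peak is 3x this)

idle = peak_machines - avg_demand
utilization = avg_demand / peak_machines

print(idle)                     # 2000 machines doing nothing most of the year
print(round(utilization, 2))    # 0.33

# If storage is disaggregated so ML instances can land on idle retail nodes,
# those same machines become usable 24/7 compute instead of stranded capacity.
ml_capacity_reclaimed = idle
print(ml_capacity_reclaimed)    # 2000
```

Two thirds of the fleet stranded off-peak is exactly the gap between the retail side and the ML crew that Joel is pointing at; disaggregated storage is what lets either workload land on any machine.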
It has to be visible from anywhere, and there are some other things that need to work in order to move instances around your data center under load, but this is a key one, and it's a tough one, and solving it without ruining performance is the hard part. I mean, network storage isn't a new thing; that's been going on a long time. Network storage at the performance of a locally mounted NVMe drive is a tough trick, and that's the new thing here. But it's also a toolkit, so that what appears to be a locally mounted NVMe drive, even though it may be remote, can also be organized into other classes of services. So how does this work with, for example, Kubernetes clusters: stateless, but still having storage that's really fast, really high performance, very reliable, very secure? How do you foresee this technology supporting and even catalyzing changes to those Kubernetes and Docker classes of containerized workloads? So for one, we implement the interface to Kubernetes, and Kubernetes is a rapidly moving target. I mean, I love their approach. They have a very fast version clock: every month or two there's a new version, and their support attitude is, if you're not within the last version or two, don't call. You keep up. And that's sort of not the way the storage world has worked. So our commitment is to connect to that and make that connection stay put as you follow a moving target. But then where this is really going is the need for really rapid provisioning. In other words, it's not the model of the IT guy sitting at a keyboard, attaching a disk to a stack of machines that's running some application, and coming back in six months to see if it's still okay. As we move from containerized services to serverless kinds of ideas, in the serverless world the average lifespan of an application is 20 seconds. So we've got to spool it up, load the code, give it its state, run it, and kill it pretty quickly.
Millions of times a minute. And so you need to be light on your feet to do that, so we're putting a lot of energy behind the scenes into making software that can handle that sort of dynamic environment. So how does this resource that allows you to present a distant NVMe drive as if it were mounted locally catalyze other classes of workloads, or new classes of workloads? You mentioned ML. Are there other workloads you see on the horizon that will turn into services from this new class of cloud provider? Well, I think one big one is the serverless notion, and let me just digress on that a little bit. In the classic enterprise, the assignment of work to machines lasted for the life of the machine, right? That group of machines belongs to engineering, those are accounting machines, and so on, and no IT guy in his right mind would think of running engineering code on the accounting machines, or whatever. In the cloud, we don't have a permanent assignment anymore: you rent a machine for a while and then you give it back. But the user is still responsible for figuring out how many machines or VMs he needs, how much storage he needs, and doing the calculation and provisioning all of that. In the serverless world, the user gives up all of that and says, here's the set of calculations I want to do, trigger it when this happens, and you, Mr. Cloud Provider, figure out whether this needs to be sharded out 500 ways or 200 ways to meet my performance requirements, and as soon as these are done, turn them back off again, on a time scale of 10 seconds. And so what we're enabling is further movement in the direction of taking the responsibility for provisioning and scaling out of the user's hands and making it automatic. We let users focus on what they want to do, not how to get it done. So this really is not an efficiency play when you come right down to it.
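The provisioning tempo the serverless model demands, attach storage just before a 20-second function runs, release it the instant it exits, can be sketched as a scoped allocation. This is a hedged illustration; the pool and volume API here are invented, not any vendor's interface:

```python
# Sketch of serverless-tempo provisioning: capacity is held only for the
# lifetime of a short-lived function, then returned to the shared pool.
# The VolumePool API is hypothetical, for illustration only.

from contextlib import contextmanager

class VolumePool:
    def __init__(self, free_gb: int):
        self.free_gb = free_gb

    @contextmanager
    def volume(self, size_gb: int):
        # Attach: carve capacity out of the shared pool.
        if size_gb > self.free_gb:
            raise RuntimeError("pool exhausted")
        self.free_gb -= size_gb
        try:
            yield f"vol-{size_gb}gb"   # hypothetical volume handle
        finally:
            # Detach: capacity returns the moment the function exits,
            # even if it raised an exception.
            self.free_gb += size_gb

pool = VolumePool(free_gb=1_000)
with pool.volume(100) as vol:
    in_use = pool.free_gb          # 900 while the 20-second function runs
print(in_use, pool.free_gb)        # 900 1000
```

The scoped with-block is the key idea: when the allocation lifetime is seconds rather than months, provisioning has to be automatic and self-cleaning, which is exactly the shift away from the IT-guy-at-a-keyboard model Joel describes.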
This really is changing the operating model so that new classes of work can be performed, so that the overall infrastructure becomes more effective and matches the business needs better. You know, it's really both. I mean, there's a tremendous efficiency gain, as we talked about with the ML versus the marketplace, but there are also things you just can't do without an infrastructure that works this way. So there's an aspect of efficiency, and there's an aspect of, man, this is just something that we have to do to get to the next level of the cloud. Excellent. So do you anticipate that this portends some changes to Toshiba's relationships with, you know, different classes of suppliers? I really don't. Toshiba Memory Corporation is a major supplier of both flash and SSDs to basically every class of storage customer, and that's not going to change. They are our best friends, and we're not out to compete with them. We're serving a really unmet need right now. We're serving a relatively small group of customers who are cloud first, cloud always. They want to operate in that sort of cloud style, but, as you said earlier, they really can't invent it all soup to nuts with their own engineering. They need some pieces to come from outside, and we're just trying to fill that gap. That's the goal here. Got it. Joel Diedrich, Vice President and General Manager, Network Storage Software, Toshiba Memory America. Thanks very much for being on theCUBE. My pleasure, thanks. And once again, this is Peter Burris, and this has been another CUBE Conversation. Until next time.