Hi, I'm Peter Burris, and welcome to another CUBE conversation from our wonderful studios in Palo Alto, California. Today we're talking storage, and not just any kind of storage, but fast, intelligent storage. We've got NGD Systems with us. Welcome back to theCUBE, Nader Salessi, CEO and founder, and Scott Shadley, VP of Marketing.

Good to see you again, Peter.

So the last time we were here, we had a great conversation about the role that storage was going to play in overall system performance. And Nader, when I think of NGD Systems, I think of really smart people doing great engineering to create really fast, high-performance products. Where are we in the state of the art of fast storage, fast systems?

What we are learning from customers is that demand for storage continues to grow exponentially. They want larger capacity per drive. The other challenge they have is that physical space is always limited. And then there's power consumption. It's not necessarily just the power consumption of the device. They have all sorts of resources to budget when implementing their hyperscale data centers: physical space, buying servers, network, storage. The challenge they face is that the power available from utility companies is limited. They cannot overcome that. So if they need to double the capacity of their storage in a year's timeframe, they cannot get more access to the utilities and the power. So they need to focus on energy efficiency.

So when I think of NGD Systems, what I should think about is smart, fast, and efficient, from a power standpoint.

That's correct. That's one of the areas where we are focusing a lot: being energy efficient. We are improving watts per terabyte by a factor of 10 compared to the best-in-class solid state drives that exist in the industry.

Let me make sure I've got that. So you've improved wattage by a factor of 10 for the same capacity.

Correct.
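To make the fixed-power-budget argument concrete, here is a back-of-the-envelope sketch. All numbers (the 1 kW budget, the 1 W/TB baseline) are hypothetical illustrations, not NGD Systems specifications; only the 10x watts-per-terabyte improvement comes from the conversation.

```python
# Hypothetical numbers to illustrate why watts per terabyte matters when
# the utility power feed is fixed. These are not NGD Systems specs.
RACK_POWER_BUDGET_W = 1000.0                 # assumed power available for storage
BASELINE_W_PER_TB = 1.0                      # assumed conventional SSD efficiency
IMPROVED_W_PER_TB = BASELINE_W_PER_TB / 10   # the 10x improvement cited above

def deployable_capacity_tb(budget_w: float, w_per_tb: float) -> float:
    """Capacity that fits inside a fixed power envelope at a given efficiency."""
    return budget_w / w_per_tb

baseline_tb = deployable_capacity_tb(RACK_POWER_BUDGET_W, BASELINE_W_PER_TB)
improved_tb = deployable_capacity_tb(RACK_POWER_BUDGET_W, IMPROVED_W_PER_TB)
print(improved_tb / baseline_tb)  # capacity scales inversely with W/TB
```

The point of the model: if the utility feed cannot grow, deployable capacity scales inversely with watts per terabyte, so a 10x efficiency gain is effectively a 10x capacity gain in the same building.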
Meaning we are improving watts per terabyte in the same physical space. And that's the challenge the industry is facing. The next challenge that all the hyperscalers are facing, and that we are learning from them, is that moving data is a challenge. It just takes time and it's not efficient. So the more they can do inside of the drive to manipulate the data without moving it, the better; that's what they are looking for. And that's exactly where we are focusing with our third-generation product, which we are introducing in the fourth quarter of this year. We are introducing a mass-producible solution that can be taken to mass production.

So give us an example of that, because I know you were one of the first suppliers of technology that did things like MapReduce down closer to the data. Is that the basic notion we're talking about here, and what are the use cases?

There are by far a lot more use cases. I'll let Scott go into some of the use cases we have implemented, as examples, with some large partners, which we are also announcing this coming week or next week during FMS. So Scott, do you want to explore?

Yeah, absolutely. So Peter, just to your point, there are a lot of different ways you can look at making storage intelligent. We took a different direction. We're not trying to just do simple things like minor database applications. We're going for what's new and innovative in the way of things like AI and machine learning. So we talked last time a little bit about this image similarity search concept. As Nader mentioned, we're going to be live with a guest speaker at FMS implementing a version of that.

Now, FMS is the Flash Memory Summit.

Yes, for those that don't know, the Flash Memory Summit that happens every year.
Other things that we've worked on, again with partners, relate to things like relational databases and being able to do things like run Google TensorFlow live in the drive. We've also been able to port Docker containers directly into the drive. So there's now the ability for a customer to take any application, whatever format it's in, literally drop it in container format into the drive, and execute the commands in place on the data. And we're seeing improvements of 10 to 50x on execution time of those applications, because they're not physically moving data around.

So to kind of summarize: if a customer or user has a choice between moving 50 terabytes of raw data around, as opposed to moving maybe a couple hundred kilobytes or a megabyte of application down to the drive, then obviously you want to move the smaller one. But that requires a fair amount of processing power and control to be located very close to the data. So how's that happening?

By architecting the fundamentals of the storage from scratch, we are able to provide the right solution. Within each SSD there is a controller for managing the flash and interfacing with the host. In addition, as part of that, we have embedded additional resources. Part of it is a quad-core, 64-bit application processor running at at least a gigahertz, so that an application can come down and run on it, or an operating system can run on it. In addition to that, we have embedded hardware accelerators to accelerate certain functions that make sense to be done there. Plus there is access to the data, readily available at a much higher bandwidth than the host interface. So that's how we are addressing it. And of course, we provide the complete software stack to make it easy for customers to bring their applications, rather than starting from scratch or having a very specialized, custom solution.
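The move-the-compute-rather-than-the-data trade-off above can be sketched with simple arithmetic. The 50 TB dataset and ~1 MB application come from the conversation; the 3 GB/s host link speed is an assumed, illustrative figure for a PCIe/NVMe drive, not a measured NGD number.

```python
# Sketch of the data-movement economics described above.
# Link speed and container size are illustrative assumptions.
TB = 1e12  # bytes
MB = 1e6   # bytes

def transfer_seconds(size_bytes: float, link_bytes_per_s: float) -> float:
    """Time to push a payload over a link at a given sustained rate."""
    return size_bytes / link_bytes_per_s

host_link = 3e9          # assumed ~3 GB/s sustained host interface
dataset = 50 * TB        # the 50 TB of raw data mentioned above
container = 1 * MB       # ~1 MB containerized application pushed to the drive

move_data_s = transfer_seconds(dataset, host_link)     # hours of bus traffic
move_code_s = transfer_seconds(container, host_link)   # well under a millisecond
print(move_data_s, move_code_s)
```

Under these assumptions, shipping the dataset to the compute takes on the order of hours, while shipping the application to the data is effectively instantaneous, which is the intuition behind the reported 10 to 50x execution-time gains.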
So if I'm a CIO, or if I'm a senior person in infrastructure, I'm thinking: what workloads naturally lend themselves to high degrees of parallelism? And then I'm thinking: how can I move more of that parallelism closer to the data?

Exactly. That's right, correct.

All right, so how's this turning into product?

From that perspective, we've now released two platforms, which we've called the Catalina family of solutions, and they've been POCs, prototypes, and some limited production volume. As Nader mentioned, our third platform, which we're calling the Newport platform, is going to be an ASIC-based solution that can drive the mass-market adoption he referenced. And there's a whole bunch of unique things about it. We have the application co-processors. It's the first SSD controller ever done on a 14-nanometer process node, so that's where the energy efficiency piece comes into play, and how we can deliver the densities customers are looking for. Because right now there's a challenge in the market: building a large enough drive with the right performance characteristics and power consumption to solve the need.

So you're working through locations in Southern California, from Catalina to Newport; in the next couple of years you'll be in the San Bernardino Mountains.

Sure.

So as we think about where the technology is, give us an example of the performance improvements you're seeing from an overall benchmark standpoint.

One of the other use cases that may not be intuitive is content delivery, specifically video content delivery. The new generation of content is large; it's massive, and it requires a massive amount of storage. The old-fashioned way of doing it, they have multiple drives in a server; they all converge and go through another server for encryption and authentication. Well, we are moving that function inside of the storage.
Now, all of a sudden, in the same server, instead of everything converging and going through one narrow pipe, all the drives concurrently can serve multiple subscribers in parallel, by more than a factor of 10. And that's substantial from a performance point of view. So it's not necessarily the old-fashioned way of measuring it, asking what's the IOPS; those are the old ways of measuring. The new way is asking how the end users can access the data without hitting a bottleneck. And that's another use case.

The other use case, as Scott mentioned, is doing image similarity search. In the old-fashioned way, when they were accessing a billion images, it works fine with current off-the-shelf SSDs and the current servers and GPUs. The challenge they face is that as they increase this database to a trillion images, it just cannot be done the old way. So it's more than just how many gigabytes they push through or how many IOPS; it's being able to look at it from the system-level point of view: how many subscribers or customers can access it concurrently.

So you're describing a number of relatively specialized types of applications, but nonetheless applications of significant value to their businesses. But let's talk just for a second about how a customer would employ the technology. Customers don't mind specialized, or more specialized, devices as long as they fit within the conventions for how they get used. So what's the complexity of introducing your product?

Very good point that you're raising. Fundamentally, we are a solid state drive: block storage based on PCIe and NVMe, without any special drivers. They plug it in, it's plug and play, it works. On top of that, for the block storage scenario, we address the highest capacity and the lowest power consumption, or lowest watts per terabyte, servicing the majority of the market, which nowadays is focused more on reads, and consistent reads, rather than, again, what's the IOPS or how fast is the write.
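The content-delivery gain described above is essentially a fan-in bottleneck being removed. Here is a toy throughput model of it; the drive count, per-drive bandwidth, and host-pipe bandwidth are all hypothetical illustrations, not measurements, and only the "more than a factor of 10" claim comes from the conversation.

```python
# Toy model of the content-delivery example: N drives funneled through one
# host pipe versus each drive serving subscribers directly in parallel.
# All bandwidth figures are assumptions for illustration.
from typing import Optional

def aggregate_gb_s(num_drives: int, drive_gb_s: float,
                   host_pipe_gb_s: Optional[float]) -> float:
    """Total delivered bandwidth; a finite host pipe caps the aggregate."""
    raw = num_drives * drive_gb_s
    return min(raw, host_pipe_gb_s) if host_pipe_gb_s is not None else raw

drives, per_drive = 24, 3.0                                    # hypothetical server
funneled = aggregate_gb_s(drives, per_drive, host_pipe_gb_s=6.0)   # one narrow pipe
in_situ = aggregate_gb_s(drives, per_drive, host_pipe_gb_s=None)   # drives serve directly
print(in_situ / funneled)  # 12.0 under these assumed numbers
```

With these made-up numbers the parallel path delivers 12x the funneled path, which is the shape of the "more than a factor of 10" subscriber concurrency gain described.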
So our architecture and algorithms are set up so that we provide a very narrow band of consistent latency, no matter what workload they put on it, and provide the right solution for them. Then, on top of that, if they have a specialized workload or use case, they can still enable this capability with a simple software switch.

So Scott, when you think about partners and the ecosystem, and I know that we talked about this a bit last time, getting started, expanding it, where are we in terms of NGD Systems getting to market?

Absolutely. From that perspective, we've gone beyond the proof-of-concept-only phase. We've actually got production orders that have shipped to customers. We're starting to see that roll out in the back half of this quarter. As Nader mentioned, as we roll into Q4 with the new product, we then upgrade those customers and start getting into even larger rollouts. But it's not just a couple of mom-and-pop shops; it's some big names, some high-level partners. And we're starting to build out the ecosystem of how to deliver it, through server ODMs or other partners that can play off of the system, whether storage array providers or even some of the big-box players.

So we're now here with Newport. You've no doubt got plans, and we don't have to go too deep into them, but as your company starts to scale, what's the cadence going to look like? Are you going to be able to continue to push the state of the art from a performance, smarts, and energy efficiency standpoint?

Absolutely. There are already things in the pipeline for the next generation: bringing more intelligence inside of the drive, with more resources for a lot more workloads, so it can adapt itself to many, many use cases, going from the maybe dozen use cases it might be today toward an open-ended set. It truly is a platform, rather than something unique to one application. And then we're going to expand on that in the next generation.
So obviously, as we ramp up the first generation of product into mass production, R&D is working on the next generation of the intelligence they're going to put into it, and we continue that cadence. And of course we scale the company accordingly.

Great news from NGD Systems. Nader?

It is wonderful. At this coming FMS, we are announcing our new generation of product, as well as announcing a close partner, one of the hyperscalers, with whom we are introducing the next generation of product.

Fantastic. Scott Shadley, VP of Marketing; Nader Salessi, CEO and founder; NGD Systems, thanks very much again for being on theCUBE. And to you, once again, thanks for watching this CUBE conversation. Until we meet again, thanks for watching.