Now I'm joined by Dave Vellante, an analyst with Wikibon, a colleague here at Wikibon, and co-CEO of SiliconANGLE. Dave, welcome to theCUBE. Dave, there's a lot of conversation about AI. What is it about today that is making AI so important to so many businesses?

Well, I think there are three things, Peter. The first is the data. We've been on this decade-long Hadoop bandwagon, and what that did is really focus organizations on putting data at the center of their business. Now they're trying to figure out, okay, how do we get more value out of that? The second piece is that the technology is now becoming available. AI, of course, has been around forever, but the infrastructure to support it, the GPUs, the processing power, flash storage, and deep learning frameworks like TensorFlow and Caffe, has really started to come to the marketplace. So the technology is now available to act on that data. And I think the third is that people are trying to get digital right. This is about digital transformation. Digital means data; we talk about that all the time. Every corner office is trying to figure out what its digital strategy should be. They're trying to remain competitive, and they see automation and artificial intelligence, machine intelligence applied to that data, as a linchpin of their competitiveness.

So a lot of people talk about the notion of data as a source of value, and there's been some presumption that it's all going to the cloud. Is that accurate?

Well, it's funny you say that, because as you know, we've done a lot of work on this. I think the thing organizations have realized in the last ten years is that bringing five megabytes of compute to a petabyte of data is far more viable than moving the petabyte of data to the compute. As a result, the pendulum is really swinging in many different directions. One is the edge: that data is going to stay there. Certainly the cloud is a major force. And most of the data today still lives on premises.
And that's where most of the data is likely going to stay. So no, all the data is not going to go into the cloud.

Well, not the central cloud.

Yeah, that's right, the central public cloud. You can maybe redefine the boundaries of the cloud. I think the key is that you want to bring that cloud-like experience to the data. We've talked about that a lot in the Wikibon and CUBE communities, and that's all about simplification and the cloud business model.

So that suggests pretty strongly that there is going to continue to be a relationship between choices about hardware infrastructure on premises and the success at making some of these advanced, complex workloads run, and scream, and really drive some of those innovative business capabilities. As you think about that, what is it about AI technologies, algorithms, and applications that has an impact on storage decisions?

Well, the characteristics of the workloads are going to change. Oftentimes it's going to be largely unstructured data. There are going to be a lot of small files, and they're going to be somewhat randomly distributed. As a result, that's going to change the way people design systems to accommodate those workloads. There's going to be a lot more bandwidth, and a lot more parallelism in those systems, in order to keep those processors busy. We're going to talk more about that, but the workload characteristics are changing, so the fundamental infrastructure has to change as well.

So our goal, ultimately, is to ensure that we can keep these new high-performing GPUs saturated by flowing data to them, without a lot of spiky performance throughout the entire subsystem. Have I got that right?

Yeah, I think that's right. That's what I was talking about with parallelism. That's what you want to do.
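The saturation point Dave is making, overlapping storage I/O with compute so the processor never waits on data, can be sketched as a bounded producer/consumer prefetch pipeline. This is a minimal illustration, not tied to any particular framework; `load_batch` and `train_step` are hypothetical stand-ins for reading small files from storage and for a GPU compute step.

```python
import queue
import threading

def load_batch(i):
    # Hypothetical stand-in for reading one batch of small files from storage.
    return [i] * 4

def train_step(batch):
    # Hypothetical stand-in for one unit of GPU compute.
    return sum(batch)

def prefetching_pipeline(num_batches, depth=8):
    """Overlap I/O with compute: a reader thread keeps a bounded queue
    full so the (simulated) processor stays busy instead of stalling."""
    q = queue.Queue(maxsize=depth)

    def reader():
        for i in range(num_batches):
            q.put(load_batch(i))
        q.put(None)  # sentinel: no more data

    threading.Thread(target=reader, daemon=True).start()

    results = []
    while True:
        batch = q.get()
        if batch is None:
            break
        results.append(train_step(batch))
    return results

print(prefetching_pipeline(5))  # [0, 4, 8, 12, 16]
```

In real systems the same shape appears as multiple parallel reader workers feeding a device queue; the bounded depth is what smooths out the "spiky performance" mentioned above.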
You want to be able to load up that processor, especially these alternative processors like GPUs, and make sure it stays busy. The other thing is that when there's a problem, you don't want to have to restart the job. You want to have real-time error recovery, if you will. That's been crucial in the high-performance world for a long, long time, because these jobs, as you know, take a long, long time. To the extent that you don't have to restart a job from ground zero, you can save a lot of money.

Yeah, especially as, as you said, we start to integrate some of these AI applications with the operational applications that are actually recording the results of the work being performed, or the prediction being made, or the recommendation being proffered. So if we start thinking about this crucial role that AI workloads are going to have in business, and that storage is going to have on AI, moving more processing close to the data, et cetera, that suggests there are some changes in the offing for the storage industry. What are you thinking about how the storage industry is going to evolve over time?

Well, there's certainly a lot going on in hardware. We always talk about software-defined, but some hardware still matters, right? Obviously flash storage changed the game relative to spinning mechanical disk, and that's part of this. You're also, as I said before, seeing a lot more parallelism, and high bandwidth is critical. A lot of the discussion we're having in our community is about the affinity between HPC, high-performance computing, and big data; I think that was pretty clear, and now that's evolving to AI. So the internal network, things like InfiniBand, is pretty important. NVMe is coming onto the scene. Those are some of the things we see. And I think the other one is file systems.
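The error-recovery point above, resuming from a checkpoint rather than restarting from ground zero, can be sketched as follows. This is a minimal illustration under stated assumptions: the checkpoint file name and state layout are hypothetical, and a real training framework would checkpoint model weights and optimizer state rather than a running total.

```python
import json
import os

CKPT = "train_state.json"  # hypothetical checkpoint path

def save_checkpoint(step, total, path=CKPT):
    # Persist enough state to resume exactly where we left off.
    with open(path, "w") as f:
        json.dump({"step": step, "total": total}, f)

def load_checkpoint(path=CKPT):
    # Recover saved state, or start fresh if no checkpoint exists.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"step": 0, "total": 0}

def run_job(num_steps, path=CKPT):
    """Resume from the last completed step instead of restarting the job."""
    state = load_checkpoint(path)
    total = state["total"]
    for step in range(state["step"], num_steps):
        total += step  # stand-in for one unit of real work
        save_checkpoint(step + 1, total, path)
    return total
```

If the process dies mid-run, the next invocation of `run_job` picks up at the last saved step, which is the "don't restart from ground zero" behavior long-running HPC and AI jobs depend on.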
NFS tends to deal really well with unstructured data and data that is sequential, when you have all this streaming, for example.

Exactly. And when you have what we just described, this sort of random nature, and you have the need for parallelism, you really need to rethink file systems. File systems are, again, a linchpin of getting the most out of these AI workloads. And I think the other piece is, as we said about the cloud model, you've got to make this stuff simple. If we're going to bring AI and machine intelligence workloads to the enterprise, they have to be manageable by enterprise admins. You're not going to be able to require a data scientist to deploy this stuff. So it's got to be simpler, cloud-like.

Fantastic. Dave Vellante, Wikibon, thanks very much for being on theCUBE.

My pleasure.