Hey, welcome back everybody. Jeff Frick here with theCUBE. We're wrapping up our day at the Open Compute Project Summit in Santa Clara, California, 2017. There are about 3,000 people here, a really smart bunch of technical people who are deep into the guts of cloud, of compute, storage, and networking. And we're excited to have David Floyer, CTO and co-founder of Wikibon, who's been perusing the floor all day, for our last segment to wrap the show and give us an update. So first off, David, always great to see you.

Thank you. Thanks for having me here.

Absolutely. So let's jump into the three horsemen of the computer industry, right? Compute, storage, and networking. Let's start with compute. What have you seen here that you're excited about, that's changing the game?

Well, we're seeing Intel, of course, come out with a whole number of new announcements in the processor game. But the really interesting one is the ARM processor from Qualcomm that's coming out. What's interesting about it is that it's the first 10-nanometer server processor, and it's coming out of the mobile phone area. So you're seeing mobile technology now coming into server technology, which, if you think about it, is a replay of what Intel did many years ago, when they came from the PC up into the servers. So for the first time, we're seeing mobile moving into the server area.

Which, as we know, scale is such a powerful driver of innovation. It's cameras and everything else that have come out of the billions of mobiles produced every year, and they're showing up all over the place.

Absolutely. So that's very interesting to see come in. And then the second thing which is interesting is to see OpenPOWER come in, and they've come in with the POWER9 series.
It's not yet generally available, but they've got it here in demonstration mode in a number of the booths. And what's interesting there is that you've got the fastest I/O in Gen 4 PCIe, which is very much faster than Gen 3 (roughly double the per-lane bandwidth). And at the top end, POWER is doing a very nice job of going after that top-end space. So you're seeing new processors coming in at the bottom with ARM and Qualcomm, and then you're seeing the higher-performance ones coming in from POWER at the top. So it's interesting to see that OCP is embracing all three platforms at this convention.

And we've seen POWER grow. They had a little co-located event at the NVIDIA GPU Tech Conference that we first covered a couple of years ago.

That's right, yes.

And really, you know, IBM's behind it, but they're taking an open-source approach to drive innovation. And you're starting to see that it's really gaining traction.

It is. I mean, Google themselves are using it in their platform. So it's starting to take off for real. And of course the ARM platform as well. So we're seeing a multi-platform environment for the first time, really, in earnest.

In earnest, awesome. Okay, so that's compute. Let's jump over to storage. Obviously you've got a long history in storage. What are you seeing that's new and innovative on the storage side?

Well, everybody here is focusing on flash, and on NVMe, which is a much, much faster way of connecting storage to the processors. NVMe itself is a little bit limiting, though, because you're connecting via PCIe to just that one processor node. You want a little more flexibility. So what's coming out for the first time at this conference is NVMe-oF: NVMe over Fabrics. It's the fabric version, and it's using the high-speed interconnects.
It's using RDMA, RoCE (RDMA over Converged Ethernet) for example, which Mellanox has announced and in which they have a huge market share. It's using that technology to connect multiple nodes to what is essentially a single view of the storage. So what we're seeing is converged infrastructure that is direct-connect but is seen just like a SAN. The server SAN that we've talked about is actually coming into reality with NVMe and NVMe-oF. And you're seeing Intel in there with their flash drives. You're seeing Western Digital, Seagate, Micron. All of these people are working very hard on getting NVMe-oF out. And that's a game changer. A real, real game changer.

It's interesting, as you keep talking about these layers of innovation, I can't help but think of the layman's version of this, which was the old operating-system-versus-CPU race between Microsoft and Intel back in the PC days. You can never have enough speed and horsepower. There will always be new applications, existing applications, and even more new applications to consume that capacity and speed.

Well, what's very interesting as well is you're seeing more platforms now on these leading-edge boxes. It used to be Linux. Now you're seeing Microsoft Windows coming onto these new boxes, and of course you've got VMware as well. So you've got three platforms which are now coming up to snuff, being invested in as three different ways to market, three different sets of workloads, three sets of applications which run on the different boxes. So again, you're seeing more choice actually coming out of OCP.

Which always comes back, right? It's horses for courses, what's best for the customer.

That's right.

So the final horseman that we didn't talk about yet is networking, right? And finally, it sounds like SDN, the promise of software-defined networking, is really starting to come to fruition.

Absolutely.
Well, first of all, the speed of the network is absolutely crucial. When you're starting with NVMe on these very, very fast flash boxes, you need very, very fast interconnects. So what used to be overkill at 25 gigabits per second is now just the entry point, and you can load a 100-gigabit-per-second link with just three NVMe devices. So really, bandwidth is the key to making this whole thing work. And then on top of the bandwidth you need the protocols, RoCE and the others, which enable you to turn it into a fabric and communicate with very, very low overheads across the network. So that's one part of it: the cabling. The cabling obviously needs to go longer as well, so photonics are coming in for the longer distances. So that's coming in for the large data center.

And then the last part of it is some very interesting work being done on the operating systems that run on the switches. There are multiple operating systems for the switches, and one of the most interesting is Microsoft's SONiC, which they've announced and are talking about as part of that Azure link. And that's the connectivity they're talking about: very high-speed connectivity between all of the nodes. And they've shown some tremendously impressive benchmark results for NVMe-oF over these very high-speed networks.

So this, in my view, really is the data center of the future. We're seeing it emerge from very high-speed networks and very high-speed storage. The clock speed of the processors isn't improving so much, but the number of cores, the number of threads, GPUs, FPGAs, they're speeding things up as well. So we're seeing this convergence of all three horsemen in the compute marketplace, and we're seeing that convergence come together in a really different way than we've had it for the last 25 years.

Yeah, it's really interesting to think of the three horsemen of the public cloud, too: Microsoft, Google, and AWS.
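David's arithmetic on drives versus links can be sanity-checked. A quick sketch, assuming roughly 4 GB/s per drive (about the ceiling of a PCIe Gen 3 x4 link; sustained throughput on real drives varies):

```python
# Back-of-envelope: how many NVMe drives saturate a network link?
# Assumption: ~4 GB/s per drive, roughly the PCIe Gen 3 x4 ceiling.
drive_gbytes_per_s = 4.0
drive_gbits_per_s = drive_gbytes_per_s * 8   # 32 Gbit/s per drive

for link_gbits in (25, 100):
    drives_to_fill = link_gbits / drive_gbits_per_s
    print(f"{link_gbits} GbE link is filled by ~{drives_to_fill:.1f} drive(s)")
```

So a single fast drive can already swamp a 25 GbE link, and roughly three drives fill 100 GbE, which is why 25 Gbit/s now looks like an entry point.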
Again, massive scale. I can't help but think of some of the presentations we saw at re:Invent. When you have such scale behind you, with their own interconnects in their own data centers, the investment that they can make in cutting-edge stuff for these distributed applications worldwide really moves the ball down the field.

Very fast, and as always, it's volume. Volume counts; you made the point earlier. You've got to go after the volume, and all three of these areas now have it: flash has volume, the processors have volume, particularly with ARM and of course with Intel, and now there's volume in these high-speed interconnects, which I must admit is something I wouldn't have predicted 10 years ago.

And the other thing that's coming around the bend that we've talked about before is IoT. And that changes the game. Now there's a whole other layer of complexity thrown into the mix, because now you've got distributed devices, you've got edge computing, you've got edge storage, you've got harsh environmental conditions in remote areas without connectivity. So this is going to change, too, pretty dynamically, how compute, storage, and networking are divided among the devices, the data centers, and the little things out in the field. Whole other shifts, right?

The key to that is low latency, because you're having sensors with lower and lower latency and higher and higher bandwidth, with more and more data coming off them. And when you do the sums, you cannot move all of that data to a cloud. So our prediction is that 95 to 99% of all data will live at the edge where the sensors are, and most of it is going to die there as well. So you're going to take extracts of that data, but the important decisions are going to be made in the first second after that data has arrived. So you're going to get very, very high fidelity; from a camera, for example, you're going to do your imaging in the camera itself.
The processing is going to be right next to where most of the data is, which is that raw image file you have in the camera itself. So that's where IoT is going to have to earn its bread and butter: you're going to have to do that processing at the edge, and you're going to have to connect just what you need to connect up to the cloud. You can't take everything up there.

That's what I love about autonomous vehicles. I mean, it's a fun topic to talk about, and it's easy to identify. We see the Google cars driving all over the neighborhood all the time, but it's really just a great use case, a very easy, tangible example.

Of IoT.

Yeah, of IoT, because if you've got to make a split-second decision to hit the brakes or not when a kid rolls out in front of the car, you're not sending that to the data center. All right, David, well, exciting times to be in the industry. You've been doing this for a while, and it just continues to evolve. It's fascinating.

It is. I mean, we focus on the disruptive areas, and that is exactly where all the action is these days.

All right, well, David, again, thanks for stopping by. You can read a lot of David's work at wikibon.com, and we'll see you at our next show.

Absolutely. Thanks very much.

David Floyer, I'm Jeff Frick. You're watching theCUBE. Thanks for watching. We'll catch you next time.
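The edge pattern David describes, where most data lives and dies next to the sensor and only small extracts travel to the cloud, can be sketched in a few lines. This is a minimal illustration; the reading format and the alert threshold are hypothetical:

```python
# Minimal sketch of edge processing: decide locally, upload only extracts.
# The (timestamp, value) format and the 0.9 threshold are hypothetical.

def process_at_edge(readings, threshold=0.9):
    """Make the decision at the edge; return only the extracts worth uploading."""
    extracts = []
    for t, value in readings:
        if value >= threshold:  # the important decision, made in the first second
            extracts.append({"time": t, "value": value, "event": "alert"})
    return extracts  # everything else stays (and eventually dies) at the edge

raw = list(enumerate([0.1, 0.2, 0.95, 0.3, 0.97]))
uploads = process_at_edge(raw)
print(f"uploading {len(uploads)} of {len(raw)} readings to the cloud")
```

Here 2 of 5 readings go upstream; at real sensor rates the ratio would be far more lopsided, in line with the 95 to 99% of data predicted to stay at the edge.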