From San Jose, in the heart of Silicon Valley, extracting the signal from the noise. It's theCUBE, covering the OCP U.S. Summit 2016, brought to you by OCP. Now your hosts, Jeff Frick and Stu Miniman.

Hey, welcome back everybody. We are coming to the end of day one here in downtown San Jose at the Open Compute Project Summit 2016. It's all about hardware, it's all about cloud, it's all about open standards, and really, you know, building the infrastructure that supports this thing called cloud that we talk about all the time. There actually is some hardware back somewhere that we're leveraging. So for this segment, we're really excited to have David Floyer, co-founder and CTO of Wikibon. David, welcome.

Thanks very much.

So you've been walking the floor, going to the sessions. What's your kind of initial impression?

My initial impression? Well, the cloud is alive and well and expanding fast, and the rate of change of the infrastructure, the innovation, has accelerated significantly over the last two years of OCP. So I'm impressed, I'm impressed. It's a geekfest here. I'm in general impressed with the level of innovation, the speed of things that are going on, the way they're solving problems. Overall, very impressive indeed.

Is the innovation just simple Moore's Law kind of stuff, or how much has the open-source component of OCP, applied to hardware, driven the speed of innovation?

If you look at Moore's Law, it's stopped in terms of gigahertz, and they are applying other technologies to fill in that gap. So they're looking at, for example, ways of using much faster interconnects between the computers, much faster networks. They're looking at flash as just a bunch of flash: JBOF, the new term that's come out here. They're coming out with JBOF, they're coming out with, in general, innovative ways of overcoming what used to be solved by just waiting a couple of years for the processors to get faster. There's a limit on the number of cores you can actually throw at a problem, so they're using GPUs now to speed things up. So the innovation is there because the traditional way of solving those problems is no longer there.

So David, when I first joined Wikibon, you talked to me a lot about what some of these really big companies were doing, I mean how Apple was using flash, and we talked about how these hyperscale companies have these teams of really smart people, the PhDs, to build stuff. Now OCP is helping to drive this to kind of broader adoption. I know you've been following this for a number of years, so how do you see the progress OCP is making? How does it impact what's going on in the marketplace?

If you look at the original specification of OCP, Facebook defined a number of machine types, defined the workloads, and then said, okay, this is OCP. That has significantly changed now. You're seeing individual technologies evolving for very specialized problems that large data centers are having, and so it's being applied on a much broader scale than it was before. So OCP is now no longer just Facebook; it's all of the people inside it. It was interesting today that the majority of the speakers were from Microsoft, from Intel. Going around the floor, there's a lot of different players who are playing in it, whereas in the original days it was just Quanta and Facebook and a few other people on the side. So the depth of the technologies, what they're taking on, is very different from what it was.

Is it essential?

Absolutely.

And you'd say that's a good thing, right?
Less Facebook means more of everybody else. It means more of everybody else. And what's happening fundamentally, as you said at the beginning, to get SQL working at Facebook was a science project. You needed PhDs. They were squeezing every last drop. Now, putting in these big systems, they are normalizing the use of flash, normalizing the use of GPUs, and it's enabling the software itself to develop at a much faster rate.

So one of the things we always look at, David, is that it's not only that; there's a big ecosystem building around this. As you said, Microsoft's big here. Intel, of course, is a major platinum sponsor of everything. All these networking and storage companies, but certainly some of the practitioners too. So Goldman Sachs is quoted in a lot of these environments, saying, you know, broad adoption. Bloomberg's got this fancy rack from their environment here. So, you know, the financials are kind of something that you expect. The telcos, there was that panel in the keynote this morning, they're adopting. So, you know, we're starting to see some of those early big customers, the ones that, as we say, might not be one of the big public cloud providers, but they look like a service provider to their own enterprise. So, you know, what's your take on the customer side?

Well, the most interesting to me is the whole area of networking. I found it very interesting to see up on the same stage AT&T, Verizon, Deutsche Telekom, and somebody from Sweden as part of a telecom panel, who jointly are putting together what they want to have: the white-box solution. Going away from the proprietary, huge, Cisco-developed, Broadcom-developed proprietary software and proprietary hardware all bundled in together, towards splitting out the planes and providing software capabilities through virtual networks, which is something you would have expected to be here by now in volume, being able to personalize networks. The amount of effort that's going in there to move what is traditional mainframe-type networking into the 21st century of white boxes, much faster development, and much more ability to provide new services and new ways of doing things, I think that's really an indicator of what the future is going to be. Both within the data center, where 100 gigabit is just normal now, looking ahead to even higher speeds, and in the ability of the telecom companies to provide 5G and much greater mobile capability. I think it's exciting. It really is a major change.

So David, the interplay between the network and what's going on in storage is pretty interesting and right up your alley. So Facebook made a statement, and I'm hoping you can help unpack it for our audience, because they said that with 100 Gigabit Ethernet, data locality doesn't matter, but there's a clarifier. So I look at that: flash with NVMe is really local, versus... talk a little bit about that interplay. You've talked a lot about low latency and data locality. There was a little sentence he quietly added: within the data center.

A little asterisk on it.

He said "within the data," suddenly caught himself, and said "within the data center." So yeah, it's really exciting within the data center. It used to be that the disk drive was so slow that everything had to wait for it. That was 25 milliseconds. You can do so much in that time. Now it's down to 350 microseconds, and it's going down even more.
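A quick back-of-the-envelope sketch of what those two latencies mean for a processor stuck waiting on I/O; the 25 ms and 350 µs figures are the ones quoted above, while the 2.5 GHz clock is an assumed, illustrative value:

```python
# Cycles a single core could execute while waiting on one storage
# access. The latencies are the figures from the conversation;
# the 2.5 GHz clock is an assumption for illustration.

CLOCK_HZ = 2.5e9  # assumed core clock

latencies_s = {
    "disk seek, ~25 ms": 25e-3,
    "flash read, ~350 us": 350e-6,
}

for name, seconds in latencies_s.items():
    cycles = seconds * CLOCK_HZ
    print(f"{name}: ~{cycles:,.0f} cycles spent waiting per I/O")

# disk seek, ~25 ms: ~62,500,000 cycles spent waiting per I/O
# flash read, ~350 us: ~875,000 cycles spent waiting per I/O
```

Roughly two orders of magnitude less time wasted per access, which is the gap the rest of the stack is now racing to exploit.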
The 3D flash going into these devices is going to bring the cost down dramatically over the next couple of years. So you've got oodles of IO, and you've got storage at very nearly the same price: at the moment it's about two to one per gigabyte for the flash modules, and it's coming down to be very similar to disk. That means the whole speed of access to data is just going to go through the roof. At the same time, the access time between processors is going down with 100 gigabit connections. So you can do RDMA over whatever you want: InfiniBand, InfiniBand over Ethernet, or Ethernet over InfiniBand; there are so many different versions going on at the moment. PCIe is coming through really strong, with PCIe connections and PCIe switching within that. All of these technologies are speeding up the whole of the data center, which means that parallel computers can be so much more able and capable across a big data center. It's exciting.

So David, I remember a few years ago you started writing about how HPC-style computing is bleeding into the enterprise.

Absolutely. We've looked at how cloud computing gives us a practically limitless number of cores and capacity of storage.

And number of GPUs, yeah.

And GPUs. Now, as networking latency shrinks down and bandwidth grows huge, I have all the bandwidth now. What's that going to mean for the application?

Well, there are two things about communication networks. One is bandwidth; the other is end-to-end latency. And the speed of light, unfortunately, has not come down.

We haven't fixed that one yet.

We haven't fixed that one.

Got a request in, but it hasn't been fixed yet. I'm sure somebody's going to solve it. We had Bell Labs up there, IBM Research, Google. Somebody's going to break that barrier. You think so, right?

No, I don't think so.

Until they do.

Too many semesters of physics under my belt for me to think that they're going to break it. So you still need data locality. In fact, that flash and this 100 Gigabit Ethernet actually mean you want the data center to be smaller and smaller and smaller. So if you think, for example, about edge devices, which are capturing all this data, the cost of moving all of that data over the network is so high that you'd never want to do it. So you're actually going to have edge networks with a lot of processing, a lot of capabilities, a lot of big data inside the edges themselves, and then you're going to have your central networks. So it's not going to be the few data centers, the 100 data centers around the world. There are going to be an awful lot of edge data centers doing an awful lot of work, unmanned, out everywhere, at warehouses and on wind farms and everywhere else around the world. They will be doing an awful lot of work, and there will be a lot of communication between those edges and the clouds themselves. So that requirement to minimize latency is still there.
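The physics behind that point is easy to put numbers on; a rough sketch, where 200,000 km/s is the usual approximation for light in fiber and the distances are illustrative assumptions:

```python
# Why latency keeps data local: propagation delay in fiber grows
# linearly with distance and cannot be engineered away.
# ~200,000 km/s is the usual approximation for light in glass.

C_FIBER_KM_PER_S = 200_000

def round_trip_ms(km: float) -> float:
    """Round-trip propagation delay over a fiber path, in milliseconds."""
    return 2 * km / C_FIBER_KM_PER_S * 1000

for label, km in [
    ("across a data center", 0.5),
    ("across a metro region", 100),
    ("across the US", 4000),
]:
    print(f"{label} ({km} km): {round_trip_ms(km):.3f} ms round trip")

# across a data center (0.5 km): 0.005 ms, negligible next to a 350 us flash read
# across a metro region (100 km): 1.000 ms
# across the US (4000 km): 40.000 ms, worse than the old disk seek
```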
Yeah, so David, that's actually a really good point you bring up there. Intel made a statement that like 70 to 80% of all compute, I think it was compute they said, is going to live in these really large data centers, and that might be thousands of nodes. And there was some conversation through the social channels saying, well, what about the edge? So what's your take? What's the breakdown? Because the way I think about it, the core, the cloud, the number of nodes it has is going to be massive compared to the edge. You know, there are lots of devices, but how many of them are really devices, versus, you know, can I take a thousand devices and aggregate that down to a single node?

Well, if you think about the Internet of Things, it's really built around sensors, isn't it? And sensors are MEMS, you know, the MEMS devices which are in your iPhone and everything else like that. Those are going nanoscale now with the introduction of new types of sensors. Those sensors are on things of one sort or another. It's very difficult, you know, for two engines in a car to share a sensor between them; you have to have one each. So the sensors are going to be spread out. The sensors, interestingly enough, are all developed for the ARM processor, so ARM is going to be a significant portion of that bringing together of stuff on a distributed basis. You know, inside the home, inside the car, et cetera, you're going to have an awful lot of sensors, thousands and thousands of sensors. Most of that sensor data is irrelevant to the rest of the world. It's only relevant to that, your house or that car or whatever it is.

Whatever that system is.

Whatever that system is, the warehouse. Most of it's going to be very local. Most of it's going to be thrown away. So you're going to have good processing there, paring it down, cutting it back, and only sending up the signals that are really necessary to the center. So to answer your question about the ratio between them, you know, if I had to guess, I would say it will eventually end up around 50-50. You will want to put processing out there, quite strong processing, because you want to send out a query, get it to do some work, and have it send you back the result. That's a much cheaper way of doing stuff than moving all of the data from point A to point B.
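A minimal sketch of that edge pattern, processing locally and forwarding only the signals that matter; the sensor values and the outlier threshold here are hypothetical:

```python
# Edge filtering sketch: process readings locally and forward only
# statistical outliers to the central cloud. Values and the sigma
# threshold are hypothetical.

from statistics import mean, stdev

def outliers(readings, sigma=2.0):
    """Return only the readings more than `sigma` standard deviations from the mean."""
    mu, sd = mean(readings), stdev(readings)
    return [r for r in readings if abs(r - mu) > sigma * sd]

# Thousands of local readings in practice; only the anomalies leave the edge.
readings = [20.1, 20.3, 19.9, 20.2, 87.4, 20.0, 20.1]
to_cloud = outliers(readings)
print(f"forwarding {len(to_cloud)} of {len(readings)} readings: {to_cloud}")
# forwarding 1 of 7 readings: [87.4]
```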
But David, I still feel like we're in that same spot we always used to be in with the OS and the Intel chip, right? Faster chip, bigger, fatter OS. Faster chip, fatter OS. They seem to just suck up each other's capacity at the same rate. So yes, we're going to have all these continued advances in the compute, the network, and the storage. But on the other side of the equation, like you just said, the amount of data being thrown off that we care about, and you've said a number of times you don't want to care about all of it, but the amount we do want to care about is going up like crazy. So we're still in this kind of arms race between capacity and stuff to fill it. How's that going to shake out? Will it ever shake out? Or is it that as soon as you open up capacity, someone's going to find something they couldn't do before: now I can do it, I'm going to fill that thing up.

Well, you know, there are at least 10 or 15 years to go before we ever reach any limits on capacity or the number of bits that we can store, et cetera, even just with flash technologies, and maybe some technologies beyond that. There's still an awful long way we can take these technologies as they are and get more and more out of them. I think the exciting thing is that we have so much compute and so much information that it's going to advance the programming capabilities, the artificial intelligence, the systems of intelligence. Those are, to me, the really exciting next breakthroughs that are happening. You see a glimpse of it with things like Watson, but there's so much going on.

Facebook themselves are using these GPUs just to improve recognition of faces. They said they've improved that by 60%. If they keep that up for the next 10 years, they'll be very, very good at recognizing faces. So there's all of this artificial intelligence going on, the ability to drive a car, you know, the fact that we can go from point A to point B with much less aggravation and much less risk. It's exciting to me, but I think the emphasis will move towards algorithms, towards automating stuff, towards reducing the number of people required to do an awful lot of work in the businesses themselves. So I think that's where the most exciting use of these technologies will be, and the biggest challenges ahead.

Yeah. Yeah, I mean, David, you know, I remember when OCP first started, one of the things that really jumped out at a lot of us was, if you look at the enterprise and ask how many machines a typical administrator can manage, you know, if you did well, you managed a few hundred. And in the hyperscale space, you know, Facebook, when they started, it was like 10,000. So we're talking two orders of magnitude, not 30% cheaper. It's that operational change that's there. So as a larger percentage of the traditional infrastructure stack, you know, storage, network, compute... sure, there's going to be edge, but the amount that needs to be managed is going to be consolidated down to a smaller number of players, service providers, cloud providers, and the like. And that operational model, I think that's something we've highlighted. That operational shift is going to be significant, and the vendors and the platforms need to be able to support it.

There's two trillion dollars being spent on enterprise operations. We see that going down to one trillion over the next 10 years, and it has the ability to go down much, much further than that.

While the capacity expands.

Yeah, while the capacity is going to grow astronomically. So the vendors have to take on responsibility for doing all of that work automatically. They should; the amount of non-differentiated work that goes on in data centers is far too high. So, public clouds, private clouds, run by people who know how to run these things, that's where the emphasis is, and then set the programmers free to really come up with the algorithms of the future. I think it's very exciting.

Well, good times. Well, David, thanks for sharing your perspective. Always good to go deep with Dave, as we like to say at the office. We have a lot of Daves: David Floyer, David Vellante, got a lot of smart guys. So, thanks for sharing your insight.

You're very welcome. Appreciate it.

So, David Floyer, Stu Miniman from Wikibon, I'm Jeff Frick from theCUBE. We are live in San Jose, California at the Open Compute Project Summit. This is the end of day one. We'll be back tomorrow for another day of coverage, so be sure to tune in to SiliconANGLE.tv and look for the Open Compute Project banner. Also, again, I want to thank our sponsors, Pica8, Mellanox, and Micron. Without our sponsors, we couldn't go out to all these shows; we did about 80 shows in 2015. So, thank you very much for enabling us to go out, find the smartest people we can find, ask them the questions you want answered, separate the signal from the noise, and share it back with you. So, we'll be back tomorrow. Thanks for watching. Jeff Frick signing out from OCP 2016.