This is SiliconANGLE and Wikibon's exclusive coverage of IBM Edge. This is theCUBE, our flagship program. We go out to the events and extract the signal from the noise — two days of wall-to-wall, all-day coverage. We're talking to IBM executives, customers, partners, CIOs. We had Intel on earlier talking Xeon, and we'll have Intel's CIO on later today. This is where we want to get the data and share it with you. This is theCUBE. I'm John Furrier, the founder of SiliconANGLE, joined by my co-host. I'm Dave Vellante of Wikibon.org. John Borkenhagen is here. He's the CTO of System x and BladeCenter within IBM. John, welcome. Thank you. Good to have you here. Good to be here.

Good to see such a strong systems presence at a storage conference. We heard today from Ambuj that storage is going away — it's changing, and we're having some fun with that. Where's it going to go? Where's all the data going to go? What does that mean, storage is going away? Storage is dead? I think storage as we know it is maybe waning. Storage as a container — the most expensive thing on the floor — maybe that's going to change. But anyway, John, take us through your CTO perspective. What are the big megatrends you're following, the tectonic shifts you're trying to capitalize on from a technology perspective?

Well, there is a big shift around storage. Storage performance over the last 10 years has only increased by about 1.2x. That compares to 800x in processor capability. Something has to be done there, and the big disruptive technology is flash memory. Make flash part of the storage and you address that bottleneck, that part of the equation, to give you a more balanced system.

Yeah, it's interesting. We spend a lot of time talking to companies that are deploying flash. People think of flash as a storage technology; I actually think of it as a systems technology. There's an old saying: the best I/O is no I/O. For years we've seen function move out of the server onto the storage array, and in a way you can see the two merging together. I wonder if we could talk about how that change is coming about.

Sure. There's a use for flash both in the server and out on the storage array. On the server, the issue is that you don't get high availability — if the server goes down, whatever data you stored in the flash on that server, you can't access while the server is down. That's one disadvantage. Unless you copy it somewhere. Unless you copy it — and there's technology emerging to help you copy it more efficiently. But right now, the real high availability from a hardware perspective is still out on the storage. There are other uses for flash, though. There's flash caching for the external storage, and the same for NAS. We'll be introducing, in the second quarter, flash cache software that uses flash on the server as a read cache for data on the SAN. We've seen huge performance increases by doing that. And even though it's just a read cache, it doesn't just add performance to reads — it also helps the writes. By getting those reads out of the way, the writes get done faster on the external storage.
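To make that read-cache idea concrete, here's a minimal sketch of a server-side flash read cache sitting in front of a SAN volume. The class names and block interface are illustrative only — this is not IBM's actual flash cache software — but the read-through/write-through pattern is what Borkenhagen describes: reads get absorbed on local flash, while the SAN keeps the authoritative copy so a server failure strands no data.

```python
# Illustrative sketch of a server-side flash read cache in front of SAN storage.
# Names and the block interface are assumptions for this example, not IBM's API.

class SanVolume:
    """Stand-in for a SAN LUN; real I/O would go over Fibre Channel or iSCSI."""
    def __init__(self):
        self.blocks = {}

    def read(self, block_id):
        return self.blocks.get(block_id, b"\x00" * 512)

    def write(self, block_id, data):
        self.blocks[block_id] = data


class FlashReadCache:
    """Read-through cache on server-side flash; the SAN stays authoritative."""
    def __init__(self, san, capacity_blocks):
        self.san = san
        self.capacity = capacity_blocks
        self.cache = {}                              # block_id -> data on local flash

    def read(self, block_id):
        if block_id in self.cache:                   # hit: served locally, no SAN round trip
            return self.cache[block_id]
        data = self.san.read(block_id)               # miss: read through, then populate
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))   # naive eviction for the sketch
        self.cache[block_id] = data
        return data

    def write(self, block_id, data):
        self.san.write(block_id, data)               # write-through: nothing is lost if the server dies
        if block_id in self.cache:
            self.cache[block_id] = data              # keep the cached copy coherent


san = SanVolume()
vol = FlashReadCache(san, capacity_blocks=1024)
vol.write(7, b"hello")
print(vol.read(7))                                   # subsequent reads of block 7 hit local flash
```

Because the array stops servicing most of the reads, its controllers and spindles have more headroom for writes — which is the point made above about a "read-only" cache also speeding up write traffic.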
I think the line is blurring a little bit between servers and storage. If you look at servers and storage in the past, hard drives were just slow. You didn't need much performance to control the storage, so you used really low-end processors. Now that you have SSDs out there, the storage controller becomes the bottleneck — it's not enabling the full performance of the SSD if you put it out on the storage. So what you're seeing is server-class processors out in the storage. Or you could look at it the other way and say you're seeing storage on these server processors. It is blurring: the servers and the traditional storage are merging together, and they're starting to look like one another.

So what does that trend look like from a CTO's perspective? John and I love to watch what the hyperscale guys are doing, but there's a discussion now that they're having a hard time managing all this scale-out, shared-nothing infrastructure. What's your take?

It is a trick managing it, but that's the direction people want to go. It's lower-cost hardware if you can scale out rather than using a larger scale-up system. At the same time — I would have said we'd continue to see a fast shift, an acceleration, in that direction, but things like in-memory databases, SAP HANA, are reversing that trend a little bit. For some of the transactional capabilities on SAP HANA, you want the scale-up, and it provides a lot better performance than the scale-out.
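A rough way to see why transactional in-memory workloads like the HANA case pull back toward scale-up: a transaction that touches rows spread across a scale-out cluster pays network latency for the remote ones, while a single scale-up box keeps everything at local-memory speed. The figures below are order-of-magnitude illustrations, not IBM or SAP measurements.

```python
# Rough latency arithmetic (assumed, order-of-magnitude figures) for why
# transactional in-memory workloads can favor scale-up over scale-out.

LOCAL_DRAM_NS  = 100        # ~100 ns to touch a row in local memory
REMOTE_NODE_NS = 20_000     # ~20 us for a network hop to another node's memory

rows_touched    = 200       # rows read/written by one hypothetical transaction
remote_fraction = 0.5       # on a two-node scale-out split, roughly half the rows are remote

scale_up  = rows_touched * LOCAL_DRAM_NS
scale_out = rows_touched * ((1 - remote_fraction) * LOCAL_DRAM_NS
                            + remote_fraction * REMOTE_NODE_NS)

print(f"scale-up  transaction time: {scale_up / 1e3:.0f} us")    # ~20 us
print(f"scale-out transaction time: {scale_out / 1e3:.0f} us")   # ~2010 us
```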
Yeah, so it all comes down to management costs at the end of the day, right? The industry is doing a great job of driving capex down; management costs are really the gating factor. So what are you guys doing to lower those admin costs?

First of all, it's not just management cost, it's also the complexity of the software when you try to scale out. But on management costs — first, virtualization. Today there's a hypervisor on the server that really simplifies management. It makes it easier to move things, easier to configure things, by virtualizing everything on the server. Now we have software-defined networking, which is bringing that same capability out to the network, so you don't have to go move cables or have the administrative staff to do that — you can do it from a console. There's some capability there today, and there's a lot of maturing still to come, but going forward it's going to be just as important as the hypervisor.

We participate heavily in the open source efforts — OpenFlow, OpenStack, OpenDaylight for the SDN side. There's been a shift: it used to be that IBM kept everything internal to ourselves. Now we've really embraced the open world. We not only participate, we help drive a lot of those standards. And there's still room for innovation on top of that. A lot of the standards are the interfaces, so you can interconnect and interoperate — so IBM can interoperate with other industry offerings out there. On the standards part, where everybody has the same stuff, we want to contribute, and we do. But on top of that we can still add capabilities that we need for our own servers, our own technology — which we couldn't do without participating in those open communities.

Another question on a similar topic: density, right? Data center footprint. We had a guest earlier who gave the example of using flash to shrink the server footprint, increase overall throughput, and lower power — all coming back to a comment a customer made about old code: flash brings out the best in the code. Density, power, and cooling are a big issue. We saw HP's Moonshot — we've covered that, it's a great engineering feat, but they still haven't shipped anything yet. So how do you play in that world? Because Moonshot is the god box of this world — small footprint, high capability.

I would debate that it has high capability. One of the applications Moonshot targets is big data, and we did a study: we looked at how much compute power you can put in a rack with Moonshot. There are a lot of little processors, but those cores aren't as strong as a standard Intel two-socket core. There's actually more performance in 40 standard EP processors than in the 805 Atom processors you can put in a rack with Moonshot — about 20% more compute power. Really? Are those public benchmarks? It's using the CPU compute power — I forget which standard rating it is, but they assign a compute power to each chip. And you guys did this internally at IBM? We did it internally, yes. It was a study to say, okay, how important is this? We've done other benchmarks too. Right now, with the microservers, whether it's ARM or Atom, we're not seeing broad advantages. There are some corners in HPC and web workloads where they make sense, but in the commercial market there's still a performance, cost, power, and density advantage to putting as many high-performance cores on a silicon chip as you can, up to the point where the cost of that chip hits the knee of the curve — where you can't put any more on the wafer without the yields dropping.
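Back-of-envelope math on that rack comparison. The only numbers taken from the conversation are the chip counts (40 two-socket-class Xeon EP parts versus 805 Atoms) and the roughly 20% aggregate advantage; the per-chip ratio below is simply what those figures imply, not a measured benchmark.

```python
# Back-of-envelope version of the rack comparison described above.
# Chip counts and the ~20% aggregate advantage come from the conversation;
# the per-chip ratio is derived from them, not independently measured.

xeon_ep_chips       = 40    # standard two-socket Xeon EP processors per rack
atom_chips          = 805   # Atom processors per Moonshot rack, as cited
aggregate_advantage = 1.2   # Xeon rack delivers ~20% more total compute

# Implied per-chip throughput ratio for those aggregate numbers to line up:
per_chip_ratio = aggregate_advantage * atom_chips / xeon_ep_chips
print(f"each Xeon EP must deliver roughly {per_chip_ratio:.0f}x an Atom's throughput")
# -> roughly 24x, which is why 'many small cores' doesn't automatically win on density
```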
There's going to continue to be consolidation. The processor is going to continue to follow Moore's Law — it hasn't stopped. There's going to be more performance per server, which means you can do more consolidation on that one server; you're going to get more out of a server. And once you use flash technology — we've seen up to an 8x performance improvement by leveraging flash where storage was the bottleneck. That means you can take eight servers and use one to get the same performance. The bandwidth capabilities have also increased tremendously, whether it's PCIe or Ethernet. Bandwidth is no longer a bottleneck, and that's allowing us to do things.

Flash opens the kimono to new scenarios. Yes, it does — new caching layers. It's almost a creative license for engineering right now. It is; it's a disruptive technology, and the question is how you best leverage it. The first step is to use it just like a hard drive, and it's taken a while to see flash really proliferate because of the limitation that storage controllers just can't support flash performance. The storage controllers are being redesigned so they can support it, and the software wasn't there either. We talked about the flash caching: if you have a server-consolidation, virtualized workload — all these general apps, file server apps — how do you get flash performance out of them? With this flash caching, you knock that storage-controller bottleneck down. When we were using flash internally for database, for example, we had to work with LSI to get an SSD controller, because the standard storage controllers were the bottleneck — you couldn't get the IOPS out of the flash. Now the storage controllers are being redesigned: instead of the small Intel processors we talked about, they're beefing up the processing power to really get the IOPS out of there. That has to be fixed first.

And then the next level of innovation — caching layers? SAN caching? Yes — there's SAN caching, there's tiering. There are really two components. The hardware infrastructure, the system, has to be designed to fully leverage flash, and that's being done. In parallel, it's the software: how does the software fully leverage it? Flash caching — SAN flash caching — is one piece. And critical applications will be rewritten to really get the full value out of the flash. You get rid of the path length — you could hide all that code path length when you had a hard drive, because the hard drive access latency was so long. Now it pops up, and you say, why is this taking so long? It should be a lot shorter. It's the code, and they're optimizing that code to make it shorter.

Well, we've certainly got to get you together with David Floyer, because you two could geek out — it's something we like to talk about, the different levels. And obviously the highest level, the top of the tree that's hard to get at, is a complete data center reboot with flash.
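The "hidden path length" point is easy to quantify. With a hypothetical 50-microsecond software path per I/O — an assumed figure for illustration, not an IBM number — the overhead is invisible behind a spinning disk but becomes a large share of every flash I/O, which is why that code gets rewritten.

```python
# Why the software path length suddenly matters (illustrative figures, not IBM data).

SW_PATH_US = 50       # hypothetical per-I/O software overhead in the stack
HDD_US     = 5000     # ~5 ms random access on a spinning disk
FLASH_US   = 100      # ~100 us access on flash

for name, device_us in [("hard drive", HDD_US), ("flash", FLASH_US)]:
    total = device_us + SW_PATH_US
    share = SW_PATH_US / total * 100
    print(f"{name}: total {total} us per I/O, software path is {share:.0f}% of it")
# hard drive: software path is ~1% of the I/O (hidden behind the seek)
# flash:      software path is ~33% of the I/O (now worth shortening)
```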