Okay, we're back here live in Silicon Valley in San Jose for the Open Compute Summit 5, for all the action on open source hardware development around the future of the data center, future of the cloud, future of what's under the hood, and that's what we've been covering all day, like a blanket. This is theCUBE, our flagship program. We go out to the events, extract the signal from the noise. I'm John Furrier, the founder of SiliconANGLE, joined by my co-host, Dave Vellante, co-founder of Wikibon.org, and we love these emerging markets where the trends are created by the people who actually built the products. And that's really what it's all about here. The story here at the OCP Open Compute Summit, now a few years into the movement, is that you now have adoption and validation by the community, which is a series of developers, individuals, and companies. And our next guest is from one of those new companies: LSI has joined Open Compute, and Robert Ober, who's a Fellow there, is joining us. Welcome to theCUBE. Hi, thanks. Let's talk about the new entrants into the community, formally, with contributions. We heard from Colin earlier about the model, which is that it's not a pay-to-play setup where you put the money in; it's really contribution-driven here at OCP. You guys are new, LSI is new, with new contributions. Talk about what you guys did, what the announcements are, and we'll jump into what it means. Yeah, sure. So first, thanks, but I wanted to say about contributions that we have actually been in most of the Open Compute platforms from the start; it's just that we didn't contribute those designs. So we've been engaged and involved in Open Compute pretty much from the start, but this is the first time where we've kind of stepped up and said, hey, we're going to contribute something to the whole community. What is that something? So it's actually two somethings.
So the first one, the first thing, is a design of one of our PCIe flash cards. It's called the Nytro 6209, and there's a lot of great stuff in it. I could go on about how wonderful it is, but the key thing is that, as far as I know, it's the very first product that's been designed specifically for the Open Compute servers. They have very particular thermal, cooling, and airflow issues, I'll say politely, and this was designed specifically around those. So it is a product that you can plug directly into an Open Compute server, and it will work very, very well. The truth is, we know from some of our other designs that if you plug some flash cards into an Open Compute server, they don't work very well; they tend to fry. So that's the big thing about this: it's a contribution of something designed for the servers. The second one: I'm sure people know about Open Vault, which is the enclosure for JBOD, cold storage, however you want to call it, and which is a great product. It uses some of our chips and some of our design, but we realized it could actually be a lot better, and there are ways to improve the performance and the capability of it. So we're contributing the changes in design, the upgrades to the design, to make it an improved product, far higher performance with the same drives. So essentially, on the former, the Nytro card that's specifically designed for OCP servers, you're addressing the age-old problem of heat density by essentially customizing, and I hesitate to use that word, your product for an OCP environment. Is that a fair way to say it? That's exactly right. Just as with an OEM today, for example, if we're asked to do a blade device for an IBM server, it will be tailored to the environment and the form factor of that server. In this case, it is a flash acceleration card designed specifically for the Open Compute server.
So obviously OCP servers can be deployed in a wide variety of use cases and data center environments: hot, cold, closets, et cetera. John and I and others from the team were at re:Invent, Amazon re:Invent, in November, and we were talking to James Hamilton, and he shared with us that Amazon knows the data center its gear is going into. They know the environmentals. As a result, the claim is, and we believe him, that they can get even denser servers than you might be able to get from, say, an ODM. So I wonder, from a system architect standpoint, if you could talk about that, in terms of the broader market having to deal with this wide spectrum of environmentals, whereas Amazon sort of says, okay, we've got this tighter set of tolerances that we can work toward. How much of an advantage is that? Is it significant from an architectural design standpoint? Would you love to have that type of situation to deal with? Actually, yeah, yeah. So I'm going to jump on your question and say, well, you know, if I roll out many years, the evolutionary direction we're going in the data center, you can call it many things, you can call it pooling, you can call it disaggregation, but at a large scale, at a rack or multiple-rack level, or a hyperscale data center, you want to start pulling apart the parts. And I can argue this for operational reasons, I can argue it for architectural reasons, but in this instance, I'm just going to talk about thermals. If I think about the individual components, right, processors will tend to run very, very hot, and so you need to manage those appropriately. DRAM doesn't want to be quite as hot, right? It's going to have lower retention; it's going to need more frequent refresh the hotter it is. So you want to manage its temperature a little bit differently. Flash loves to run reasonably hot, but if you go over the edge, it just falls apart.
It's not going to work, right? So you need to manage that. You lose data. And then disk drives as well: they're mechanical devices, they have certain limits on thermals, or otherwise the bearings fail. So today, when you try to put all those things in one box and manage them with one temperature profile, everything's a compromise. Whereas if you pull them apart, you can manage each thing correctly, and in the end, you can get much denser packaging. So, okay, maybe another way of asking that question: Amazon has that luxury. Do you see the consumers of OCP solutions, over time, sort of replicating that luxury by necessity? Or do you expect that there's still going to be this ridiculous closet-to-hyperscale-class data center spectrum? Okay, so my personal belief is that as soon as you start talking about a rack or more, you're going to naturally be pushing into the kind of hyperscale architectures that we're seeing with Open Compute, right? And it will naturally evolve in that direction. In order to stay competitive, you just have no choice. Yeah, yeah, the economics are just too compelling. You have to go that direction. You know, when I start talking about something like a print server or a spooler that's tucked in a closet, that's something different. I think those will be on their own trajectory, and frankly, they aren't that interesting to us as a company. What does your data say in terms of the economics? I mean, you hear some different figures thrown around, but can you share with us any metrics from an economic perspective? You're saying the economics are so much more attractive. How much more attractive?
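The compromise Ober is describing can be sketched in miniature. A toy model, with purely illustrative temperature numbers (not vendor specs): one shared enclosure must cool everything to what the most sensitive part tolerates, while disaggregated pools let each resource type run at its own ideal.

```python
# Hypothetical per-component thermal preferences, in degrees C.
# "ideal" is where the part runs happily; "max" is its hard ceiling.
COMPONENTS = {
    "cpu":   {"ideal": 80, "max": 95},   # runs very hot by design
    "dram":  {"ideal": 55, "max": 85},   # hotter means more frequent refresh
    "flash": {"ideal": 60, "max": 70},   # fine warm, fails past the edge
    "disk":  {"ideal": 40, "max": 60},   # mechanical/bearing limits
}

def one_box_setpoint(components):
    """One box, one airflow profile: the whole enclosure is held to the
    ceiling of the most temperature-sensitive component."""
    return min(c["max"] for c in components.values())

def pooled_setpoints(components):
    """Disaggregated pools: each resource type gets its own cooling target."""
    return {name: c["ideal"] for name, c in components.items()}

shared = one_box_setpoint(COMPONENTS)      # dragged down to the disk's limit
per_pool = pooled_setpoints(COMPONENTS)    # CPU pool can run far hotter
```

With these made-up numbers, the shared setpoint lands at the disk's 60-degree ceiling while a dedicated CPU pool could target 80, which is the "much denser packaging" argument in one line.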
I don't think we've done a specific analysis ourselves, but I will tell you anecdotally, at the extreme, I've had one large enterprise tell me they've done the calculations and they believe they could save 70% of their IT costs. 70? 70, which is unbelievable, and it's not even credible when you first hear that number. But the more you think about it, it starts to ripple through, and you can start to understand how 70% may actually be plausible. Certainly 30 to 50% savings is very plausible. And a big part of that is what you were talking about before, the disaggregation. Let's use that parlance: being able to pull the pieces apart and optimize each one. How does that square with what we heard from Frank this morning about converged infrastructure, disparaging it, actually? Conceptually, a lot of what OCP is doing is building sort of its own version of converged infrastructure, is it not? So help us square that circle, where you're pulling the pieces apart but it's converging infrastructure, bringing it all together. That's one of the reasons why disaggregation as a term makes me laugh, because if you're a server manufacturer, it's disaggregation: you're ripping my server apart, right? But for me, my background is architecture. I just view it as resource pooling. I want to put my like resources with my like resources, and logically I'll decide how I allocate those resources, right? So I think of it more as pooling. But, so where was I going with this? Well, I was asking you about squaring the circle between disaggregation and converged infrastructure. Yeah, and that makes sense, correct? Yeah, and if I look at converged infrastructure or bladed systems, there are a lot of practical reasons why they make a ton of sense. And Frank was even saying there are some really good things about them.
The problem is they tend to be architected as much to lock in customers as to solve problems, so that there's no way you can buy any component from anybody else. Vendors locking in customers? Yeah, who would have guessed, right? On purpose, right? Yeah. So, you mentioned pooling a couple of times now, and when I think pooling, I think virtualization. And there's been some discussion about beyond virtualization. I want to get your systems architect perspective. Do you see that virtualization layer, which today is up here, coming down into those individual components? I mean, I guess you're seeing it with SDN. Yeah. You saw it with compute; you're sort of seeing it with storage, certainly with software-defined storage. What's your expectation for that migration? Well, no big surprise being at LSI, I've spent most of my thought cycles on storage, right? But I think where we're going is, it's interesting you call it virtualization. I was having dinner, okay, I'm going to name-drop, I was having dinner a few weeks back with a friend of mine. He's the CTO of Baidu in China. And we were talking about some of the concepts we've got that we're working on, and he says, wow, to me, I would call that hardware virtualization. And I think that's true. I think that's the direction we're going. So what you're seeing is the hardware being deconstituted and pooled. In my case, it's especially storage: it's being pooled and made accessible from multiple servers. All the storage is accessible directly, as if it's directly attached, and I can allocate the resources, whether it's bandwidth, capacity, whatever, to individual servers. So, we have two minutes left. I wanted to get a question in. Good conversation, I didn't want to interrupt Dave. Dave's on a roll. But it was good to follow that through. It was an awesome conversation. Really, really important to talk about those two areas. But you brought up Amazon.
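The pooling idea Ober describes, like resources grouped in one place and logically carved out to servers as if directly attached, can be sketched as a toy allocator. Names and numbers here are hypothetical, not any particular product's API:

```python
class StoragePool:
    """Toy disaggregated storage pool: capacity lives in one shared pool
    and is logically allocated to servers on demand."""

    def __init__(self, capacity_tb):
        self.capacity_tb = capacity_tb
        self.allocations = {}  # server name -> TB granted

    def free_tb(self):
        """Capacity still unclaimed by any server."""
        return self.capacity_tb - sum(self.allocations.values())

    def allocate(self, server, tb):
        """Grant a slice of the pool to a server, as if it were
        directly attached storage."""
        if tb > self.free_tb():
            raise ValueError(f"pool exhausted: only {self.free_tb()} TB free")
        self.allocations[server] = self.allocations.get(server, 0) + tb
        return tb

# Usage: one 100 TB pool shared by any server in the rack.
pool = StoragePool(capacity_tb=100)
pool.allocate("web-01", 20)
pool.allocate("db-01", 50)
# 30 TB remains for anyone to claim; nothing is stranded inside one box.
```

The same pattern applies to bandwidth or any other resource the pool tracks; the point is that allocation becomes a logical decision rather than a physical one.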
I want to ask you specifically about Amazon, because when we were at Amazon re:Invent in Vegas, they didn't really open the kimono and tell us what's going on with their devices. We asked them how much they're buying. They kind of hand-waved: oh yeah, OCP, we follow it, we love what they're doing, but it's just not for us. A golf clap, as you say. It was bigger than a golf clap, you know? Good try, keep on going, boys. Meanwhile, we heard rumors that they have some really badass form factor devices that they're building. So what are your thoughts on what they're doing? Why aren't they adopting OCP? Are they just on their own thread? What are they doing technically? Because they've got pools, they've got power in their cloud, and people are looking at Amazon saying, hey, that's a black swan in my opinion, but Open Compute and OpenStack is a nice solution, and it could be Amazon-like. So we're seeing a nice thread there. But what does Amazon do? What does your technical architecture sense tell you? Well, two things I'll say up front. One, I'm not allowed to talk too much about some of these things. And two, I don't actually have great detail on specifically what Amazon is doing. I will tell you that they have an incredibly diverse set of platforms. I mean, it's a very complicated infrastructure, right? It's not homogeneous like we like to think about things. So even if they were to use OCP, it would be on one platform and not a whole bunch of others, presumably. Well, which is essentially what James Hamilton hinted at: OCP doesn't have enough configurations to address their unique needs, so they have to customize to the nth degree. He wasn't ruling out OCP. What you're saying is that you can use it in a pocket; it's not going to have a real big impact right away.
Yeah, and I think the reality is, if you go one step further, any of the, pick your number, five or six largest data centers in the world, they've already solved a lot of these problems. They already have answers themselves. So while they might respect and someday use OCP, there's no burning need to, right? Because they've already resolved these problems pretty much on their own. For example, Microsoft's contribution is phenomenal, right? And it's a peer to OCP. They solved their problem. Yeah, they solved it themselves. Kudos to them for giving it to the community, right? But you look at Google or Amazon or Baidu: they've already solved it on their own, right? So those top five or six guys are interested, they respect OCP, but they have no burning need to use it. I think where OCP and OpenStack really come together is that they take the hyperscale deployment model, the hyperscale management model, and they make it available to enterprises and smaller deployments, right? They bring the concepts and capability to the masses. Rob, I know we're tight on time, but I wanted to, go ahead. Well, we were saying earlier, you know, I was just at the Mac 30th anniversary party Saturday night. My name's up on one of those posters. It was so awesome. It really was an amazing event. I personally had a lot of joy being there. That's my generation, a little older than me, but, you know, I was coding in the early 80s, and I could totally relate to the stories. The bit about fighting over 60 bytes between Finder and MacPaint had me rolling in my seat, just laughing. It was fun. But those are hardware geeks, right? The Homebrew Computer Club, and they talked about it, and Bill Atkinson said, we made the Mac for ourselves. We would have worked for free. Those are the sound bites, right?
You know, Woz was simplifying it, but I was saying earlier, if the Homebrew Club were around today, they'd be here, right? Yes, absolutely, absolutely. This is it. This is where they're making it for themselves. Yeah. And that's what you're saying: Amazon is making it for themselves, made it for themselves. So if that's true, we'll call it the Corpbrew Club, Corporation-brew, whatever, not Homebrew. If this is the Homebrew for the future, modern data center, what are the cool things for these guys who are deep in it, tinkering around? We're seeing some stuff up on stage, but you're an architect, you're looking at new ways to put this together. What is the future of the data center that OCP is working on? What are the new things that you think are intoxicating for the engineers and the software? I mean, talk about a target-rich environment, right? Personally, I've been working on new and novel storage architectures, ways of kind of harmonizing the architecture so you can have flash and disk and boot, but there are a lot of things beyond that, right? There are whole new storage models coming. I think key value stores: I think we're going to see a migration away from block and file toward key value and more object-like systems. You know, it'll take forever for the long tail to decay, but there's new exciting stuff. Photonics and optics, big deal, when you start unbottling the interconnect latency and bandwidth. I think new memory types: we've got phase change coming, and then there's spin torque around the corner. Once you integrate those as extended memory pools and you put terabytes on a server, all of a sudden there are whole new styles of database structures. You can do all sorts of graph theory. That opens up new applications. Graph databases, flash memory, unbelievable. We are at a point in time that's just, you know, explosive. It's kind of fun. It's a target-rich environment.
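The migration Ober predicts, from block and file toward key-value and object-like systems, is easy to see in miniature. A block device addresses fixed-size sectors by number; a key-value store addresses variable-size values by name, and the device rather than a filesystem owns placement. A toy contrast (not any particular product's interface):

```python
class BlockDevice:
    """Block interface: fixed-size sectors addressed by logical block
    address (LBA). Structure above the sectors is the filesystem's job."""
    SECTOR = 512

    def __init__(self, sectors):
        self.data = bytearray(sectors * self.SECTOR)

    def write(self, lba, buf):
        assert len(buf) == self.SECTOR, "blocks are fixed-size"
        off = lba * self.SECTOR
        self.data[off:off + self.SECTOR] = buf

    def read(self, lba):
        off = lba * self.SECTOR
        return bytes(self.data[off:off + self.SECTOR])

class KeyValueStore:
    """Key-value interface: variable-size values addressed by name;
    no sector math, no filesystem layer in between."""
    def __init__(self):
        self.data = {}

    def put(self, key, value):
        self.data[key] = bytes(value)

    def get(self, key):
        return self.data.get(key)

# Usage: the application talks in its own terms, not in sectors.
kv = KeyValueStore()
kv.put("user:42:profile", b'{"name": "ada"}')
```

The dictionary here stands in for whatever the device does internally; the point is the contract at the interface, which is what the object-like systems change.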
I totally see it. I feel it. The magic is exploding. You can feel it happening here. It's the beginning. This is where all the action is. And everyone's here. It's not just tire kickers. No, no. Zuckerberg showed up because I'm sure he's curious: what the hell's going on? We're saving billions of dollars, and what do we do with this? Who knows? I mean, there are rumors that they're going to have a cloud to compete with Amazon. Who knows? Well, I shouldn't be saying that on camera, but it sounds like we have a scoop there. Robert, great to get your insight, certainly on the systems architecture. It's a game changer. A whole other evolution is happening. Revolutions happening. It's happening right now, and it's physical. It's about the data center. It's about what the hardware is all about: enabling the software. I really appreciate it. This is theCUBE. We're at the Open Compute Summit, live, all-day coverage here. This is SiliconANGLE. We'll be right back.