Live from the San Jose McEnery Convention Center, it's theCUBE at Open Compute Project US Summit 2015.

Okay, welcome back everyone. You're watching theCUBE here live in Silicon Valley at the Open Compute Project Summit, OCP Summit 2015. Join the conversation at crowdchat.net/ocpsummit2015. Leave a comment, watch the video, we're embedding the stream there. Also live at siliconangle.tv. You're watching theCUBE. I'm John Furrier with Jeff Frick.

Our next guest is from Facebook, the inventors of DevOps in my opinion, the pioneers who built their own gear because they were smart and didn't have a lot of cash, then they got a lot of cash and kept on building. Here in theCUBE: Matt Corddry, director of hardware engineering. Welcome to theCUBE.

Great, thank you.

So I always joke about Facebook, because Facebook started out young and nimble. Move fast, break stuff, Zuckerberg's thesis, which is basically code words for: you're a startup, don't spend a lot of cash, hire smart people, build your own stuff, save money.

You got it.

Why buy all this HP and Oracle gear and waste all this money on general-purpose hardware? You guys built your own stuff and then became massive, a massive application, so then you actually had to scale it up significantly. That's the story of Facebook. Now you guys have a new motto. At F8 last year it was move fast and have no downtime, which is basically don't break stuff, right? That's the new motto. So share with us what's going on at Facebook right now. Obviously enormous success. The campus is getting bigger. You're going to be a town someday in Silicon Valley. Huge global success, but the pressure on the hardware and the infrastructure is significant. There are serious engineering chops going on. You grew up from open source, and now you've got huge, large-scale systems and software with that open source DNA. What's going on with Facebook right now, and why here? What's going on at this show?

Sure, sure.
We have a lot of really exciting stuff going on within hardware engineering, as well as more broadly in the technology space at Facebook. A lot of that is how we not only improve Facebook and build this awesome platform for our own developers, but also look for ways to extend that technology out into the community and into the industry. Because that actually helps us move faster. Getting open source software and hardware out into the world helps connect more folks, and it helps us innovate faster because we get more smart minds on the problem. All the great people that are here at the summit today, for example.

So talk about form factor. You mentioned connecting the world, which is one of Zuck's big things, outside of North America, in emerging countries where wireless is developing. Internet of Things is a big thing right now, so form factor is not just a server, it's a lot of density stuff. How does that relate to what you guys are doing? Everyone wants smaller, faster, cheaper, right? That's been Moore's law. But what's the state of the art today for smaller, faster, cheaper, and how does that relate to smaller devices like transceivers and wireless to Internet of Things?

Sure, it's a great question. My team mostly focuses on the gear in the data center. And it's funny, you'd think smaller, smaller, smaller, make it denser, pack it tighter. That's just not the right approach for us at all. We build, I like to joke, big, ugly tin boxes. We build the least sexy gear on the planet. It's a lot of fun. We actually take pride in the fact that we build these big, ugly tin boxes, but the great part is they're super efficient. We don't build them for looks. We don't make our gear look good. There's no plastic bezel.
There's no fancy blinking lights, because at the end of the day, we build it to work, and we build it to work at massive scale. When you really focus in like crazy on making it work big, go big or go home, it's a lot of fun, because you get to solve that big, hard problem. And you don't get distracted by making it look good on a slide, or making it sexy, or making it stainless steel and glass. That's somebody else's problem.

So form factor is not as important in terms of bells and whistles, look and feel. It's functionality, serviceability, power and cooling, those kinds of things.

Yeah, exactly. You hit it right on the head. When you look at the energy required to cool a large fleet of equipment like what we have, if you don't do it efficiently, you're just throwing money out the window. It's really a travesty. You get some of these big fleets with inefficient server designs and inefficient cooling designs, and the amount of energy is really significant, enough to power a small city. So even small improvements in cooling make a big dent. When you look at some of the work we're doing here, the amount of power required to cool one of our designs is three to six watts. A traditional OEM design can be 80 watts of cooling power. Just that simple improvement makes a tremendous impact to Facebook, to our operation at scale, and I think, as we see here today, to the whole industry.

So the thing I want to ask about this event, what's going on here. Honestly, the big announcement from you guys was the contribution with the system on a chip. That speaks to the whole "software is eating the world" idea, Marc Andreessen's famous, seminal article in the Wall Street Journal. I think it was a few years ago, it seems like 10 years ago, but okay, software is really important.
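As an aside on the cooling figures quoted above, roughly 3 to 6 watts per Open Compute design versus around 80 watts for a traditional OEM design, here is a back-of-the-envelope sketch of what that difference means across a fleet. The fleet size and electricity price below are illustrative assumptions, not figures from the interview.

```python
# Back-of-the-envelope fleet-level cooling cost, using the per-server
# cooling power figures quoted in the interview. Fleet size and
# electricity price are illustrative assumptions only.

def annual_cooling_cost(watts_per_server, servers, dollars_per_kwh=0.10):
    """Annual cooling energy cost in dollars for a fleet of servers."""
    kwh_per_year = watts_per_server * servers * 24 * 365 / 1000
    return kwh_per_year * dollars_per_kwh

fleet = 100_000  # hypothetical fleet size

oem = annual_cooling_cost(80, fleet)  # traditional OEM design
ocp = annual_cooling_cost(6, fleet)   # high end of the 3-6 W range

print(f"OEM cooling: ${oem:,.0f}/yr")
print(f"OCP cooling: ${ocp:,.0f}/yr")
print(f"Savings:     ${oem - ocp:,.0f}/yr")
```

Even at the conservative end of the quoted range, the per-server difference compounds into millions of dollars a year at hundred-thousand-server scale, which is the point being made about small cooling improvements making a big dent.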
You guys are a software company, you have a big app, a social network.

Absolutely.

So what does system on a chip mean? We hear big data analytics is important in the big data world. Where is this going technically, and why is system on a chip important?

Sure, it's a great question. We're really excited about the system on a chip design. A lot of it is the disaggregation of our infrastructure, and that's very much a software thing, right? We're basically saying, instead of cramming all the different bits and resources into one big box that's sort of a jack of all trades, master of none, we're going to really focus on solving one problem at a time, and disaggregate the server into its building blocks. System on a chip is a great way for us to focus on the compute element. Let's go figure out how we scale out massive amounts of compute without having to put a bunch of local storage and other resources in the box at the same time. And so the Yosemite contribution that we're announcing today is all about how to build a huge scale-out system on a chip focused like crazy on the web front end. On just rendering and serving web pages to the millions, or billions, of people that consume Facebook every day.

Jeff and I talk to a lot of customers out there, and you have two ends of the spectrum. You have the Facebooks of the world, who have super smart dudes like you doing engineering and software, some badass DevOps, you name it, it's all good, right? And on the other end you have the slow enterprise that's invested in decades of server consolidation, storage, disaggregated networks, storage now converging, and there's a gap, right? Now everyone wants to be like Facebook, but they can't hire the guys because they're all working for you or Amazon or Google. But they want to be like Facebook, and you're seeing mainstream adoption now.
And what we're seeing here today is the HP announcement. Comment on that trend. How does an enterprise, which could be general-purpose IT and/or infrastructure, become more Facebook-like? What are they doing? What does this trend mean for them being more hyperscale, large scale?

Sure, absolutely. I think we're starting to see a really important shift in the industry, enabled by companies like Hyve, by HP, and other solution providers who are able to come in and bridge that gap. Because you're absolutely right: to follow in the exact footsteps of Facebook, to build a full hardware design team from scratch and then go invent these products, is a lot of work. It makes sense at our scale, but if you're a tenth our scale, it probably wouldn't make sense to make that investment and start from scratch. And that's the great part about what Open Compute is bringing: now you can work with the vendors you already know and trust, and they can start bringing some of the efficiency wins, some of the scale-out wins, and some of the more open, modular approaches that Open Compute brings into your own data center. So we are starting to see the enterprise deployment folks saying, hey, we don't need to buy this gold-plated, really high-end stuff, because a lot of our problems actually translate very, very well to the Open Compute hardware.

The other myth that people are starting to see through is the myth of hardware reliability. A lot of people early on said, oh, this Open Compute hardware, designed by monkeys, there's no way it's going to work at scale. It's going to be flaky, it's going to crash left and right, these guys don't know what they're doing. The fact of the matter is that our failure rate on Open Compute hardware is better than what we saw on a lot of the white box or OEM hardware in the past.
And so we're actually creating a more reliable platform through its simplicity. It's such a simple design, there's just nothing in there to fail.

So talk a little bit about how the hardware landscape is shifting. You're the hardware guy. It was custom-built, high-end stuff back in the day, with big Sun boxes and big SGI boxes, and then everything went to white-label x86 boxes. And now it feels like you're really getting into more of a hybrid: you've got purpose-built boxes, but you're using components of standard x86 chips; Intel was in the keynote. Talk about how you guys pick and choose where it's purpose-built and where you continue to leverage industry-standard stuff.

Sure, it's a great question, and a lot of that is the art of managing a large deployment. A lot of what we look at is how we create really good playgrounds for the most exciting commodity technology in the industry. I think of commodities as things like the CPU, memory, storage devices, SSDs, hard drives, et cetera. We're not going to go out and invent a hard drive. It's an amazing science to build something as complex as a hard drive or an SSD, and that's not really our business. But what I love to do is find ways to deploy that kind of technology at massive scale in more efficient and more innovative ways. Better to manage, better to scale out, more efficient to operate, less expensive to deploy, things like that.

The other funny thing that you guys are doing, and I think it's fascinating, is that when we talk to the big enterprises, the big knock on them is that they don't have a ton of PhDs like you have at Facebook or Google or Amazon who are really focused on these super hyperscale deployments. And yet you guys have taken that work and now you're open sourcing it back out for their benefit.
So again, at first blush, it feels like you're open sourcing the keys to the kingdom. What are the benefits? What was that discussion, and why did you choose to go that way? And then what are ultimately the benefits back to you?

Sure. What we look at when we open-source our hardware is that it's core to our mission, and it's not just open-source hardware. Facebook has a wide range of open-source software communities that we participate in or created as well. Part of it is just core to our mission to make the world more open and connected: basically improving the world's ability to create connectivity systems, to create great networks, to build incredibly efficient data centers, and to write software that's much more efficient. Our HipHop Virtual Machine, HHVM, allows much more efficient PHP execution, so it lets people scale out large web properties. If you look across that entire portfolio of open participation, what we're really doing is making it easier to connect the world. That's core to our mission and our belief structure at Facebook. It's not a side project, and it's really very important to us.

It's great. So what's going on with Facebook for the future? You're looking at the landscape here, and you have kind of an interesting crowd at this event. You have software guys, you have hardware guys. What does it mean for the developers and the folks out there in the landscape? Because you have an operating system model going on here. You could argue that in the open, the data center operating system is in play. Internet of Things, software coordination, orchestration, automation: these are operating system buzzwords. Is there a land grab for the software-defined operating system right now?

That's a good question. I think what's exciting to see is that we're starting to really blur the boundaries of open hardware and open software.
It used to be that you had such a bright line between them. You got the closed hardware design, finished goods, all nice and spiffy, comes in a nice box, and then you could start to do things in open source software with Linux and the LAMP stack and all the great work that's come out of that path. Now we're starting to really mash them up, and I love it when we start blurring boundaries and breaking some old norms.

So there are a few things I'm really excited about that have come out today. The first is the FBOSS and OpenBMC announcements that we did, because now Facebook is actually announcing open software at an open hardware summit. Like, what the heck is going on with that, right? A lot of these are the bridge technologies, the ways that we allow software people to get really deep into the hardware. FBOSS allows you to program your own network switches. Who the heck would have thought we could have done that five years ago, right? When network switches were these sealed boxes with the warranty-void stickers, you couldn't even take the lid off. And now it's wide open, and you can hack your own software on your own network switch. And then OpenBMC, which I'm also really excited about, allows you to roll your own code in the little baseboard management controller that exists not only in our network switches, but in just about every server and storage device on the planet. So we're breaking that wide open and letting people hack and innovate down at the hardware platform management level, which is really uncharted territory for that hardware-software mashup.

So talk about the developers. Obviously you have old-school systems guys like me, and the young guns coming in who love open source. It's like options to them. They've never had to download Linux patches and do all the stuff we used to do in the old days of standing up servers. Now stuff's coming in by the truckload.
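As a purely hypothetical aside on what "program your own network switch" means in the FBOSS discussion above: the real FBOSS agent is C++ and drives actual switch ASICs, but the core idea, a forwarding table you can inspect and modify in ordinary software instead of a sealed appliance, can be sketched in a few lines. Every class and name below is invented for illustration and is not part of FBOSS.

```python
# Toy illustration of a software-programmable L3 forwarding table,
# in the spirit of an open switch stack. All names are hypothetical.
import ipaddress

class ToyL3Table:
    def __init__(self):
        # maps IPv4 network prefixes to next-hop names
        self.routes = {}

    def add_route(self, prefix, next_hop):
        self.routes[ipaddress.ip_network(prefix)] = next_hop

    def lookup(self, dst):
        """Longest-prefix match, as a hardware FIB would perform."""
        addr = ipaddress.ip_address(dst)
        matches = [net for net in self.routes if addr in net]
        if not matches:
            return None
        return self.routes[max(matches, key=lambda net: net.prefixlen)]

table = ToyL3Table()
table.add_route("10.0.0.0/8", "spine-1")
table.add_route("10.1.0.0/16", "rack-42")  # more-specific route
print(table.lookup("10.1.2.3"))  # longest prefix wins
print(table.lookup("10.9.9.9"))
```

On a closed switch, this table lives behind a vendor CLI; the point of an open switch stack is that your own code can read and rewrite it directly.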
I mean, we've interviewed you guys in the past, where the app developers say, hey, I need 20 zillion more servers, and boatloads of servers get provisioned magically, right? So for this new developer, the software guys, you've got a hardware culture developing, almost like a Homebrew Computer Club for the modern era. If I'm a young developer in the robotics club, or I'm programming, what do I do? What are the tools? What direction would you share with those folks out there?

Sure, that's a great question. We're in a really exciting time, I think, to be a new developer, because we've got this amazing maker movement, the Raspberry Pis and the 3D printers. I think that's opening up a whole new world of opportunity for folks to get their hands dirty, even at home or in their dorm room, and start hacking on hardware and software, building embedded systems, building IoT systems. And what we're seeing is that starting to move into the data center as well. So now we've got these young folks who are getting their hands dirty with DIY hardware and DIY software and doing all this creative stuff, and now they're going out looking for jobs, and we're saying, hey, working at a company like Facebook is just an extension of the maker community. We have a huge 3D printer in our labs. So if you know how to use a 3D printer, come on over, we've got one. And if you've hacked on a Raspberry Pi, you know what, we're doing embedded systems work as well. I actually see a great translation from the hacker-maker community into hacking and making at data center scale, at massive scale.

So what does this mean for computer science? Because now you're blending two worlds there. Is this the flattening of computer science? You've got physics, social science, security. It's a really interesting landscape if you're in a computer science curriculum or a related field.

Yeah, it's really cool.
What we're seeing is a lot of mashups. Just like the Raspberry Pi hardware-software thing I talked about, we're seeing a lot more cross-disciplinary folks coming in as well. We hire tons of folks from computer science as well as the electrical engineering and hardware engineering space, but we're seeing more and more folks who say, well, I'm doing some of everything. I did some robotics, I did some programming, I've done some hardware design, I love it all. And I love it when we see folks come in with that sort of hybrid degree program or broader set of experiences, because they can work at the boundaries. I love it when people can work at the boundaries of hardware and software, or even within the software world, at the boundaries of two or three different disciplines. Even outside of hardware, working between software design and computer imaging, I think there's a lot of potential for folks who broaden their experience and show up in the workforce with that set of capabilities.

Matt, we've got a break, we're getting the hook here. Final word, the bumper sticker for this event. What does this mean for the future of computing, changing the world, and connecting everyone together?

This is day one of a change in how people build scale computing. We're at the ground floor. I think if we look back in 10 years, we're going to say, wow, that was the point where people started to figure out that hardware didn't need to be closed. Just like when we started to download those Linux floppy disk images over modems in the late 90s, and we said, what the heck is this Linux thing? It's a hobbyist tool, there's a few crazy people in the corner doing Linux. We look back at that now and say, oh, that was actually the start of a revolution. We just didn't know it.
And so I think we're going to look back on this, on Open Compute, on open hardware, as the start of a new revolution. We just don't know it yet.

All right, thanks for joining us on theCUBE. We'll be right back with our next guest, live in Silicon Valley, covering the Open Compute Project. It's day one on the ground floor. This is the future right here: the data center, software meets hardware, a new way to scale up and scale out. We'll be right back after this short break.