I'm the chair of the Related Open Source Project Track. I'm very happy to have Cole Crawford here. It feels very appropriate to start at the bottom of the stack with hardware first thing in the morning. And I'll let him finish introducing himself. Perfect, perfect. So my name is Cole Crawford. I'm with the Open Compute Foundation. And who in the room has heard of Open Compute? That is fantastic. Aside from being with the Open Compute Foundation, I'm also the Cloud Advisor at the Linux Foundation. I've been involved with OpenStack since pretty much day one. Before there ever was OpenStack, there was a small company called NASA, and a small company called Anso Labs, and I was Anso Labs' federal partner. So we did a lot of government integration for Nova before there was Nova, and a combination of Swift and Glance. So I've been involved with OpenStack quite a long time. With that being said, I'd like to kick off and thank you, Lloyd, and thank the rest of you for being interested in hardware. We at Open Compute feel that open source software can only go so far in terms of completely open sourcing a data center. Typically, when I talk about cloud, I talk about what I call the three abilities of cloud: interoperability, portability, and compatibility, especially at the workload level. You can get some of these things from software. You can make sure you're using standard APIs, standard interfaces, et cetera. But x86 has been around for quite a long time, and there are tons of different standards, right? You've got SMBIOS, you've got IPMI. There are all sorts of different mechanisms for interfacing with hardware. But you'll notice from vendor to vendor, the implementation might be just a little bit different.
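That vendor-to-vendor drift is easy to picture in code. Here is a minimal sketch of the kind of shim operators end up writing when the same physical sensor comes back under a different IPMI label on every vendor's box; the vendor names and sensor labels below are made up for illustration, not taken from any real firmware:

```python
# Hypothetical example: three vendors' IPMI firmware name the same
# inlet-air temperature probe three different ways, so tooling needs
# a per-vendor alias table to normalize readings into one schema.
VENDOR_SENSOR_ALIASES = {
    "vendor_a": {"Inlet Temp": "inlet_temp_c"},
    "vendor_b": {"Ambient Temp": "inlet_temp_c"},
    "vendor_c": {"Temp_Inlet_1": "inlet_temp_c"},
}

def normalize(vendor, raw_readings):
    """Map vendor-specific sensor names onto one canonical schema.

    Unknown sensors pass through unchanged.
    """
    aliases = VENDOR_SENSOR_ALIASES.get(vendor, {})
    return {aliases.get(name, name): value for name, value in raw_readings.items()}

# Two boxes reporting the same physical measurement under two names:
print(normalize("vendor_a", {"Inlet Temp": 24.0}))   # {'inlet_temp_c': 24.0}
print(normalize("vendor_b", {"Ambient Temp": 24.5}))  # {'inlet_temp_c': 24.5}
```

Multiply that alias table by every sensor, every event-log format, and every BIOS quirk, and you have the integration tax that standardized interfaces are meant to eliminate.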
So Open Compute aims to standardize the interfaces and the APIs, if you will, that we communicate with at the hardware layer, and get rid of what we call gratuitous differentiation in the hardware space. So who knows where these quotes are from? Anybody? These go back a long way, right? This is Linus Torvalds and Richard Stallman. And, you know, we like to parody this message with Open Compute, and I think OpenStack has done something similar. Humans are innovative. We want to move forward. As humans migrated throughout history, we've always wanted the creature comforts, so we've always wanted to make the new world around us look like the old world around us. And that typically moves toward an open ecosystem, right? You can't say the same thing for power; I don't know if you travel internationally, but that's tough. But typically, we want to experience our world in a common way. So before there ever was standard x86 hardware, we had software innovation happening and openness being embraced in the early 90s, and Richard Stallman with the GNU project even before that. So let's take a quick look at this timeline. You can see how you had these big companies with great ideas. Think about how innovation happens, and look at the VC community today: a lot of the time, an idea gets spawned with an investment around a big differentiator, and you see a lot of this stuff start out very closed. But as the technology becomes more democratized and more commoditized, you see this shift. And you really saw Sun embrace this. Andy Bechtolsheim is the vice chairman of the Open Compute Project and obviously a co-founder of Sun, and you saw Sun make that move before anybody else. Obviously Java was open sourced.
And even the operating system was given away along with hardware. And then ultimately you see this path to Linux. I had BSD up there, but I thought for this conference it wasn't really relevant. But you do have a lot more options in the open space as it exists today. And by the way, stop me and raise your hand. As we've given this open source data center talk, we've found we like it to be conversational. A lot of people have a lot of questions about hardware and the relationship hardware has with software. Typically, if you look at the relationship between the IT software side of an enterprise and the IT hardware side, they're completely segregated, with different chains of command. And very much like OpenStack is doing, we want to smash that paradigm and create an ecosystem where harmony exists between both. So if you have questions during this presentation, please raise your hand and let's make this a conversation. So, the evolution of the cloud, following in that same vein: you start out very proprietary. VMware, you know... I didn't actually have IBM up on the left. If I were doing the slide again, I would put IBM out there in the 70s, because they were doing virtualization. But on x86, VMware, sort of the father and founder of virtualization, started out very proprietary. And then you saw Xen emerge. Xen was a game changer for x86. All of a sudden, people were doing a lot more infrastructure consolidation. You didn't have a great way to manage it, but you certainly had a lot more density. And then you had KVM, right? The Qumranet acquisition, which was funny: Qumranet, the company behind KVM, originally spun that idea out for their desktop product, SolidICE.
And the hypervisor was sort of a secondary technology to what they thought was their differentiating value proposition, which was SolidICE and the SPICE driver. So we had this infrastructure consolidation, but no great way to manage it, right? Raise your hand if you were responsible for any kind of IT operation in terms of process and server lifecycle management or software lifecycle management. So who has felt pain in the SDLC process for software? I mean, it's rampant, right? It's rampant. So we saw open source emerge. Around the time of Xen and KVM, you had companies like Puppet and Opscode give us tools, and now you've got Ubuntu and rPath and all these other companies giving us tools to help us manage. You had enterprise tools too. I should have put BladeLogic on this slide; I should have put Opsware on this slide, because these were the early tools, right? And when we think of managed services, anybody remember Loudcloud? Loudcloud was sort of the first SaaS model, right? It started out as SaaS. So we were finally given tools to help us manage our infrastructure. And then came the Enomalys of the world, and the Reuven Cohens to go along with Enomaly (who himself is an anomaly), and the Eucalyptuses of the world. These were the first open source cloud technologies we were given, albeit both of them embraced this open core model, which we all know is probably not the best model if you want to grow a big community and see traction in the enterprise.
Enomaly eventually was sold, I think, to Virtustream, and Eucalyptus obviously still exists, still a competitor of sorts to OpenStack. But in terms of community and driving innovation in an open way, with, again, the three abilities that we talk about for cloud, we're finally at a place where OpenStack is dominating in community traction and in involvement from an ecosystem perspective. So does what I put on here give it away? Does anybody know why I put these pictures up here? That's exactly right. The original 19-inch rack came from the railroad switching era. It's unbelievable. It had no specification for depth or height, only width. And this is what's in our data centers. This is being run. This is what Google does, what IBM does, what HP does, what Dell does. This 19-inch-wide rack is exactly what everybody in the industry does now, with no thought to how efficient it is. Which is pretty interesting. It's kind of fuzzy, but in this picture up on the upper left, those are the original 19-inch racks. We've just adopted that standard because it was there. So, again, following the history: in the 70s, you had the big mainframes, the scale-up technologies. You had the big Superdomes, and you had the Starfire series from Sun. And then finally in the 2000s, x86 really took off in the data center. We really started figuring out how to do HA. It wasn't necessarily all about fault tolerance anymore. We had enough workload, enough to process, that it started making sense for us to scale out instead of scaling up. And the Dells, the HPs, the IBMs of the world are in the positions they're in today because they recognized that. But like I said earlier, they recognized this, and then they all tweaked their versions of IPMI or SMBIOS or DMI, right?
So there are all these very small variations that made heterogeneous environments hard for us to cope with. Who here today, by standard practice, will deploy an OpenStack cluster on more than one kind of tier-one gear? And if I could ask, how's that working? The attitude of most C-level executives in enterprises is to say: if I want to ensure workload uptime, if I want to meet my OLA and my SLA, I'm going to have a homogeneous environment. It's going to be all Dell, all HP, all IBM, all whatever. Just like in the software world, right? Who here is doing an OpenStack implementation, with Puppet or otherwise, on a combination of, say, Ubuntu and Red Hat, or Piston? What is it? Secure Linux from scratch. Secure, yeah. So who's doing heterogeneous OpenStack software? Yeah, that's what I'd expect, right? So finally, we've engaged the tier ones, and even some of the tier twos and the ODMs over in China, to see that the 19-inch rack is an old way of doing things. We've got a new, more efficient standard, and just like OpenStack when it started, we've gained a lot of momentum in the last 18 months around Open Compute, judging by the sheer number of people who raised their hands. Obviously, there's big traction here, and we want everybody in this room to get involved, from a peer perspective, in a community for operating our data centers. So who here has signed up for a mailing list on Open Compute? Anybody? Okay, if I could ask: which mailing list? Great. So hardware management is... There is actually a motherboard vertical; I meant to put that back on. Matt Gammerdella from Nebula is now chairing the motherboard vertical, and it's important, right? So Open Compute consists of five verticals, very much like OpenStack consists of the various top-level projects that make up OpenStack.
We've got Virtual I/O, which is everything networking-related, everything that has anything to do with any kind of virtualized I/O or real I/O, because ultimately, in a scale-out architecture, a lot of your real I/O turns into virtual I/O. We've got hardware management, which is co-chaired by Matthew List and Grant Richard from Goldman Sachs; obviously, Goldman Sachs knows a thing or two about how to manage hardware. We've got data center design, and on my next slide I'll show you the benefits of why 21-inch racks and why data center design. We have the Open Rack, which pretty much contains all of what started out as the triplet rack and is now the individual racks that make up Open Compute. And then obviously storage, which I currently chair. So if you're interested in storage... actually, if you're interested in any of this, come up and talk to me, but if you're interested in storage, I'd especially like to talk to you about Open Compute. Storage makes up all of our cold storage, all of our near-line storage, all of our production storage. This is the vertical where a lot of the companies you see in the middle are focusing. The two most interesting technologies for me personally in Open Compute right now are Virtual I/O and storage, only because I'm closer to those worlds; I'm not saying anything against the other verticals. Just in terms of, say, OpenStack: you've got multiple companies here representing storage technologies. You've got DreamHost out there and Inktank. You've got Red Hat here with Gluster. You've got SwiftStack, right? So there are tons; EMC and NetApp are all storage-focused. There's a ton of traction in terms of storage around Open Compute. And one more thing on this slide: why Open Compute, right? I'd like to just go over the history of why.
Anybody remember the hard drive shortage we had 18 months ago? This was a real problem for Facebook, a really big problem. If you can imagine the global growth that Facebook had and continues to have, put yourself back 18 months: in the U.S. and globally, Facebook was growing like crazy, and they were having a hard time sourcing enough drives to meet the capacity demand for their growth. They were buying from anybody; it didn't matter. But they were having a hard time, again, with interoperability, compatibility, and portability of their workloads on all this hardware. So what they did was engage some of the big ODMs in China, and they said: listen, we want to design a box. We want anybody to be able to build this box. We'll pay for support. We'll either lease it or buy it; it doesn't matter. But we want this box to be open source. We want anybody to be able to build it; we'll buy it from you, we'll source it, and we can then ensure that as we build out our data centers and our IT infrastructure, we can guarantee to some level that our workloads are going to work on this hardware. So this really became a supply chain management issue. Just like Rackspace initially was figuring out how it would move off of its VPS business and into real cloud, this was done at Facebook because it was a huge supply chain management problem. The benefit was that the community was going to gain from this, because this wasn't core IP for Facebook. This was something that ultimately, for anybody doing scale-out workloads, was going to be a good thing, so they made the obvious choice to open source it, and we're all better for it. So back to the 21 inches and efficiency. Facebook did a lot of work on efficiency as they were doing their data center design. They ran a lot of numbers, a lot of metrics.
You know, it started out as a 1.5U unit, because a 1U fan was too small and 2U didn't offer as much density for a very nominal increase in cooling. So 1.5U ended up being the initial standard for Open Compute, largely because of its efficiency. And you can see here, on the left, the Prineville data center that Facebook operates. This is actually fairly old data; it says 1.08. Lately we've been seeing closer to 1.02, 1.03. That's huge, and 1.0 is sort of the magic number for PUE, right? Whereas Google's best data center is operating at about a 1.3 with the infrastructure that they run. So you can see what we've done in terms of the 21-inch rack and the efficiency of the data center design. And who here has seen the video on how Open Compute and the Prineville data center operate, with the misters and the evaporative cooling? I'd encourage you to check out that video. It's really cool the way that data center is run, and I believe that data center is open if you happen to be up there; I think you can actually go inside. It's quite impressive. So, roughly 15% more efficient than Google, by doing the analytics around efficiency. So here we are. We finally have production-ready open source software. We've got vanity-free open source hardware. Obviously, the question is: what do you do next? And the right answer for us, in terms of Open Compute, is a certification path. My screen is cut off a little bit, but that's okay; up at the top it would say "The new standard: Open Compute certification." We're already working with the community, and not just the OpenStack community, though the OpenStack community, as we'll see on the next slide, is certainly a very important community for us.
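Those PUE figures are easy to sanity-check. PUE is total facility power divided by IT equipment power, so for the same IT load you can compare the total energy drawn at a PUE of 1.08 versus 1.30. A quick sketch; the 1 MW IT load is purely an illustrative figure, not from the talk:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_kw

def total_power(it_kw: float, pue_value: float) -> float:
    """Facility power needed to deliver a given IT load at a given PUE."""
    return it_kw * pue_value

it_load_kw = 1000.0                          # illustrative 1 MW of IT load
ocp_total = total_power(it_load_kw, 1.08)    # 1080 kW total at PUE 1.08
other_total = total_power(it_load_kw, 1.30)  # 1300 kW total at PUE 1.30

# Fraction of total facility power saved at the lower PUE:
savings = (other_total - ocp_total) / other_total
print(f"{savings:.1%}")  # ~16.9%, in the ballpark of the "roughly 15%" quoted
```

In other words, at these numbers an identical IT load draws roughly a sixth less total power, which is where the "roughly 15% more efficient" claim comes from.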
But we feel that while it's great to offer open source hardware, what if we were able to let our software partners and the community adopters of software benefit from a validated reference architecture? I go around and I often quote Chris Kemp from Nebula, who I believe at the last summit said: you can have Exadata without the Exadollar. Apologies to anybody from Oracle if you're here. But we feel that's possible, right? We feel you can do validated reference architectures and certified software stacks on open source hardware. And if we offer this certification path... who here represents a software company with a product? Anybody? Okay. So what if we were able to give you a hardware partner for free that could be adopted by the government? What if we could get Common Criteria certification for this? Or what if we could get an SI to be a channel partner for you? These are key themes for what we want to do. So we're working very closely with Rackspace; the OCP Foundation is working very closely with Rackspace. Rackspace, like Facebook, is faced with a growth challenge, and that's a good thing, right? But what if you had turnkey integration and SI integration? What if you could certify your software on one piece of hardware, and you knew it didn't matter whether your customers deployed Dell or HP or IBM or ZT Systems or Supermicro or, boy, I'm so sorry, Silicon Mechanics, right? What if it didn't matter? What if anybody could build it, and your customers could put it in their data centers and know that, down to the transistor, it's the exact same piece of hardware? Then when you throw that in your data center, you don't even have to worry about the interoperability or the portability of a heterogeneous environment, because it's completely supported by an SI, and you can offer incentives for that SI to be your channel partner.
So this gives you a global footprint to base your company around, or your IT ops organization on. So, yes, absolutely. Last week, some people were out in Raleigh, North Carolina; I think that's what you're referring to. Out in Raleigh, one of the first community-led initiatives inside of Open Compute was this Roadrunner concept. You had Roadrunner and Decathlete, which are the AMD and Intel specs respectively. And this is the first time somebody who wasn't doing something specific to Facebook's scale-out growth has designed something for the community and given back to the community. And this is exactly what we want, right? This is what we want out of the OpenStack community. We want everybody in this room to come work with us and build an ecosystem around open source software and open source hardware. Because as we move toward the software-defined data center, we need open interfaces and open standards to interface with. If you listen to Steve Herrod from VMware talk about his vision for the software-defined data center, I don't think you can get there in any massively scalable way without doing this. I think you're going to need this open source aspect on the hardware side to do that. And on the certification path, Open Compute took a different path. Our board is made up of invites; we invited our board of directors to participate in this. The foundation operates on a healthy nonprofit budget, but every member of the board of directors was asked to be on that board, and it's not, well, it's a term with a bad connotation, but it's not the pay-to-play model, right? Open Compute is not a pay-to-play model. And there's a reason for this. It's not that that model doesn't work: the Linux Foundation does it successfully, and OpenStack is doing it successfully.
But it's kind of a misnomer when you say open source hardware, right? To truly do Open Compute hardware yourself, you'd need to go to China and spin up a multi-million-dollar manufacturing company, which we can't do. Open source software, on the other hand: if you're really a masochist, you could write it on a tablet. You could do it with any piece of software, on any OS. So with open source hardware, we benefit from the relationships we have with the ODMs and the tier twos and the tier ones who are involved, who see this train coming, no pun intended from the last couple of slides. These guys see that we're moving toward open standards and open innovation, where keeping things in the dark and keeping things proprietary doesn't make sense in the long term. It certainly makes sense from a financial perspective in the short term, and when an idea is being brought to conception, it makes a lot of sense: there are a lot of bugs, a lot of things where that closed architecture really makes sense. But Rackspace went open with OpenStack, and I think we're all better off for it. I truly do. As we go forward, we want to see these things shift to an open ecosystem, and I think we're there. And the reason for the universities, right, is to have a neutral party, if you will, to make sure that software vendors get the benefit of this and adopters get the benefit. Just like OpenStack, there are really three communities you serve in Open Compute: we've got our end-user adopters, we've got our manufacturers, and we've got what we call the new ISVs of the world, these new software vendors that interface with Open Compute and ultimately have customers of their own.
And by having universities get involved... this is actually something Oracle did in the 90s and early 2000s, over in, I want to say, Brazil. They spun up what they called Oracle University, I think. And we want to do the same thing. So we're working with a number of different universities. I don't know that I'm able to say which universities yet, but there's more than one. I just had one of the most amazing conference calls of my life with the Haystack team. Anybody familiar with the MIT Haystack project? The black hole telescope? We just talked to those guys, and that was such an amazing conversation to have. These guys are collecting 64 gigs a second. 64 gigs a second. And they have roughly three or four campaigns a year. So imagine what that data looks like. This is a telescope made to basically track black holes and actually see the event horizon. Very cool project. So we've been on the phone with MIT, and there are other universities involved. We are going to move forward with the certification path, in a way where the community benefits from having an independent body help us with that certification, and at the same time it offers a great learning experience for those individuals. So, how to get involved? We actually use the Open Web Foundation CLA, like the CLA that OpenStack uses; you can sign the Open Web CLA. If you found one of the verticals from earlier interesting, please get on there and contribute. A lot of this right now is being led by Facebook and the core members of the Open Compute initiative, but we're slowly seeing that transform into community. I can't tell you how many hackathons and Silicon Valley OpenStack meetups I've been to with Lloyd, but we're starting to see that with Open Compute. So get on a mailing list and offer up pizza and Coca-Cola. We're being recorded.
Alright, offer a beer; we've got a beer conference next to us, for crying out loud. So get something together, get people together, and this is your opportunity to be a hardware vendor. If there's a technology that you want to see introduced, instead of the tier ones telling us what they think is good for us, let's tell them what we want. Just like the OpenStack initiative, we are now empowered to tell the world what makes sense for us, and we're able to basically change the world. So with that, I thank you, and I want to open up the floor. I wanted to save about 10 minutes for questions, because I think this is 40 minutes and I think we're right at 30. So, questions? Let me get back to you on that, because we do have a lot of... We can talk offline. We do have a lot of those same companies that are participating in that venue participating here. But you'll see the Fusion-ios of the world and others that are interested. So I'd love to meet with you afterward and talk with you. Yeah. So you asked about the MD&E, right? We don't have anybody currently leading that initiative. If you think about Facebook's big growth: Lady Gaga takes a picture of her knee, and it's the biggest thing for a week, right? And then nobody sees it again. So cold storage is very important to Open Compute and Facebook right now, but we do have a number of vendors. In fact, in the storage vertical, the one approved spec right now inside of Open Compute is a spec called Knox. And Knox is just JBOD; it's very dense JBOD. It's 2U, and there are two sleds, and because of the 21 inches, you can fit 15 drives in each of those sleds. You get 30 drives, as dense as you want, inside of 2U, and if you think about 42U, that's a lot of disk. But that's the core focus: cold storage.
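The density figures quoted above (15 drives per sled, two sleds per 2U chassis) work out like this; a quick sketch, where the 4 TB drive size is my own illustrative assumption rather than anything in the spec:

```python
DRIVES_PER_SLED = 15    # the 21-inch rack width fits 15 drives per sled
SLEDS_PER_CHASSIS = 2   # two sleds per 2U chassis
CHASSIS_HEIGHT_U = 2
RACK_U = 42

drives_per_chassis = DRIVES_PER_SLED * SLEDS_PER_CHASSIS   # 30 drives per 2U
chassis_per_rack = RACK_U // CHASSIS_HEIGHT_U              # 21 chassis in 42U
drives_per_rack = drives_per_chassis * chassis_per_rack    # 630 drives

# Illustrative raw capacity with hypothetical 4 TB nearline drives:
raw_tb_per_rack = drives_per_rack * 4
print(drives_per_rack, raw_tb_per_rack)  # 630 drives, 2520 TB raw
```

In practice a rack also needs room for power and networking, so the 630-drive figure is an upper bound, but it shows why "if you think about 42U, that's a lot of disk."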
But we do understand, right, we're not negligent in understanding the need for even hybrid specs. Sure, an external SAS cable plugging up to compute works, but we're working with a number of companies on hybrid specs where you've got, say, a very dense multi-core environment on one sled and then 15 drives of tiered storage below. So certainly, hierarchical storage management is important to Open Compute, and we're working through what that looks like. Flash and solid state, high performance, low latency, has not been something we've been focusing on yet. But there's opportunity: if you go to opencompute.org, at the very top there's a Get Involved link, and under Get Involved you can go down to the GitHub; all of our specs are out on GitHub. You can contribute on the mailing list and say, hey, I've got this idea for a low-latency, high-performance box. And the beauty of this is that long term, this could very much look like a Groupon, where you could source this, and if the spec takes off, it gets voted in and becomes an actual Open Compute spec, which is fantastic. I would like to talk to you about that offline, though, just because there's some action there; we are definitely involved in some of those conversations, but it's just not been a core focus yet. I think what's changed is... oh, sorry, sorry. So his question was: in the 90s there were interested parties, and there was actually an initiative to do a similar thing, so what's changed since the 90s to allow for this? I think the answer is pretty simple. We now have, globally... I would imagine that at least 50% of this room has operations in at least two data centers. Am I right? Can I see a show of hands, more than one data center? Sure, right.
So we have reached this scale-out point where it no longer makes sense to have the scale-up, fault-tolerant deployments. And, you know, I think open source was initially met with some criticism in the 90s. It happened to Sun; I mean, look what happened to Linux, right? It's crazy when you look back and think about the hurdles Linux had to overcome to penetrate the data center, and OpenStack is going through that right now as well. But I think the workloads are ultimately what determined the ability of an initiative like this to take off, and this is why we're so interested in differentiated specs. Facebook realized that cold storage alone doesn't make sense for them. Just offering JBOD, is that going to help the OpenStack initiative? For some scenarios it would, right? But this is why we wanted to open it up: Facebook realized this wasn't just about them, it's about everybody else. And they do understand, with Hadoop and with OpenStack and just the nature of scale-out computing today, that people are again looking for an open ecosystem to contribute through. And I think, by design or by chance, Facebook happened to be in the right place to start those conversations with all of the appropriate vendors and all of the appropriate tier-one partners. So I think two things have changed. Number one, people are more friendly to open source. And two, the workloads, the scale, the nature of the scale-out workloads in this ecosystem lend themselves more nicely to something like this than to lock-in. People don't want lock-in. People used to want lock-in, right?
People... you know, Oracle still does a great job of saying: it's us for Linux, it's us for the hardware, it's us for whatever, one phone number, right? That's what they've operated under for a while now, and it's still successful. But the bottom line is, there's a comfort level associated with open source software now, and I think that's actually a big game changer in terms of being able to adopt something like this. Any other questions? With that, thank you very much. Thank you very much, Cole. I would say either during the conference or after the conference by email, get hold of Cole. When I met him about a year ago, he really impressed me right away with how great he is at connecting people with answers, and if he doesn't know the answer, he'll find the right person. So whether it's about Open Compute or OpenStack, he has a wealth of knowledge about both. So thanks very much again. Thank you.