We're actually a little after time, so I'll probably go ahead and get started. People can wander in. It's too bad; I actually brought gifts. So I'm going to do something a little bit unique today. Very quickly, I want to thank Lloyd. I don't know where Lloyd is, but Lloyd runs the open source track here at the OpenStack Summit. I did this talk in San Diego, and when I asked the question I'm about to ask, there were about three people, maybe four, that raised their hands. So first, let me introduce myself. My name is Cole Crawford. I'm the Chief Operating Officer of the Open Compute Foundation, similar to the OpenStack Foundation. We're a non-profit 501(c)(6), and we're here to serve a function just like the OpenStack Foundation does. That being said, thank you all for coming. Who here has heard of Open Compute? That is awesome. That's good news. Like I said, there were about three people six months ago that raised their hands, so thank you for coming. I know it's some people's last session, I know we're near the end, and I know you've all been sitting through a bunch of very, very technical talks about lots of wonderful software, considering you like Python. So I wanted to do something a little bit different. Last time in San Diego, my talk was fairly technical, kind of the history of how Open Compute got started, and we'll touch a little on that this time around as well. But I thought to myself: all right, this is the last day, and these folks have sat through a bunch of technical talks. If I were giving a TED talk, what would I talk about? So this will be broad and sweeping, and hopefully you'll appreciate the message, and then we'll get into the depths of Open Compute and how it relates to OpenStack. I also want this to be very interactive, so please ask questions.
I don't enjoy just speaking at you for 40 minutes or however long we're going, so if you have questions on anything, probably later in the presentation, please speak up. Raise your hand. Let's make this a conversation. I'm going to stop occasionally on a slide and we'll do some trivia. I've got some pretty cool Open Compute micro buffs. Have you seen these things yet? They're pretty cool; just put them on your phone and they'll clean your screen, or your tablet or laptop screen. They're kind of neat. We've also got two BeagleBoards to give away. So we'll do some trivia interactively, and we'll go from there. So: humans are made for collaboration. Think about caveman times. The lonely caveman trying to start a fire is nowhere near as efficient as the cavemen who hunt and gather together and bring food back. As we've progressed as a species, we started off saying, hey, that's mine. Freud had the concept of the id, the ego, and the superego. And when you realize you can let something go, start sharing it, and not lose out, that's when innovation and collaboration can occur. A great example of that is this, and this is our first trivia question. Who can name the craft on the left? Not Sputnik, though that's close. This was the first spacecraft to take a human into space. It's worth one of these, and so is the person it took. Anybody with the spacecraft? All right, we'll save it; there's lots of trivia in here. It's called Vostok 1. The message is this: in the space race, you had the Soviet Union and the Americans racing for dominance and differentiation. They were obviously doing it very differently, and they both thought it was going to be a huge accomplishment for their nation. In the end, it didn't really serve to do much but continue the Cold War, et cetera.
But now, although the Russian space program still runs independently, it is obviously the way we get to the International Space Station today. So something very closed ultimately ended up very open. So what's the universal constant? Anybody? Change. The only thing that doesn't change is that everything changes. And this is something I like to talk about. If you look at this curve, it actually follows the process of grieving fairly closely, which I love in the context of Open Compute. There are a lot of incumbents here, and we're here to stand up and say it's not dominated anymore; this is going to be a community effort. The text is very small for me up here, but obviously you start with denial and slowly, over time, reach acceptance, and I'd like to think that Open Compute could be a path to enlightenment. It's certainly a path to efficiency, and we'll see how the community grows and how enlightening the project is for everybody. So this is another great movie, and one of the trivia questions here. We'll talk in a moment about the bitter truth being more powerful than blissful ignorance, right? This is the red pill, blue pill dilemma inside The Matrix. Anybody know the character that wanted to take the blue pill? We're all geeks. Come on. Somebody should know this. Rick, who is the Matrix character who wanted to take the blue pill? Cypher. You already have one though, right? Gosh. Okay. Another trivia question for one of these. You've all obviously heard of Open Compute; who knows the original three founding Open Compute members? There's one. That's one. The original founding members of the Open Compute Project were three companies. I'll give you a hint: they're on this slide. Facebook. Rackspace. It goes to the person with the last answer. Good job. Community effort, right? This is awesome. So you look at sort of the other guys, and by other guys, I want to clarify: I don't mean Dell and HP and IBM and those guys.
You have much more proprietary companies out there differentiating in ways that don't even line up with the vision of things like OpenStack and Open Compute. I'm not going to name names. What does that say? No, you guys are good. You guys are members. HGST is a member. You guys are excellent. But there are companies out there that make lots and lots of money by building things that start with EXA, where you could substitute a dollar sign. Not Dell, not HP, not IBM; we love those guys. They're obviously supporting OpenStack, and we've actually got representatives from both HP and Dell on the incubation committee for the Open Compute Project. But there are people that are completely lost, completely ignorant of the process of change and of the fact that the world around us is getting faster, innovation is happening quicker and quicker, and people are doing things out in the open, which presented me with some really great slide opportunities that I didn't capitalize on. So curiosity killed the lion, right? There's kind of an interesting duality there in the slide. Every great innovation can be framed with "what if," right? When you start asking questions about the world you're in now and the world that you could create, that's when powerful ideas happen. That being said, this is one of my favorite trivia questions, and I've actually got two coming up for you. Who can name the cat? Who can name Alice's cat, for a BeagleBoard? Yes, yes, who is it? You can just come up here and I'll keep handing you things. And if you don't know what the BeagleBoard is, it's also an open source project, originally initiated by Texas Instruments. We like this. You can hack on these things; you can do a lot of cool hacky things with these boards. So congratulations for that.
When Alice started looking down the rabbit hole, it was obviously curiosity that led her there, and ultimately it ended up in a very good thing. In this community we see the same thing. Look at OpenStack as an example: it started with a what-if question. I think Rick Clark was kind of the guy that asked that question, right? For those that don't know, Rick is now with Cisco, great sponsors of the OpenStack project. And I was on the other side of this. I was on the government side, working with NASA and the Anso Labs team on Nova. Rick was at Rackspace, and they had the whole Swift contribution. It was a combination of Rackspace and NASA saying, hey, we've got this coming up here. NASA obviously can't do everything in the public space; for national security reasons or whatever, they need that sort of privacy. Rackspace was growing very quickly, and they needed something that was going to go beyond Slicehost, beyond that VPS. And they started asking the questions of what if. If I've got my story correct, Rick basically said, what if we called NASA and started working together with those guys to create an alternative? And from there on, it's been what if, over and over again, in terms of OpenStack's progression and innovation. What if we had a block storage capability? What if we had an image store? What if we had a way to view this stuff with a self-service provisioning portal? What if we had networking as a service? It's always that what-if question. When you do that, you end up going far beyond what you thought possible, right? When you start pushing those types of boundaries and limits, you achieve heights that you otherwise would not even expect. And I did not think that I would see, in my lifetime, somebody jump out of a capsule at the edge of space with a suit on. So that being said, for one of the micro buffs, you can name either the project or the diver. Felix, boom. And the project? Anybody?
Perfect. Great. So by working together and just asking questions, you can achieve really, really powerful things. And I want to reclaim a little bit of your time today; I'm going a little quicker than I normally would, so we're about on time. But the whole message of working together, and what you can achieve when you do that, is just fantastic. So: what if we made a 2U storage array? What if, instead of the incumbent 19-inch rack standard, which actually came out of railroad switching in the early 1900s and was then adopted by the music and film industries as the way they would rack their gear... why is that efficient for data center computing? When you start asking those questions, the combination of why and what if leads you to very powerful things. So Facebook started asking these questions. What if we made it 21 inches? And what if we originally made it 1.5U instead of 1U, given that 1U fans are not very efficient and 2U fans are only marginally more efficient than 1.5U fans? And being mechanical, fans take a lot of power draw in the data center. So they landed originally on a 1.5U chassis that was 21 inches wide. And they had the foresight to think: okay, maybe for compute, 21 inches isn't all that necessary, but for storage? And how many of these do we have? Who here... yeah, this is a good question. Who here knows what Facebook's biggest challenge was in terms of their data center, just operationally? Those are big problems, but it's storage, right? Facebook was growing exponentially. Exponentially. Order-of-magnitude growth in how much storage they were having to archive, from production-quality storage all the way down to very cold storage. So they said, what if we did 21 inches? And what if we could get an extra 3.5-inch drive horizontally in every 2U? As drive density gets better, our storage story becomes much, much better. What if we stopped using the 240-volt power spec?
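The "extra 3.5-inch drive horizontally" claim is just width arithmetic. Here is a rough sketch; the usable rack openings are assumptions for illustration, since real values vary by vendor and drive carrier:

```python
# Rough arithmetic behind the "extra drive per row" claim.
DRIVE_WIDTH_MM = 101.6          # width of a 3.5" drive (SFF form factor)
EIA_19IN_OPENING_MM = 450       # assumed usable width inside a 19" rack
OPEN_RACK_OPENING_MM = 537      # assumed usable width inside a 21" Open Rack

def drives_per_row(opening_mm, drive_mm=DRIVE_WIDTH_MM):
    """How many 3.5-inch drives fit side by side across the opening."""
    return int(opening_mm // drive_mm)

print(drives_per_row(EIA_19IN_OPENING_MM))   # 4
print(drives_per_row(OPEN_RACK_OPENING_MM))  # 5, i.e. one extra drive per row
```

Multiply that extra drive per row across every 2U of every rack in a Facebook-scale fleet and the density argument becomes obvious.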
When you step down from 480, you're actually just wasting energy. All you're doing is wasting energy in your data center when you step down from 480 to 240. Not only is it more equipment you have to buy, but there's extra electrical engineering that has to happen. Then you've got power supplies that are not as efficient because they're having to convert something different, and voltage spikes that can happen. You end up with a much less efficient and much more expensive solution to the problem you're trying to solve, which is efficient at-scale computing. And yeah, good question for one of these: what's the magic number when you measure data center efficiency? First, who knows what it's called? Who knows what you measure data center efficiency in? Peely, you've already got one. Who else? You said it, though; I heard it. And what's the magic number? 1.0, exactly. And why is it 1.0? Anybody? Exactly. For every watt you're consuming, you have a watt going to the compute you're actually doing, which is a good way to measure it. So Facebook started asking those questions. What if we did this? What if we actually took out all of the power supplies? What if we used lead-acid batteries as a backup solution? These are amazing questions. Talk about a generator and a backup program that doesn't rely on this, and you've got a very, very expensive buildout on your hands in terms of what you need to do in case of power failure. And what if we incorporated blade infrastructure into a community-driven open standard? So the latest 3.0 spec over here... they gave me a laser, it's dangerous... over here you've got the latest 3.0 version of Open Compute, which is called Winterfell. That would have been a darn good question to give one of these away. And inside of Winterfell, each server takes up exactly one third of the 21 inches, so you can fit three side by side.
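The efficiency metric being asked about, PUE (Power Usage Effectiveness), is simply total facility power divided by the power delivered to the IT equipment, so 1.0 means every watt the building draws reaches a server. A quick sketch, with made-up kilowatt figures for illustration:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal: every watt goes to compute."""
    return total_facility_kw / it_equipment_kw

# A "standard" data center around 1.5, versus a highly optimized one:
print(pue(1500, 1000))  # 1.5  -> 500 kW lost to cooling, conversion, lighting
print(pue(1030, 1000))  # 1.03 -> close to the magic 1.0
```

Every conversion stage, like the 480-to-240 step-down above, shows up directly in the numerator, which is why removing stages moves you toward 1.0.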
You get a very dense, very power-efficient, and very performant solution. And because of the way that we've architected, and I say we, it should say Facebook, full disclosure, I'm not a Facebook employee. Because of the way that Facebook architected this stuff, they were able to start thinking about bigger things, like data centers. What if we designed a data center that was purpose-built for the hardware we're designing? And what if we made that data center geographically aware in the context of how it gets cooled? So Facebook went back to their Palo Alto basement and started designing a data center that would ultimately achieve that 1.0 PUE, that magic number. And by the way, the standard data center operates at... anybody know the standard data center number? 1.5, roughly. There are companies like Google whose best published PUE is about 1.3. Facebook's Prineville data center, I think, just hit 1.03. So we're getting very, very close to that 1.0 number. But Facebook didn't want to do this alone. They came up with this great project called Project Freedom, which, I'm not ashamed to say, was freedom from Dell and HP. And they turned that into Open Compute. It wasn't necessarily that they wanted out of buying from Dell or HP; that was never the case. If anybody here operates a heterogeneous data center and deals with IPMI problems between the two, you know it's hard to operate a heterogeneous data center when you have different IPMI specifications and different out-of-band management capabilities. And these companies, obviously, going back to that second slide, are differentiating on things that aren't important to the end user. This was very much a pull process, where Facebook said: this is what we want. We want to pull from the ODMs. We want to pull from the other people that can manufacture these things, listen to us, and ultimately build them.
And they instantly ended up with a community that said: you know what, we want this stuff as well. We want to be able to design purpose-built hardware for purpose-built data centers to achieve that magic PUE number everybody's hoping we can achieve. And this speaks not only to the C-level executives that are cutting the check for power and cooling, but also to all of the executives that are focused on carbon-neutral data centers and green computing. So Facebook wasn't alone. Intel very quickly realized this; I should have put Intel's logo on this slide. And this was profound not least because Intel was a founding member of the ODCA. The fact that Intel was willing to take things that could have been funneled through the ODCA, which is very much a vendor-driven consortium, to something like Open Compute, which is a very community-driven consortium, says a lot. And by the way, we didn't start as a standard. Open Compute started as just a spec-level reference architecture; the community is what really determined that it was standards-based. And since then, we've attracted companies like Rackspace. So inside of the Open Compute Foundation, we run multiple projects, just like OpenStack runs multiple projects. We have our variations of Nova and Glance and Swift and all the great stuff: we manage motherboard, hardware management, certification, disaggregated I/O, which we'll talk about in a second, Open Rack, storage, and data center design. Those are the related technologies we have relative to OpenStack. So, giving this stuff away: Rackspace said, you know, we're growing really fast. We certainly operate at scale, and we're getting closer and closer to hyperscale. What if we took the Open Rack and purpose-built it for our needs? And you end up with a Rackspace Open Rack. The Open Rack is the 21-inch enclosure, bus bars, et cetera. So, any questions so far?
You're all with me? You're just tired. So: open is better at scale. Somebody in the room and I were talking yesterday, I don't see him, but we were talking about how in open source, and with things in general, to make them less painful you do them more often, and that at least gets you comfortably numb to the situation you're in. So in OpenStack land, people deploy 75 times a day, because deploying code is hard, so you iterate on that code a lot. And this is the same process we follow in the Open Compute world. It usually takes upwards of 14 months from design to manufacturing inside of a traditional push-based process. We have gone from complete design to complete fabrication in less than six months. It took one of our contributors six months to go from something on a napkin to an actual board that we showed off at the Open Compute Summit in January. So we are a platform for rapid innovation, and we believe that differentiation can matter. We work with, it's not a spin-off, but they've branched out the Open Rack in China and called it Scorpio. It's still largely based on Open Rack, and you have some of the tier-one vendors that have put some of their people and time into making it better. As of 2.0, it sounds like they're on track to help bring that back into the latest spec we have for Open Rack. So now we've got international collaboration on a purpose-built Open Rack that, again, is geographically aware of where it's actually being built and used, which is cool, right? Because of the difference between, you know, the needs here, or in the data center here. Which, by the way, for one of these: who knows where it is? Who knows where this data center on the left is? Prineville. Who said it? Awesome. That's right. Prineville, Oregon was the site of the first data center that Facebook built to be Open Compute.
So in China, Tencent, Baidu, and Alibaba worked together on Scorpio, and they're doing things that are relevant to their geographic location. We'll be over in Tokyo next month, running an OCP engineering summit where, I kid you not, there's a proposal inside of Open Compute to hang racks from the ceiling. Because when earthquakes happen in Tokyo or elsewhere in Japan, having these things be able to swing from the ceiling would be... this is a real submission. We're going to go through the thought process of what that would look like. And again, it's a what-if question, right? What if you attached racks to the ceiling instead of the floor? It's absolutely amazing. And I guess the third thing is that one of the ways we're really trying to help the OpenStack community is with certification. OpenStack has obviously come so far. It has some distance to go, and there are a lot of great companies working on making sure it ends up very robust, very feature-rich, and very stable. That being said, you have great companies like Rackspace... great, I mean, there are too many to list. But OpenStack still has a question it needs to answer for itself, and that's: what is it, right? Is it APIs? Is it implementation? Can it be certified? Should it be certified? We want to help with that question, because you certainly certify hardware, right? You may not certify software, but you do certainly certify hardware; you want to make sure your data centers don't burn down. So that being said, yes, absolutely, not some other software. And the question before OpenStack is: is it the implementation? Are there bits that are associated with certification? Maybe. Is OpenStack nothing more than defining the APIs? And if so, are they the OpenStack APIs, or a combination of the OpenStack APIs plus the AWS-compatible APIs? Are you certified, or cross-certified, if you're using Swift versus Ceph, right? Or Gluster or whatever.
Or are you... exactly. And so that's the question before the foundation now. I'm sure anybody on the board has heard that all week, maybe longer. But we want to help, right? We can certainly say: well, at least from the physical layer up to where we stop, you can be certified. So we at least provide a platform for certification in an open way, where, again, the Dells, the HPs, the IBMs, even the people that sell the thing that starts with Exa, could come and contribute. So one of the things that open source communities always need to do is keep moving forward, right? You've seen open source projects that start up, get a lot of momentum, and then sort of don't do anything. The community leaves because they feel like the traction is gone. And open source projects live and die by their communities, right? Look what happened to the MySQL community after the acquisition. It's important for communities around open source projects to continually keep asking questions, to continually move forward to change the world. That being said, going back to something I said earlier about Intel: we've got this contribution that was given to us, as opposed to the ODCA, which is a very powerful testament to this particular contribution, because it's really going to keep us moving forward. On the next slide we'll get into it. But for a BeagleBoard: what is the name of this contribution? Given to the Open Compute Foundation. That's it. So this is silicon photonics. Anybody know what silicon photonics is going to do? That's true. It will act as an interconnect, but it acts as a 100-gig interconnect that doesn't care what it's talking. It doesn't matter if it's doing RDMA, it doesn't matter if it's talking PCIe; it just doesn't matter.
So, where we're all used to racking servers, compute servers, storage servers, cache servers, whatever they are, that's how we think of a typical rack today, right? With silicon photonics, which is a contribution by Intel, we'll just bypass the "emerging" slide and get to the future, where you have what's called disaggregated I/O, or the disaggregated rack. Because of the interconnect, and this is literally photons, literally light acting as the interconnect, at much more meaningful distances than what you achieved with fiber before, which was also limited in what you could move over it, you get to a place where I can take memory from server A and disk from server B and CPU from server C, and now I'm racking capacity, right? I'm no longer racking a server with X amount of capacity; I literally just have X more capacity that I can take from anywhere, which is a very cool concept. And we're not far from this, right? We're not far from this at all. Obviously there's the connector, and that's not Photoshopped; that's a very real connector, and that's Frank Frankovsky on stage at the summit in January. That interface and that cable have been donated to the foundation, so anybody can eventually go and manufacture this connector and use it on standard open principles and standards. So that being said: what if you were able to take that cable and plug it into a port on a networking box that used a contribution already given to the Open Compute Foundation? That contribution was actually called Group Hug, which is kind of funny, but we have this common slot architecture inside of Open Compute where today you can put any kind of micro or wimpy-core server right next to an x86 or anything else through PCIe. So on one board, you could literally have ARM or x86, Intel, AMD; it doesn't matter.
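The disaggregated-rack idea, tracking capacity instead of servers, can be sketched in a few lines. This is a minimal illustration, not any real orchestration API; the `Pool` class and its numbers are made up for the example:

```python
from dataclasses import dataclass

@dataclass
class Pool:
    """A hypothetical rack-level resource pool: capacity is tracked per
    resource type rather than per server, as disaggregated I/O allows."""
    cpu_cores: int
    ram_gb: int
    disk_tb: int

    def allocate(self, cores, ram, disk):
        """Carve a logical 'server' out of whatever capacity remains,
        regardless of which physical box it came from."""
        if cores > self.cpu_cores or ram > self.ram_gb or disk > self.disk_tb:
            raise ValueError("insufficient capacity in pool")
        self.cpu_cores -= cores
        self.ram_gb -= ram
        self.disk_tb -= disk
        return {"cores": cores, "ram_gb": ram, "disk_tb": disk}

rack = Pool(cpu_cores=512, ram_gb=4096, disk_tb=600)
vm = rack.allocate(cores=16, ram=128, disk=10)
print(rack.cpu_cores)  # 496: what's left is capacity, not a count of servers
```

The point of the sketch is the accounting model: once the interconnect makes location irrelevant, scheduling becomes a matter of subtracting from pools, which is exactly the orchestration problem mentioned below.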
What if that same type of contribution existed in the networking world, where you could literally, through PCIe, stick down an FPGA or an ASIC, maybe with some silicon co-processor, and be able to lay down any operating system from any switch manufacturer on Open Compute-compliant gear? That's a pretty powerful question. So, are we good so far? Good. Any questions? We're actually pretty much done. I just want to talk to you about the various projects inside of Open Compute, and then I want to leave about five minutes for Q&A. So: Open Compute and the projects. We have the motherboard project, which has had tons and tons of contributions; Rackspace is going to be contributing their motherboard. We've got the storage project, which governs not just the hardware layer but, further up from hardware, how storage can be deployed inside a data center. Facebook contributed Knox, which is that 2U, 21-inch, 30-drive, LSI-expander storage box; that was the first specification we had in storage. Rackspace is also working on a storage box that they're going to release to the general public. We've got disaggregated I/O, which we just talked about; that was a contribution by Intel. It's an interesting project: it started out as virtual I/O and ended up as disaggregated I/O, all to govern how computers talk to computers and how we can start thinking about our data centers as instantly upgradable, depending on where we plug in new capacity and where we can pull it from. So there are startups... somebody in this room is probably thinking of a startup in their head right now about orchestrating that, right? Because we're going to need that kind of ability, and it's not something people have traditionally thought about inside of data centers.
Yes, orchestration is important. Go bug Rick about Donabe. Okay. Orchestration is important, and disaggregated I/O will be there to serve that function. Obviously, data center design is very important. And ultimately, certification is important, right? We feel that certification is probably the most important project we run. I don't know why this is, but brands do matter. People want to know that they're consuming something that makes sense, right? If I asked you to go deploy Cole's version of OpenStack... maybe not; you probably shouldn't put that in your production environment. But you can trust a Nebula or a Cloudscaling or a Morphlabs or a Rackspace, and help me out if I'm forgetting somebody, guys: one of these companies that have spent lots and lots of time making sure that OpenStack is stable. Brands do matter. So we care about certification from a consumption standpoint, where you feel confident that you can take this technology, deploy this hardware, iterate on this hardware, and share this hardware. The four basic principles of open source that apply in software apply here too. And the beauty of it is that today this is all very much Apache 2-like. We use a different license, the OWF license, and the only reason is that it does a better job of protecting what's important in hardware land, which is more patent-related than copyright-related. Copyright is great at governing software; patents typically govern hardware, and the OWF does a better job of that. So we care about certification. We care about the Open Rack. We care about being able to solve for volcanoes or earthquakes or floods or whatever, and we want to enable anybody that has a specific problem to solve that specific problem in an open way. So we're just getting started, right?
We're just beginning this journey. We have a long way to go. We want anybody that cares about doing things out in the open, where they've got a big community to draw from and can build up momentum for a specific project. We want the same sort of community that exists in the OpenStack ecosystem to exist in our ecosystem, where people are constantly asking the question: what if? That being said, thank you very much. We'll take five minutes or so for Q&A and try to reclaim some of your time. Yes, great question. So the question was: who will do the certification? We're a non-profit, and we don't want to enable competitive differentiation through situational awareness. So we actually rely on... you've maybe seen on the Open Compute site, we hold a lot of engineering summits at universities. We're working very closely with a number of companies on spinning up OCP lab engineering efforts, where hopefully one day we trade college credit for working in a certification lab at a college. It's a great idea, and it's the very beginning of this. But to run a certification lab today, you need to be a non-profit, and you probably shouldn't be a manufacturer, and you probably shouldn't be a solution provider. So we have this ecosystem where all of the design, all of the community-based innovation can happen, but we also have solution providers. In hardware land, in the manufacturing world, you can go directly from L6 all the way to L10: fully racked, certified, shipped to you in containers, and ready to be powered on. And we do have solution providers that will do that for you and sell this stuff. We want to make sure there's not a competing, at least from the outside looking in, competitive landscape where somebody's being favored because they have a relationship with a manufacturer or a big company like Facebook. Any other questions? Yes. Not yet.
In fact, it's a conversation we want to have with networking companies, right? There are a lot of great things happening in the network space today. You've got the ONF, the Open Networking Foundation, and you've got OpenDaylight, which has a bunch of companies behind it and is run out of, or in conjunction with, the Linux Foundation. I think what you're seeing in the networking space is that these companies understand that manufacturing is hard, right? It's expensive, in fact. Andy Bechtolsheim is on our board of directors at the Open Compute Foundation, and one of my friends at Facebook just left Facebook to do an Open Compute-related startup; he's the vice chair of the incubation committee at the project, and Andy's actually the chairman and on the board. Andy, for those that don't know, started Sun Microsystems and founded Arista Networks. For one of these: anybody know what SUN stands for, from Sun Microsystems? There it is: Stanford University Network. And Andy's advice to this person was: don't start a hardware company. Hardware's hard, right? If it wasn't, it'd be called easyware. And I think networking companies get this. I heard just yesterday that one big switch company no longer has a fab team for silicon. They've got a design team that designs silicon, but no more fabrication team. And you've got companies like Broadcom and others over in APAC that are working on very cheap chipsets and operating systems that are almost as capable as what's out there today from the equivalent tier-one networking vendors.
One of the contributions the project has recently been given, in fact it hasn't even been voted on yet, so I don't know if it's going to become a project, is called ONIE, the Open Network Install Environment. It effectively brings all the goodness of something like iPXE over into the networking world, where, through that FPGA or that ASIC and a management plane, you could potentially lay down any networking OS on top of this gear. So we want to have those conversations, right? We think that all of these big networking companies are already thinking about things above layer one. I mean, you've got all of these SDN products, and that's all well above layer one, right? Software-defined networking is obviously not going to happen at the physical layer. Any other questions? Great, thank you all very much.
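The core idea behind an ONIE-style install environment is a prioritized discovery loop: walk a list of candidate sources for a network OS installer and run the first one found. Here is a hedged, self-contained sketch of that loop; the URLs, the `fetch` stub, and the function names are hypothetical placeholders, not ONIE's real interfaces or discovery order:

```python
def discover_installer(candidates):
    """Walk a prioritized list of installer sources and return the
    first image that responds (stubbed here as a simple lookup)."""
    for url in candidates:
        image = fetch(url)  # a real environment would try TFTP/HTTP/USB, etc.
        if image is not None:
            return image
    return None

def fetch(url):
    # Stub transport: pretend only the vendor server has an installer image.
    available = {"http://nos-vendor.example/installer.bin": b"\x7fNOS"}
    return available.get(url)

image = discover_installer([
    "http://dhcp-option.example/installer.bin",  # e.g. a URL learned via DHCP
    "http://nos-vendor.example/installer.bin",   # fallback source
])
print(image is not None)  # True: a network OS installer was found
```

The practical consequence is the one described above: the switch hardware stays generic, and whichever networking OS the operator points the loop at is what gets laid down.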