Alright, are we good in the back? Ready to go for audio? Alright, thank you everyone. I appreciate you making the time to spend a few minutes with me. If you're looking for happy hour, you're in the wrong session.

Great. I'm Steve Helvey, and I'm part of the team at the Open Compute Project Foundation, or OCP. When I come to these events, there are usually two things that happen. One is that I get very excited about the kind of information that's coming. Then, after I sit through some of it, I'm either excited about where I am, or I'm very nervous that I'm missing something and I'm not exactly where I want to be. So my goal today is a little bit of both: to share with you what some of these companies are doing around open source hardware. We've been spending all day today, and you'll spend all day tomorrow and the next day, talking about open source software; this is a bit about open source hardware.

So, how many people here have been in a data center? Christoph, John, Matt... wow, that's great. Wonderful. In a typical traditional data center, an enterprise data center or colo, the server-to-technician ratio is generally between, say, 2,000 and 3,000 servers per technician. And at hyperscale, with the large operators, anyone care to guess what that server-to-technician ratio is? It's 30,000 or more. I've heard Meta is running up to 40,000 nodes per technician, but to be conservative, I've heard between, say, 20,000 and 30,000. So how do they do it?
There are three things that Meta, or Facebook, does extremely well. One, they do their software very well. Two, they run open source hardware, which is what we're going to be talking about today. And three, they spend a lot of time optimizing their data centers and their data center design. I won't spend too much time on the data center design itself, but you can find a lot of information on the web about how they've optimized it.

Before I get started, there are two terms I want to set out clearly: one is OEM, and the other is ODM, or white box. An OEM is a company that takes products from a manufacturer, bundles them, puts them under its own brand name, and then sells them to the widest possible market. These are the big-name server brands you see in the market today. The other segment of the market is called ODM, or white box. These are the manufacturers, the companies that make the servers for the OEMs. They'll do full product lifecycle management and design, and their services are really around product support only. So you have the OEMs that take more of a managed-service approach, and the ODMs that are more product-centric in their support. Keep those two terms in mind as we move forward.

What is Open Compute? It is white box ODM commodity hardware plus open source. That is the genesis of the OCP. Right now we're running a little over 250 companies and 8,000 engineers working across multiple areas of the data center: compute, storage, networking devices, advanced cooling, immersion cooling. 8,000 hardware engineers across 30-plus projects, really solving common problems. And these are some of the larger ones. (Okay, all right, thank you. I was walking too far away.)
So these are really some of the more amazing hardware engineers in the industry: vendors working with consultants, hardware architects, even cloud players, all working together on these common problems, resulting in over 200 contributions now. A contribution could be a written specification, it could be a product based off of that specification, and it could be design guidelines. As an example, one thing that separates Open Compute from other foundations is that we do not take a paper specification into the foundation unless there is a product ready to go to market within around 120 days. That keeps us from being just a library or knowledge base of unused specifications; it makes sure that whatever comes through the foundation has product on the back side of it. And that's what keeps the pipeline moving.

This, of course, is just a snapshot, a partial list of our members. There are a few companies here you may recognize and some you may not. Inspur, as an example: if you've never heard of Inspur, they are the third largest server maker in the world. Wiwynn, down here, the very last one, is one of the two or three key suppliers for the hyperscalers when it comes to compute and storage.

So people are becoming more and more comfortable buying from white box or ODM vendors. Why is that? Well, a lot of it has to do with the software. Things are becoming more cloud-enabled, and it's easier to manage in the software layer, so you can commoditize the hardware layer and not worry so much about it. The best operating model within Open Compute is that I have one spec, and then I can go get multiple suppliers making me something very similar. So instead of having half of my estate with one OEM and half with another,
I can have one spec and then multiple suppliers making me something very, very similar. That's what the big hyperscale cloud players do, and that's what makes them extremely efficient: a homogeneous environment.

There are four tenets that embody everything we do at Open Compute: impact, efficiency, scale, and openness. Every project measures a contribution against these four areas. The measurement may vary by project, of course, but everything that comes through is based on these four areas.

Let me give you a couple of quick examples. This right here is a server bezel, the piece of plastic that goes across the front of a server. This is what we call gratuitous differentiation; there are literally hundreds of variations of this. So some people ask, well, what value does it have other than the brand name across the front of the server?

Here is an Open Compute server. We have this passion at OCP for simplicity: anything we do not need, get rid of it. This server has around 3 kg less embodied carbon than a normal server. That's a big deal here in Europe, and worldwide, when it comes to measuring your scope emissions. It also weighs a great deal less when you're moving it around the data center. And I'll call out these green touch points: this is a toolless design, so you can typically replace any part or component on that server, while it's in place, in about three to four minutes without using a tool. That's another reason why you can have one technician covering 20,000 to 30,000 nodes: very simplistic designs.

I'll give you one more example of efficiency. In a typical server, across the back,
you have eight 40-millimeter fans in a traditional OEM-style server. In these cubby servers over in OCP, the OU (OpenU) is taller, and it's two 80-millimeter fans. And there's something called the fan cube law: if I decrease the speed of my fan by half, I decrease the amount of energy it takes to spin that fan by around seven-eighths. That's another big deal. You can run the servers hotter, with better airflow, and you don't need to spend as much energy.

How does this translate? This is SK's test, which they did in a typical colo environment, so not in the best, most optimal data center environment. The bottom lines are OCP. Inlet temperatures run across the bottom, and power consumption is on the y-axis. You can see that at a workload of zero it's running around 50 percent more efficient, and even at 100 percent we're seeing between 19 and 20 percent. That's typically the range we see: between 10 and 30, up to 50 percent more efficient, depending on your environment.

This is probably the best 14 or 15 minutes on the web about what goes into a Facebook rack, so I put the link here. You can download it later, take a picture of it, or just type into Google or YouTube: "What's inside a Facebook data center rack?" It's a great example. They go through how the servers are configured, and they go through the power in the back. You never have to go around to the back of an OCP rack: there's one giant bus bar in the back that everything clips into, and all the cables on an OCP rack are front-facing, so you don't have to go work in that hot aisle. Again, another efficiency metric within OCP.

This is our hardware and software co-design strategy. So where does OCP play, and where do we stop?
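A quick aside on the fan cube law mentioned a moment ago. This is a minimal sketch of the arithmetic, not anything from OCP itself; the cubic power-versus-speed scaling is the standard fan affinity relationship, and the function name is illustrative:

```python
# Fan cube law: the power a fan draws scales with the cube of its speed.
def fan_power_ratio(speed_ratio: float) -> float:
    """Relative power draw for a fan running at `speed_ratio` of full speed."""
    return speed_ratio ** 3

# Halving fan speed leaves (1/2)^3 = 1/8 of the power draw ...
half_speed_power = fan_power_ratio(0.5)   # 0.125
# ... so roughly seven-eighths of the fan energy is saved.
energy_saved = 1 - half_speed_power       # 0.875

print(f"power at half speed: {half_speed_power:.3f} of full")
print(f"energy saved: {energy_saved:.3f} (about 7/8)")
```

That one-eighth figure is why larger, slower fans in a taller OU chassis can move the same air for a fraction of the energy.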
Within the projects, we rely a lot on alliances for the orchestration, the operating system, and so on, and then on partnerships for the hardware abstraction and device drivers. The core area within OCP is management, security, firmware, and of course the device itself.

So, some examples of what I mentioned around specifications translating into products. In the upper left-hand corner you have edge servers. The edge server spec itself was initially written by Nokia, one of the more conservative companies out there, which joined probably four or five years ago. The first year, they did absolutely nothing within OCP. Absolutely nothing; they sat and listened. The second year, they contributed one spec, and it was a tiny little seismic kit, a little bar that goes across the back. That's all they did. The third year, they started to think more and more, and now they are not only leading the edge project and the telco project, but they've made multiple server contributions. This is a company that is extremely conservative that now sees the value in open source. If you see Nokia's 5G announcements for anything open-edge related, it's all open source underneath.
It's their software on top and all open source hardware underneath it.

On the right-hand side is the Facebook server. Anybody here from Sardina Systems? Okay. Sardina Systems took advantage of the ability to certify and validate their software, their enterprise OpenStack software, on OCP, and that is being delivered here in Europe through one of our partners called Vespertec, out of the UK. So this company out of the UK is running open source hardware with open source software on top, and they've rebranded themselves. If any of you are here from a reseller: they're moving away from selling just traditional OEM boxes. Instead of being one of 30 OEM vendors in the country, they've now rebranded themselves as the open hardware vendor, and that gives them a unique value proposition in the market, positioning themselves around white box ODM open source hardware and open source software. And of course there are all the networking switches. Probably our biggest estate of devices is on the networking side, all open sourced through OCP.

If you want to know more about Open Compute, this was just a quick high-level overview; the two better sessions are Wednesday and Thursday. Right, John? Wednesday. So, Wednesday and Thursday. The person on the left, John Leung, is one of the smarter, longer-tenured guys in OCP, and he'll be speaking in depth about the management piece. Based on the feedback I've had from people moving from traditional hardware over to OCP, this is the biggest area, and the biggest obstacle: management. How do I manage this new type of hardware?
So I highly encourage you to catch John's session, and then Christoph Streit from ScaleUp Technologies. ScaleUp has over five data centers here in Germany running OCP hardware, and he will talk in detail about some of the things he's seen: some of the benefits, some of the hassles, some of the good, some of the bad. So again, OpenStack with OCP, Wednesday and Thursday.

Here is my contact information. Feel free to reach out if you have any questions at all. You can visit the OCP Marketplace, and you can become a member. It does not cost anything to participate in the projects or to listen in on a project. Only if you want to start making one of those contributions, writing those specifications, or branding your company is there a membership requirement. Just to participate, everything is wide open: you can download any of the material within OCP, and of course we have tons of information on YouTube as well; all of our summits are broadcast and recorded. Join a project, collaborate, and contribute.

And a final thought: open software deserves open hardware. If you're already doing open source software and seeing all the benefits for your customers, it makes sense to add open source hardware underneath it; you can additionally get those benefits and operational efficiencies at scale. And it doesn't have to be a brand new data center. We have colocation facilities running just one or two racks that are getting the same type of benefits. Maybe not at quite the same scale, and a lot of it does depend on the capex; maybe I can't buy certain components as cheaply as some of the hyperscalers can, but I can still get those operational efficiency metrics even with a smaller deployment, say one, two, three, five racks a quarter, five racks a year, et cetera. So that is it, and I'm under my 20 minutes.
I'm going to open it up for questions if you have anything. Yeah? Yeah, there are two areas around the rack piece. One is what the hyperscalers, like Meta, are using now. That rack is slightly different: it is the exact same footprint as a traditional 19-inch rack, but what Facebook has done on the inside is turn the rails, which gives them 21 inches on the inside. Again, same footprint. Turning the rails on the inside allows you to fit three of those servers across, and when you're doing hard drives it allows for an extra hard drive, giving you twenty-five percent more density. There's a power bus bar in the back, usually with one or two power zones, and you can just clip in, push in, hot-plug the appliances in and out. As I mentioned, all the cabling is on the front, so you don't go around to the back of an OCP rack.

The second type of rack is a traditional 19-inch. So beyond the one server I showed you, we do have regular, what they call pizza box, servers that have been open sourced. They've met those requirements, they've written the specification, they meet the efficiency standards we've set, and those are just traditional 19-inch form factors that can then go in a traditional rack, with the same power. So it doesn't necessarily have to be the server I just opened up here. We also have shorter form factors, like that edge server I mentioned; those things can fit on utility poles. Some ruggedized boxes go in, say, airports or along railways. And then we also have immersion tanks that have been validated as well. People are starting to do more advanced cooling: rear-door heat exchangers, immersion. The biggest thing we see now, aside from the gear that's inside the rack, so taking that out of consideration, is that the fastest growing project within Open Compute is advanced cooling, as the racks get hotter and denser. I think you can go up to around 15 kW before you start running out of air-cooling capability. How am I going to cool it?
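To make the density arithmetic from the rack answer concrete, here is a minimal sketch. The per-row drive counts are illustrative assumptions, not OCP spec values; they're chosen only so the gain matches the twenty-five percent figure mentioned above:

```python
# Same 19-inch external footprint; turning the rails yields ~21 inches
# of usable interior width. Drive counts below are assumed for
# illustration: if one row holds 4 drives in the traditional interior,
# the wider OCP interior fits a 5th drive in the same row.
drives_per_row_traditional = 4
drives_per_row_open_rack = drives_per_row_traditional + 1

density_gain = (drives_per_row_open_rack
                - drives_per_row_traditional) / drives_per_row_traditional
print(f"extra density: {density_gain:.0%}")  # 25%, the figure from the talk
```

The same ratio explains the server case too: three motherboards across instead of two is a fifty percent gain in that dimension, all within the same external footprint.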
What am I going to do with the heat? Some countries will not let you put up a data center unless you have a heat reuse plan. Singapore has relaxed their moratorium, but they're trying to figure out a tropical data center environment and what to do with the heat. One of my colleagues said it best: the question of the past ten years was, am I going to use renewables? Everybody's using renewables now in their data center; everybody's green. The question of the next ten years is, what am I going to do with that heat? So we have people growing things, agriculture of course. Here in Europe you're lucky, because you can plug into district heating systems; but in America and Asia, what am I going to do with this heat, and how am I going to handle it? So immersion, advanced cooling, and heat reuse are the big things we're working on now, and it all impacts the software layer. Thanks for the question. Anything else?

Great, so the question was: if we have two members that want to contribute something very similar, how do we choose which one? Traditionally, the way things materialize within Open Compute is that somebody will propose that they want to work on a common problem, say, "Hey, I have this edge server idea." Most of the time we get collaboration, so they can write a joint development agreement where those companies work together within the governance of OCP and then release everything out to the open community. If the documents are exactly the same, we'll encourage them to work together. If there are slight variations, we will take both specs, but most of the time we will encourage them to align on one spec. We've had a few people start to say, "Well, my server is slightly different," and we'll say, "Can you just align to this?" This matters especially on the rack side. It goes back a few years.
We started to get a little out of line with the number of rack configurations people wanted, so we aligned on just one, two, or three different types of rack configurations. John, did I miss anything on that? Is that the way you would say it? It's probably our largest contribution as far as the number of companies working on one thing: 16, and Intel primarily drove and led that. Yep. Anything else? All right. Well, I hope you got something out of this. Be thinking about open source hardware when you're doing your open source software, and attend John's session and Christoph's session if you really want to know how this stuff works. All right, anything else? Thank you so much for the time.