Should we start? We're at 1:50, so good to start. Well, thank you everybody for coming. First, you're seeing one speaker up here rather than the original three that were on the agenda. All three have had different combinations of personal and flight issues. Min from Facebook wound up stuck in Hawaii, which seems like a very good problem to have. Rob and Sunid on our side are stuck in California and out with a stomach bug, respectively. So I'm going to be the stand-in for the day. My name is Kyle Forster. I'm the founder of Big Switch, and I was also one of the handful of people involved in this particular initiative that we're going to talk about today.

First of all, how many people here have heard of Open Compute hardware? And how many know more than the one-paragraph version — could probably write a page about what it is? All right, so about half-ish. I'm just trying to gauge where it would be most interesting to spend time. Feel free to ask questions anytime; please ask from the mics, because the whole thing is being videotaped.

If you don't mind indulging me, I'll give a little background on Big Switch so you see where we're coming from. But most of this is really us as a user — our experience and our journey setting up a small OpenStack cloud, with an explicit goal: can we use the exact same hardware that's in production at Facebook for a small-scale OpenStack cloud in our office? The implications go far beyond that, but it was an interesting, limited-time, sprint-type goal, and I'm going to talk about what we found.

Let me start with a little about Big Switch so you see where we're coming from. Our company very specifically takes network designs that are in production at Google, Microsoft, Facebook, and Amazon, and we package those designs for enterprise, service provider, and government use. In some cases it's even the exact same switch hardware that's running at Google or Facebook, and we write the software on top — that's our primary business. In some cases the adaptation is more significant: our software runs on top of Dell switches, and we've changed a lot of the management interfaces to make it relevant for data centers that are operated more traditionally. If you've never heard of us before, that's normal. I'm very, very proud that the company has been on a tear since we launched our first bare-metal products in 2014 — no need to go too far into it, but it's a really fun time for the company. Our big news here at the show this week is that we just announced, jointly with Verizon, that for their very large-scale OpenStack NFV pod they've selected Big Switch switching. For us this is a wonderful validation of the work that we're doing. We've been part of a number of very high-scale OpenStack clouds over the last 12 months.

Zooming back from the company to this talk specifically: we do a lot of work with the Open Compute Project. Right now we're actually the single largest contributor of software back to the Open Compute Project, in the form of an open-source version of our switch OS called Open Network Linux. FBOSS, Facebook's switching operating system, has now been ported over to Open Network Linux. Microsoft's SONiC has a roadmap to port over to Open Network Linux.
So this particular part of our business is working with these hyperscale operators who write their own switching software and run it on top of our open-source Open Network Linux layer. It basically means we end up spending a ton of time with the Open Compute Project. As a result we've gotten some wonderful access to great hardware, and we've certainly spent a ton of time with the Facebook Wedge — the Wedge 40 and the Wedge 100. This project was a natural outgrowth of that work.

So here's the rack. These are shots — we'll come back to a few of them — of an actual OCP rack, mocked up with the exact same hardware that's in production at Facebook, running in our labs, and of our journey getting the thing up and running. Now, first let me ask: a lot of people have heard of OCP hardware, but how many people have actually touched it? Good, that's great — that's actually a lot more than I expected, about a quarter. So some of this you'll probably find pretty remedial, but hopefully some of it's interesting; the next bits are more for people who haven't had the chance to touch it. I'd read a lot about it, but this was my first time actually touching the hardware itself.

The OCP rack is 21 inches wide, and the size matters. For anybody who hasn't dealt with it — why did they choose this rack size? For us it's actually really painful: the rack has to sit separately from the rest of our rows, and it causes all kinds of headaches with our ops guys about getting the thing in. But the width becomes really important because — if you've ever played with twin servers or micro-chassis — this is a really clean layout: three servers can sit side by side per 2U in this rack very, very comfortably. And probably just as important, with a little play in the motherboard design, you can fit 60 disks per 2U. Strictly speaking, in OCP terms it's an OU, a slightly different size, but that kind of disk layout gives you really intense density in a 21-inch rack, whereas in a 19-inch rack you're losing a fair number of inches. So I think there's a really good argument that these 21-inch racks are an awesome innovation. The downside, for those of us with traditional floor and power layouts, is that it's a real headache to truck it in. But there are cases where this innovation should be considered by groups at large, well beyond the football-field-size data centers. So that's 21 versus 19 inches.

There are two really important innovations that make this thing really nice. First, everything that a human can touch is green. That might not seem like a big deal, especially if you're more of a software person, but if you're a hardware person who winds up racking and stacking stuff fairly often, it's a really, really nice thing. Second, everything is designed to be field-replaceable, and very easily — you don't need a screwdriver, you don't need a drill.
That winds up being a big deal when you're monkeying around in a lab, and certainly a big deal at scale, but even at just one or two racks everybody on our side noticed it — everybody noticed how much easier it was to move stuff around in the rack.

Now, part of the innovation here: they claim a 30% improvement in power efficiency. I have no comment on that other than as a claim, but the power distribution itself is DC, delivered on exposed rails inside the rack, and power comes in through vampire-style taps from the compute and storage nodes. On the one hand you get this really nice benefit: you save a ton of power because you don't go through AC-to-DC conversion and you don't go through DC-to-DC step-down. That alone is a huge benefit. On the other hand — don't touch the rack.

The debug dongle is actually really awesome. Has anybody here ever held an OCP debug dongle? I see some of you nodding along — I'm curious to get your take, and I'll give you mine, on the first time you fooled around with it. This thing is a great idea. And again, like a lot of the design decisions they made, I'm trying to show that these are smart ideas, but they came with trade-offs. The goal was basically 100% IPMI — no separate serial access. If you've ever fussed around trying to find the right serial cable, you know that doing everything over IPMI is one of those things that looks great on paper, and then practically, when you're monkeying around in a lab and moving stuff around, it's a complete pain in the neck. The idea of the debug dongle is that you can get very quick LED readouts — really simple, basic power-check stuff — in and out of the different nodes in the rack. And it's the exact same debug dongle across Wedge, the switches; Leopard, the servers; and Knox, the storage. So you can literally walk around with one in your hand and it plugs into all of them, at least along the standard Facebook path through the OCP stuff. It turns out to be a great idea, because frankly, about every 45 minutes or so during our first two days with the rack, we were plugging it in and just looking at the LED colors coming out. I don't know if you've had the same experience — the numbering helps a ton. And the neat thing is it has USB access in and out, so you're just playing with USB cables instead of fussing around with terminal cables, which is really, really nice.

So again, let me come back to it: it's one of these neat trade-offs. It looks really good on paper to have all the access over IPMI, but it creates all these basic in-the-lab headaches — and I can only imagine that at 1,000 racks it's 1,000 times the headache. Even at 10 racks, this stuff is a headache. The debug dongle is the way they address that headache, and it's very elegant. So far, I hope what you've seen is that there's a series of design trade-offs they had to make. Less power lost in conversion? That means exposed DC rails — parts you really, really can't touch — while everything a human can touch is green. Everything over IPMI? That means the dongle.
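As a rough illustration of that all-IPMI workflow, here is a minimal sketch that polls each node's BMC with ipmitool for the same quick "is it powered, is it healthy" check the dongle gives you. The node names, addresses, and credentials are placeholders for illustration only, not anything from the Facebook or Big Switch setup; it assumes the BMCs are reachable over the management network and ipmitool is installed.

```python
#!/usr/bin/env python3
"""Minimal sketch: poll OCP node BMCs over IPMI instead of walking a serial
cable around the rack. Hostnames and credentials are made up for illustration."""
import subprocess

# Hypothetical BMC addresses for a few compute/storage nodes in the rack.
NODES = {
    "compute-01": "10.10.0.11",
    "compute-02": "10.10.0.12",
    "storage-01": "10.10.0.21",
}

def ipmi(host, *args):
    """Run one ipmitool command against a BMC over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host,
           "-U", "admin", "-P", "changeme", *args]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout.strip()

if __name__ == "__main__":
    for name, host in NODES.items():
        # Roughly the quick power check you'd otherwise do with the dongle LEDs.
        print(name, ipmi(host, "chassis", "power", "status"))
    # For an interactive console on one node, serial-over-LAN replaces the
    # serial cable entirely:
    #   ipmitool -I lanplus -H 10.10.0.11 -U admin -P changeme sol activate
```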
Now, here's one design trade-off that, specifically for OpenStack, I think is a really, really big deal. On Leopard, the servers have only one Ethernet port. There's a lot of control and data traffic that has to go in and out of an OpenStack compute node — at least in our reference topology, we literally have seven separate logical networks, control networks, going in and out of a compute node for a full undercloud and overcloud. Running all of these on different VLANs in and out of the same Ethernet port seems possible, but the trouble you wind up having is startup conditions: you have to make sure the VLAN tagging you use for first boot lines up with the VLAN tagging you're going to use for the separate OpenStack API control networks, and which VLANs you set up first winds up being incredibly fraught. The ordering has to be exactly right, or the whole thing falls apart really fast. I'll come back to a rough sketch of what that multiplexing looks like in a minute. So there's that very practical headache, and then there's the obvious one: with a single Ethernet port you have a real single point of failure. We don't need to go into that — it's obvious to everybody in the room. But for OpenStack use, the workflow around getting the undercloud up and running and then the overcloud going — this was the single biggest non-obvious headache we ran into. Now, newer versions of the OCP hardware have options for a mezzanine card with two ports, and that's kind of a game changer. It's not the exact stuff that's running in prod, but it makes a huge difference — I think two ports is plenty for the kind of work we all need to do together.

So we were able to wrestle through a lot of the first-boot stuff, and we came to a happy ending: we currently have a nice little dev/test-sized OpenStack cloud up and running on the exact same hardware Facebook is using in prod. That was a big point of celebration for us.

Now, some of the fine print — the stuff that scares us about using this in enterprise or service-provider settings. The single Ethernet port is probably the biggest, scariest thing. Then there are some other bits and pieces. We had to upstream a patch for a disk-estimation bug. UEFI boot mode wasn't supported by the Fuel installer we were using, so we had to boot in legacy mode. And there was definitely some IPMI wrestling: we tried the vendor IPMI, spent a lot of time fighting with it, and finally got it to work. OpenBMC, which is more the OCP style of doing this, is, we think, a cleaner path, but we timed out before being able to use it — that would solve some of the headaches. So it works, but with some lint on it.

Let me finish up with this idea of where to go from here. First of all, I was amazed and impressed that a quarter of the people here have actually touched OCP hardware. That's a lot more than I was expecting, and that's a great thing. If it's specifically the exact version the Facebook team has in prod, I'd be really interested to hear from you.
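Here's that rough sketch of the single-port multiplexing mentioned above: carving 802.1Q subinterfaces out of the one Leopard NIC for a few of the logical OpenStack networks. The interface name, VLAN IDs, and network names are hypothetical; the real set and the ordering depend on your deployment tool, and PXE/first boot typically has to run untagged, which is exactly where the ordering trap lives.

```python
#!/usr/bin/env python3
"""Minimal sketch: multiplex several OpenStack control networks onto the single
Leopard NIC with 802.1Q subinterfaces. VLAN IDs and names are made up."""
import subprocess

NIC = "eth0"  # the one and only port on the node

# Hypothetical VLAN plan. PXE / first boot is deliberately not here: it usually
# has to run untagged (native VLAN), and bringing the tagged networks up in the
# wrong order relative to it is the startup headache described above.
VLANS = {
    "mgmt": 110,      # OpenStack API / management network
    "storage": 120,   # storage traffic
    "tenant": 130,    # overlay / tenant networks
}

def sh(*cmd):
    """Run one command, failing loudly if it errors (needs root)."""
    subprocess.run(cmd, check=True)

for name, vid in VLANS.items():
    iface = f"{NIC}.{vid}"
    sh("ip", "link", "add", "link", NIC, "name", iface, "type", "vlan", "id", str(vid))
    sh("ip", "link", "set", iface, "up")
    print(f"{name}: {iface} (VLAN {vid})")
```

With a two-port mezzanine card, the same plan gets a lot less fragile, since provisioning traffic and the tagged control networks no longer have to share one link.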
But let me end on this idea: for everybody else, what path do you want to take forward with this?

There's one path that says: look for special use cases. This stuff today is actually pretty damn good. The thing that jumps out at you is the density — in half a rack we're running a massive amount of compute and storage. So if what you're looking for is super high density, and you're willing to live with the other trade-offs, especially the single point of failure on the Ethernet link and some of the headache of getting the thing up and running, this might be great today. We have one environment like that on our side — one of our engineering continuous-integration environments. It's a CI environment, so the density of compute and storage matters a lot, resiliency doesn't really matter much at all, and since we've already done it once, we don't mind paying the setup headache. For that really specific case, this is useful today. But as a general-purpose OpenStack cloud for all the different workloads we have, running both in our engineering labs and in our public-facing stuff? We're not moving our general-purpose OpenStack cloud over to it. So this would be a very, very specialist use on our end.

Then there's the pragmatist argument that says: instead of using the exact stuff Facebook has in production, use minor variants. The 21-inch versus 19-inch rack was, at least for us, the biggest decision. There are 19-inch versions where you sacrifice quite a bit of density, but they're pretty close to the Facebook prod parts, and there are certainly OCP designs for them. Whether or not you want to use the exposed power rails is up to you. But if you're going to take a pragmatic approach to building a general-purpose OpenStack cloud and you want to use OCP hardware, I'd really urge you to think strongly about OCP-ish hardware — the enterprise-adapted versions of this stuff — if you want to get going today.

And the last path is the idealist approach. It's been our experience — and Bob, I saw you walking in, I'll be curious to hear from you — that the OCP community is incredibly easy to work with. Very, very flexible. The community is small enough at this point that it's very easy to get things on the roadmap. So if you instead want to take the idealist path and say, I want to be part of shaping this community to make it more relevant for general-purpose OpenStack clouds — a lot of that work is already happening, but there are always going to be edges where you can say, here's a particular OpenStack cloud that we'd like to get onto the hardware roadmap. If you want to take that path — let's modify and mold OCP for OpenStack — please talk to me afterward. This is an area we care about: as a big software contributor in the form of Open Network Linux, we're really interested in seeing OCP grow, succeed, and expand beyond the couple of use cases and verticals where it's being used today.
And if there's anything I or my team can personally do to help get you plugged into the right people, this is something we feel strongly about as part of our company's mission, and we'd love to help out where we can. I'll be downstairs at the Big Switch booth after this. So thank you, everybody — let me end there and take any questions you have. Thanks a lot.