Okay, so let's get started. Welcome to our presentation today. My name is Christoph, I'm the CEO and co-founder of Scale-Up, and with me I have Frank, our COO at Scale-Up, responsible for all operations in our data centers. Also part of the presentation is John Leong of Intel, who will focus a bit more on Redfish. So it will be a combined presentation, focusing quite a bit on the Open Compute Project, and from our side a bit more on what we at Scale-Up did to build a more sustainable OpenStack cloud using OCP platforms.

A quick background on who we are: we are a managed hosting company and colocation provider based in Hamburg and here in Berlin. We have seven data center sites in Germany, and we operate OpenStack-based clouds for an infrastructure-as-a-service offering. On top of our OpenStack infrastructure there is also a managed Kubernetes platform. We are a small team, fewer than 20 people at this point, and we are fully focused on open source. For a couple of years now we have also been focusing more and more on becoming sustainable, and that applies to everything we do in our company. It's not only about energy efficiency and things like that; we are trying to reduce our carbon footprint in everything that we do. And that is what led to the project we will be talking about today.

So, why build a cloud with open hardware? To be quite honest, I didn't really know what open hardware was two years ago. In essence, a good friend approached me and asked whether I would be interested in acquiring some Facebook servers. And I thought: Facebook servers? I'm sure they must use some servers, but I wasn't aware that there is such a thing as a typical Facebook server. It certainly got my attention, and after some searching on Google I found that it might actually be quite interesting for us. So we went ahead and first ordered a small testing setup, just a couple of servers, to get a feel for it. During that research I also found a potentially very interesting aspect of those OCP servers: besides many other benefits, there was a lot of talk about them being much more energy efficient than a traditional 19-inch server. Well, I'm the CEO, so I can do anything I like: I just went ahead and ordered two full racks of those servers and figured, let's give it a try. Then I told Frank, the COO, that I had just ordered two full racks of servers for a lot of money, and we went ahead and tried them out.

What's also important here: we didn't buy new OCP gear, these were refurbished servers. They actually came from some Facebook data center, I don't know where exactly, but they came from Facebook. And that really helps us reduce our scope 3 emissions. For those of you who don't know those categories, there are scope 1, 2, and 3 emissions. Scope 1 is essentially your direct emissions, like the fuel you burn when you drive your car; scope 2 is the energy you buy, like the electricity that goes into running your data center; and scope 3 covers the emissions that went into actually building the server. By using a server that was previously used, we drastically reduce our scope 3 emissions. So that's a very interesting aspect of using refurbished OCP servers.
I think John had a presentation yesterday... well no, it was actually Steve from the Open Compute Foundation who gave a presentation on Tuesday where he briefly covered some aspects of open hardware and OCP, so I won't go into too much detail unless you have questions later on. One of the main claimed benefits is the energy efficiency of OCP. What you see on the right-hand side are graphs and statistics from SK Telecom, a telecom provider based in Korea. They ran a test in a real colocation data center: they took a traditional 19-inch server and an OCP server and compared how much more efficiently the OCP server runs. You see two different workloads, the upper one being the servers doing nothing, sitting idle, and there the lower line is the OCP server: at idle you actually get up to 50% better efficiency with those servers. If you run them at 100% load, the advantage goes down to about 19%, and that is in the range I found elsewhere as well; 15 to 20% seems to be realistic.

We then ran those tests ourselves. We took one of our regular Dell servers (we use Dell servers all over the place) with the exact same configuration as the OCP server: same CPUs, same amount of memory, same disks, everything the same. We ran some synthetic benchmarks on both machines, and our results basically matched what SK Telecom measured. Being based in Germany, saving 15 to 20% on the energy bill is interesting; nowadays it's probably interesting for anyone in Europe, right? It helps us first of all to save cost, but also to be more sustainable. We did some additional tests after that: the one data center where we're currently running this has no cold-aisle containment yet, so we tested cold-aisle containment for those OCP racks. The essence of those tests is that cold-aisle containment does not bring a large benefit for OCP, and the reason is that Open Compute servers and Open Compute hardware can be run at much higher temperatures to begin with, so keeping your cold air contained in the cold aisle doesn't gain you much.

Some notes on the basic setup and the types of nodes we are using. We run OpenStack and Ceph on it: we have OpenStack servers, all with a 25-gigabit Ethernet backend. The network stack is also fully OCP; we're using the Facebook Wedge switches, 100G switches, in a spine-leaf architecture. For the Ceph nodes we're also using OCP servers, all-flash, so only NVMe drives in those servers, just to give you an overview. And that's a picture of one of those OCP servers.

Certainly, running OCP is nothing special for someone like Facebook or the big hyperscalers out there, but for a smaller provider in a traditional data center it does pose challenges. One of the first issues we discovered, besides the fact that you don't get the servers packaged the way you're used to (you don't get one package per server; you basically get a full rack, delivered ready with all servers in it), was that OCP racks expect to get their power from the top of the rack. We have a traditional data center with a raised floor, and power runs beneath the raised floor, so that's not ideal.
So we went on Amazon and ordered some extra cables to get the power up to the top of the rack. Another thing we discovered, at least with the OCP server generation we used, was that the baseboard management controller was very basic. I told you that we use Dell servers, so we were used to the full iDRAC experience, and this very raw and basic BMC was not what we were expecting. In our traditional setups we have dedicated IPMI networks, so there is a dedicated interface doing nothing but IPMI. That is certainly possible on OCP too, but we learned the hard way that there are slight variations in how each manufacturer builds out those OCP servers, and even though the design allows for an IPMI interface, that does not mean your server will actually have it. Ours didn't, so we basically had to put the management traffic onto the existing network stack somehow.

Another challenge we faced: now that you don't have any traditional servers anymore, just a full rack of servers that don't even have space for a name tag, you also have to rethink your deployment methods and how you actually go about installing OpenStack, Ceph, and the like. We had to decide how to do that, and Frank will tell you a bit more about it.

I think we're running out of time, so let me put it like this: we have been using OpenStack for about ten years now. We started with Grizzly, we started with a Swift installation, and shortly after that we had our first complete cloud, so we're not new to this business. I'm pretty old, so I used Puppet a lot, but that didn't seem to be the right choice for a new cloud, so it was clear pretty soon that we would use Ansible. Luckily we have a new colleague who is really smart with Ansible, which was just very lucky for us. I had worked with Kolla before; we had an edge cloud set up with Kolla for a Vietnamese customer some years ago. But we are a small company with a certain portfolio, and we customize our cloud business very much towards our customers. Customization with containers means you need perfectly working CI/CD in your company, so you can build new containers and put them into production as fast as you need them, and that is an extra level of complexity we wanted to avoid. So we decided against Kolla this time and went with OpenStack-Ansible.

But there we ran into the next problem: OpenStack-Ansible has hard-coded group names, always keystone_-something or neutron_-something, so you cannot use one inventory for multiple clouds. You end up with a single keystone_all group containing the Keystone controller instances of all your clouds, so you either have to use multiple instances of the inventory, or you have to change those _all groups, the keystone or neutron or whatever group. We were in contact with the OpenStack-Ansible developers, and they had some good arguments against changing this; we are still in contact and will keep in contact. What we finally did, with heavy usage of the original OpenStack-Ansible roles, was build a layer on top, so that we can use one inventory for all our clouds.
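To make the group collision concrete, here is a toy sketch in Python (hypothetical, not our actual tooling): it takes the dynamic inventory of each cloud and prefixes every group with the cloud's name, so the two keystone_all groups no longer collide. A real layer still has to map the prefixed groups back to the unprefixed names the original roles expect for each run, and merge the hostvars as well.

```python
# Toy sketch of the "layer on top" idea: namespace the hard-coded
# OpenStack-Ansible group names (keystone_all, neutron_all, ...) per
# cloud so several clouds can share one combined inventory.
# All names here are hypothetical.
import json

def namespace_inventory(inventory: dict, cloud: str) -> dict:
    """Prefix every group name with the cloud name,
    e.g. keystone_all -> hamburg_keystone_all."""
    result = {"_meta": inventory.get("_meta", {"hostvars": {}})}
    for group, data in inventory.items():
        if group == "_meta":
            continue
        prefixed = dict(data)
        # Child groups must be renamed consistently as well.
        if "children" in data:
            prefixed["children"] = [f"{cloud}_{c}" for c in data["children"]]
        result[f"{cloud}_{group}"] = prefixed
    return result

# Two clouds, each with the same hard-coded keystone_all group:
hamburg = {"keystone_all": {"hosts": ["hh-ctl1", "hh-ctl2"]}}
berlin = {"keystone_all": {"hosts": ["ber-ctl1"]}}

merged = {**namespace_inventory(hamburg, "hamburg"),
          **namespace_inventory(berlin, "berlin")}
print(json.dumps(sorted(g for g in merged if g != "_meta")))
# -> ["berlin_keystone_all", "hamburg_keystone_all"]
```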
And so we succeeded in building our cloud, and we're now able to just reproduce this at the other locations of our company. You have a description of this on that slide. I'll be around to answer any technical questions afterwards; I think there's no use going deeper into detail right now.

Okay, so to sum it up and give some more time to John to talk about Redfish, which we would have loved to use if our servers supported it: we believe that using OCP hardware is interesting for smaller operators too. For sure, if you're a very large, hyperscaler-like operator, OCP is probably the way to go anyway, but I think we have shown that even for a smaller operator, a smaller company, it is possible to adopt OCP. There are still a few challenges here and there; I had a conversation just last week with the Foundation, and there are certainly a few things that could be improved for small and medium-sized enterprises, to make it easier for them to use OCP as well. But it can be used, and I think going down this road, finally adding open hardware to the open software that everyone is already using, is certainly the way to go. For us it also made it possible to become much more sustainable in running this infrastructure. I can only highly encourage anyone to take a look at OCP, and don't be afraid of the first challenges: once it's running, it's running, and it's still the same server inside. With that, I'm going to say thank you and hand over to John.

Page down if I need to go back... So I'm going to give a quick discussion of OCP platforms. One of the challenges of integrating OCP hardware into a deployment is manageability, and in my discussions with Chris: with the old generation of OCP hardware it won't work for you, but this is the way things are going, and in further discussions I'll figure out a way of getting newer OCP hardware to rerun these tests on the latest platforms. I represent Intel at OCP, and I am the representative to the Hardware Management project.

The first thing about OCP, for those who weren't at the presentation two days ago: what exactly is the Open Compute Project? Basically, it was founded on envy and jealousy of open source. They were looking at the innovation of the open source community, at being able to have things out in the open, built and improved, and they asked: how do we bring this kind of community innovation to hardware? What you do is push hardware designs out into the open, let people take a hardware design, improve it, and contribute it back to that hardware community. That's what OCP does, and it's out of that kind of innovation that you get platforms with reduced power consumption, because that's the direction the community wanted to take the platforms.

Within the Hardware Management project, I came in as someone who brought in Redfish. Redfish is a protocol definition, and I told OCP: you really have a whole lot of platforms; you have HPC platforms, you have storage platforms, you have network platforms, you have server platforms.
Your problem is going to be that you cannot have a different manageability interface for each one of them. You should have a single one, and everyone who uses that interface should be able to get some base set of manageability functionality; and above that, a particular platform should be able to do additional things. So right now there is a baseline profile, which defines exactly what capabilities need to be available across OCP platforms, everything from low-level inventory, to setting IP addresses, to resetting the system. And then if you're a server there is additional stuff you can do: you can look at the processors, look at the memory, and so on. So in the process of rolling this out, the baseline profile exists, and the server profile would exist on top of it.

The way the DMTF works is that innovation occurs below the interface; the DMTF doesn't care how Redfish is implemented. It can be implemented on a BMC, which is always powered on, so as soon as you plug in a baseboard, the BMC can come up, start listening, and help you provision the system. Redfish can also be implemented as a software agent running on top of the OS. It really doesn't matter to the DMTF; Redfish is just a RESTful interface for accessing manageability.

Second, OCP also started doing open hardware for management itself: they took the BMC, isolated it onto a module, and put it on a connector. The first version was RunBMC; the second version is DC-SCM. So as people design motherboards, they no longer have to redesign the BMC and redo its routing and placement every time; they have the connector, plug in the module, and are done with it.

And lastly there's the implementation: now you have a piece of hardware with a BMC on it, where do you get the software or firmware to run on it? There's OpenBMC, which is a Linux Foundation project, and the way these are built now, the RunBMC and DC-SCM modules all carry an ASPEED AST chip, so you can download OpenBMC and run a build on there. The DMTF works with RunBMC and OpenBMC to support the Redfish interface, a kind of three-way alliance: the DMTF defines the interface, OCP prescribes the subset of functionality you have to have, and OpenBMC implements it and runs the conformance tests to make sure that functionality exists.

The DMTF has had an alliance partnership with OpenStack since at least seven years ago, and as part of that, one thing we talked about, I think at the Redfish deep dive, was sushy, which is the layer underneath Ironic that talks to the BMC. That's been ongoing for several years now; the DMTF actually has a dedicated set of contractors to improve sushy and expand its functionality. So as we add functionality to Redfish, if we see that there's a place to insert it within the Ironic code base, we can go do that. The way you read this slide is that the left-hand side is exactly what's in the Ironic code base, what it does, and how much of it actually utilizes sushy and Redfish to get to that functionality in the platform. And that's all open source.

So, in summary: Redfish has taken manageability and reduced it to a flexible interface. All you need is an HTTP URI and an understanding of the content that's returned. The content is returned in JSON; we describe it in JSON Schema, and we also describe it in OpenAPI, so it connects up to those particular tool chains in case you want to auto-generate clients to run against the interface. The way you do an access is that you just wander down these bubble trees, which exist throughout the documentation: you go from the service root, which is /redfish/v1, to Systems, to the first system you have. By looking at the diagram you can see that if I want to get to the first processor, I can walk down to it, and it returns a JSON packet, a set of name-value pairs, and those name-value pairs are all described within the schema itself.
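As a rough illustration of that walk (a sketch, not from the talk; the BMC address and credentials are placeholders), following those links over plain HTTP can look like this:

```python
# Minimal sketch of walking a Redfish resource tree over plain HTTP.
# BMC address and credentials are placeholders; real BMCs typically
# use HTTPS with a self-signed certificate, hence verify=False here.
import requests

BMC = "https://bmc.example.com"   # hypothetical BMC address
AUTH = ("admin", "password")      # hypothetical credentials

def get(path):
    # Every Redfish resource is just a JSON document behind a URI.
    resp = requests.get(BMC + path, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

root = get("/redfish/v1")                          # service root
systems = get(root["Systems"]["@odata.id"])        # Systems collection
system = get(systems["Members"][0]["@odata.id"])   # first system
procs = get(system["Processors"]["@odata.id"])     # Processors collection
cpu = get(procs["Members"][0]["@odata.id"])        # first processor

# A set of name/value pairs, described by the published schema:
print(cpu.get("Model"), cpu.get("TotalCores"))
```

Each hop simply follows an @odata.id link from the previous JSON document, which is why nothing beyond the service root has to be hard-coded.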
There was a Redfish deep dive yesterday; sorry if you missed it. However, you can still engage with Redfish: there's a public forum where people ask questions, so whether you're a newbie or a professional who has found problems in it, you can engage the DMTF. We go through the forum on at least a weekly basis, so someone is actually sitting there and is usually fairly quick to respond. And then there's a developer hub where you can wander through mockups of what the Redfish tree looks like for various kinds of systems. And I think that's it, yeah. Questions?

Okay, thank you for the question. I'd have to bring up the guy who actually does the implementation of that; basically it uses another method other than Redfish to get to that functionality. So I believe the functionality exists in Ironic; it just needs to get down to Redfish to get that information. Anything else?
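That Ironic path goes through sushy, the client library mentioned earlier. As a minimal sketch of reading the same kind of information through it (endpoint and credentials are placeholders, assuming a Redfish-capable BMC):

```python
# Minimal sketch using sushy, the Redfish client library that Ironic
# builds on. Endpoint and credentials are placeholders.
import sushy

root = sushy.Sushy("https://bmc.example.com/redfish/v1",
                   username="admin", password="password",
                   verify=False)

# Enumerate the systems the BMC exposes and read some basic facts;
# the same data you would reach by walking the HTTP tree by hand.
for system in root.get_system_collection().get_members():
    print(system.identity, system.power_state)
```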