Hello, I'm Sam Matzek, senior software engineer at IBM. And my name is Major Hayden. I work at Rackspace as a principal architect, working on our private clouds. In this presentation we're talking about the open, open, open cloud, which is open source from top to bottom, a completely open architecture. There we go. So first off, we'll talk about OpenPOWER. How many people are familiar with the POWER chip, the POWER architecture? Anybody? Okay. So really the unique thing about OpenPOWER is that it delivers a POWER chip in a machine that looks a whole lot more like the average x86 machine you're going to see in the data center. The components are similar, the way you provision it is similar. That makes it easier for you to bring it into your environment, and that's just one of its many facets. You also get other benefits, such as open source accelerators, and you can actually join the groups you see here on the slide and customize the platform. So if you say, hey, I want to do something different with the board, or I want to do something different with the chassis, there are folks here who want to help you with that. You have the opportunity for more performance and more customization. And as we'll talk about on the next slide, the great thing is that it's the same tools you're used to, the same operating systems you're used to. In fact, you're still using the same Ubuntu, the same CentOS or Fedora or Debian or whatever you use. It's the same OS. You're using the same Linux kernel that an x86 machine would use. When you run OpenStack, you're using the same components. It's the same Nova, it's the same everything. And throughout the stack, you have the same options to customize as you would on x86. So the real value is that you have this ability to customize, to get faster performance with accelerators, and to get faster I/O, but it's all the same things that you're used to.
As Major said, you can see that the entire stack is all the same logos you're used to. To focus on the bottom: the bottom is where we have the OpenPOWER hardware architecture, and that includes OpenBMC. The firmware is open. The chip design is open. The motherboard is open. The design of the interconnects to all your I/O devices, your network card, your storage card, is all open. If you want to see what's going on there, you can get the specs and see. You can look into the chip design, you can look into the source code of OpenBMC, you can look into the source code of the firmware. It's completely open. And if you've ever used a BMC from a major provider, maybe Supermicro or Dell or HP or someone like that, you've probably said, man, I wish I could change this interface, or I wish I could add an API, or I wish this wasn't here, or I wish I could get access to this information. This is the platform that gives you the opportunity to go in there and make those customizations. Everything is open source from top to bottom. So you say, great, I want to get started. How do I do this? Where do I go? Well, we have OpenPOWER cloud reference designs. The reference designs include the OpenPOWER servers to get, how to rack and cable them, and the components to get: the hard drives, solid state disks, et cetera, and the network cards to put in the OpenPOWER servers. There are reference designs for database-as-a-service, for a private compute cloud, or for when you just want to stand up a Ceph cluster or a Swift cluster by itself on the OpenPOWER hardware. The reference designs out there tell you all the parts to go get, as well as, once you get them, how to set up all the software on there. And so what we have is an OpenPOWER cloud toolkit, which makes use of already existing open source deployment methods.
If you look, this slide is very busy, but right in the middle you can see the Ceph block on the right and the OpenStack block. The way those are configured and installed is ceph-ansible and OpenStack-Ansible, which are open source deployment projects. And so what we've done, with the hardware reference architectures, is there's an input file, a YAML file, that lists all of your servers and knows how they're cabled and how they're plugged into the switches. We've tied together all these open source deployment methods so that you can get a cloud up with very, very little human interaction. You set up the simple file you feed in at the beginning. It uses Cobbler to go and install your operating system. It has open source Ansible playbooks to go set up all the networking inside your operating system, because when you set up an OpenStack cloud, you want to have a private management network, you want to have a private storage network for your Ceph cluster to reach your compute hosts, et cetera, and it sets all that up. On top of that, the toolkit can also go in and talk to your switches and set up the VLAN tagging on all the necessary ports to let the VLANs pass between all your nodes. It then pre-populates all the inputs for your ceph-ansible run, so your Ceph cluster gets set up and populated automatically. It also sets up a lot of the inputs for OpenStack-Ansible. OpenStack-Ansible is a great OpenStack deployment tool, and Major is very familiar with it; he's a core on OpenStack-Ansible. There are a bunch of variables you can set, but if you've already installed the operating system and you know all of the IP addresses and everything you've done, the toolkit pre-populates all of those and then pauses so you can go in, tweak the other variables you want for your OpenStack install with OpenStack-Ansible, and kick off the run.
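To make that concrete, the YAML input file described here might look something like the sketch below. This is a hypothetical illustration only: the hostnames, addresses, and field names are assumptions, not the toolkit's actual schema.

```yaml
# Hypothetical sketch of the toolkit's cluster input file.
# Field names and values are illustrative; the real schema may differ.
nodes:
  - hostname: controller01          # OpenStack control plane node
    bmc_ip: 10.0.10.11              # OpenBMC address used for provisioning
    roles: [controller]
  - hostname: compute01
    bmc_ip: 10.0.10.21
    roles: [compute]
  - hostname: storage01
    bmc_ip: 10.0.10.31
    roles: [ceph-osd]

networks:
  management:                       # private management network
    vlan: 100
    cidr: 172.16.0.0/24
  storage:                          # Ceph traffic to the compute hosts
    vlan: 200
    cidr: 172.17.0.0/24

switch_ports:                       # cabling map the toolkit uses for VLAN tagging
  controller01: [sw1/1, sw2/1]
  compute01:    [sw1/2, sw2/2]
  storage01:    [sw1/3, sw2/3]
```

From a file like this, the toolkit has everything it needs to drive Cobbler for OS installs, tag the right switch ports, and pre-populate the ceph-ansible and OpenStack-Ansible inventories.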
And then the run continues and installs your OpenStack cluster across all your nodes. It then goes on to install, set up, and configure the Elastic Stack, which some people may know as the ELK stack, as well as Nagios, and configures common plugins in there. At the end of it, with very little human interaction, the OpenPOWER cloud toolkit takes you from top to bottom to an open cloud running open source software on top of open hardware. And I think sometimes, especially when you're getting your head wrapped around POWER, which is a little bit of a different beast and a different way to set up a system, a lot of folks will say, well, hey, all this openness sounds great, but my business is not interested in getting involved in hardware. That's totally fine. There are vendors out there right now. Rackspace and IBM and a whole bunch of the other names you saw on the first slide got together and created a platform called Barreleye. It's literally a system where you can call a manufacturer and they can build it for you. There are specific SKUs you can actually order, so you can say, look, I'm not familiar with POWER, but here's what I'm trying to do, and you can literally pick those things off the shelf, get them racked up, and use these standard tools to get all the standard cloud components deployed on the infrastructure. And with that, thank you. Does anyone have any questions? So that could be a whole talk in itself; we could maybe catch up on that one afterwards. I think it really depends on what you're looking for and where you have performance limiters in your current deployment. Some people may see a huge boost by going to POWER, and some people may see a little bit less. I don't know the exact cost on Barreleye right now. It really depends on how you configure it and what comes in the machine.
Obviously, if you're just getting a basic POWER machine with a small amount of RAM, it's going to be very different than if you fill it up with GPUs and try to do acceleration and things like that. Yeah, and that was speaking specifically about Barreleye; there are other OpenPOWER servers as well. With the POWER-based architecture, you can oftentimes run denser workloads. So rather than just looking at cost, and the cost by itself is competitive with x86, you can't look at cost alone; you have to look at price-performance. Depending on the workload, we can usually get much higher price-performance numbers on OpenPOWER servers than on other architectures. And depending on the configuration, you can get almost 200 cores that you have access to in a server, which is pretty tremendous. But you also have the option to back that down to a smaller number of cores. So let's say, for example, you're running a very large database that is licensed based on core count. You have the option to say, hey, I don't want 200 cores, I want to back that down to a smaller number. That way you still get great performance, but your licensing costs are less than what you might have on x86 or a different chip. And maybe we can talk more after. I'll open it up: does anyone else have any more questions? Okay, so the question was, why would someone leave Intel to go to POWER? I'll cycle back again to price-performance. If you can get 2x price-performance on POWER for the same workload, meaning you get twice the performance for a given cost, that would be one reason, because you can do more work on POWER than on Intel. It's all about the right workload on the right system. I mean, Intel has proven it can take a wide variety of workloads and run them very quickly, which is fantastic.
I mean, it's got a huge market share. But for certain workloads, for certain companies, you'll find that putting them on POWER gets you quite a boost. Some of that's the POWER chip itself, some of that's the I/O interconnects, and some of that's what you can actually do in the machine when you have all those components in there. And yeah, we could talk for a whole hour on that if you wanted to. We could geek out on that. Anything else? The question was, do we have any figures on network performance? For bandwidth on the OpenPOWER systems, I don't know the exact numbers. I'm a software engineer, not an electrical engineer, but I believe they have more PCIe lanes. So when you're talking network performance, you can put 100-gig Ethernet in there, but you also have more PCIe lanes to the core, and the core has wider bandwidth; I know it has wider bandwidth to memory as well. So I'm not exactly sure what you're looking for in network performance, but you can certainly put 100-gig Ethernet in OpenPOWER systems as well. All right, well, thank you all very much. All right.