All right, good morning everyone. Thanks for joining us early in the morning. Hope you all had fun at the parties last night. My name is Sean Murakami. I'm with IBM's Cloud Performance team, part of the Software Group Strategy division. And this is Phil Vestis, a senior technical staff member with IBM in our group as well. So just to start off, I want to see how many of you or your customers are using legacy systems, or have applications using legacy systems as backends. Good, that's about half of you. So today we're really going to talk about OpenStack and how OpenStack can be used to leverage other platforms, not just x86. When we think about our journey through OpenStack, we start off looking at single-node configurations, and the journey progresses toward a distributed platform. How many of you are new to OpenStack and just getting started? Oh, good. So when we start off, we really start looking at what OpenStack has to offer, what OpenStack is really about. In my experience, a little over two years ago, when IBM helped start the OpenStack Foundation, I was asked to go take a look at OpenStack and figure out what it is, because within the next couple of weeks we might actually need to talk to a customer about it. The way I got started learning about OpenStack was with a single-node configuration, leveraging tools such as DevStack. If you don't know what DevStack is, you should really go check it out. It's what a lot of developers in the community use to unit test their environment, and it's a good tool to get the wheels going, learn about OpenStack, and actually get it installed relatively quickly. So the next step in our OpenStack journey is really to look at multi-node deployments.
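As a rough sketch of that single-node starting point: DevStack drives its whole install from a small `local.conf` file in the DevStack checkout. A minimal file might look like the following (the host address and passwords here are placeholders, not values from the talk):

```shell
[[local|localrc]]
HOST_IP=10.0.0.10            # placeholder: this node's address
ADMIN_PASSWORD=secret        # placeholder credentials for the demo cloud
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
```

Running `./stack.sh` from the DevStack checkout then installs and starts all of the core OpenStack services on that one node, which is exactly the kind of quick single-node environment described above.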
And what I mean by multi-node deployments is when you start off with a single controller node, typically, and you're really looking to scale out your compute nodes and to understand and run real workloads against OpenStack. This is where I learned a lot myself about the features and functions of OpenStack, because this is when you really get into the guts of the configuration and learn the ins and outs of what works for you and your applications. This is also where you learn the pitfalls and some of the good things OpenStack has to offer, and where you start looking at things like managing multiple compute nodes or which hypervisors work best with your applications. Continuing on the journey, we finally look at distributed deployments. This is when we start to look at moving your applications to more of a production or high-availability environment, to make sure that your OpenStack deployment can sustain your application workloads. This is also when you start to think about how to automate your OpenStack deployment to make these things repeatable and reliable. A lot of what we've been working on recently has been in this area around high availability. We gave a talk last year in Atlanta about this, and at this summit we've expanded on our high-availability thinking by leveraging Docker in these scenarios. So what concepts does OpenStack provide to manage these distributed and multi-node deployments? OpenStack provides a few capabilities within its Nova configuration, and we look at these four constructs. The first two are really API-level logical groupings for splitting out your environment. When you look at cells, we heard a little bit in the keynotes about some customers using cells to arrange different scheduling groups within a single OpenStack region.
And when we talk about regions, it's really a way to segregate multiple OpenStack deployments. Typically our customers use regions to define OpenStack deployments geographically or between different data centers. The next two, availability zones and host aggregates, are really breakdowns at the physical level of the OpenStack deployment. First I'll go over host aggregates. A host aggregate is a grouping you can define in your Nova configuration to group like systems. For example, if you have a set of physical nodes with a certain RAID configuration that you perhaps want to use for your database applications, you can break those out into a host aggregate. Availability zones are break-ups of groups of systems; typically you would split them between racks or groups of racks. This is where we start looking at how we want to deploy applications onto those nodes to sustain high availability at the application or workload level. So this picture gives you an overview of those four concepts I just talked about. At the biggest level we have regions, which again could be in different data centers or across different geographies. Within a data center or region we can have groups of compute nodes classified as cells, and these can have different scheduling policies. Breaking down further, within each rack we can define availability zones, and across racks we can have host aggregates, so we might have different sets of high-I/O compute nodes set up across racks. Again, it's early in the morning and we're trying to keep you awake, so before we go on, it might be interesting to see who's at this level of deployment with OpenStack. Is anyone here at that level, multi-node or distributed deployments, with your customers?
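To make the host aggregate idea concrete, here is a sketch using the nova CLI of that era; the aggregate, zone, host, and flavor names are hypothetical, and this assumes the `AggregateInstanceExtraSpecsFilter` is enabled in the Nova scheduler:

```shell
# Create an aggregate (optionally tied to an availability zone),
# add a high-I/O compute node to it, and tag it with metadata.
nova aggregate-create db-highio rack1-az
nova aggregate-add-host db-highio compute-03
nova aggregate-set-metadata db-highio highio=true

# A flavor whose extra_specs match the aggregate metadata steers
# instances booted with that flavor onto those hosts.
nova flavor-key db.large set aggregate_instance_extra_specs:highio=true
```

With that in place, booting a database instance with the `db.large` flavor would land it on the high-I/O nodes, which is the kind of grouping of "like systems" described above.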
Great. If anyone's willing, what industries or segments are you operating in? Telecom? Finance, anyone? Telecom? Okay, great. So I think we're really looking at what's next in your multi-node journey. This is where we're going to dive into multiple platforms and why we need OpenStack to work in these other architecture environments. So, over to Phil. All right. So we're going to leave some of those OpenStack segregation concepts Sean talked about and focus for a few minutes on platforms. Obviously the most common platform in use in OpenStack is the x86 architecture. Gartner and others have mentioned that in the last few years of cloud, more than 50% of all traditional workloads have been virtualized and put in the cloud. And before we even leave x86, there's already a multi-hypervisor set of capabilities there: officially OpenStack supports KVM, Xen, Hyper-V, and VMware, and I don't know if anyone caught Eric Windisch's talk yesterday, but the Nova Docker driver is currently out of tree, yet it's there and people are using it. So x86 is definitely the most common platform in use. But maybe it begs the question: why do we need other architectures in the cloud? Could we migrate these traditional workloads to x86 and continue on this path? From an IBM perspective, from what our customers are telling us, we definitely need those traditional architectures. One picture we like to use to explain that puts customer architectures into two major buckets: one we call systems of record, and the other systems of engagement.
What we use those terms for: if you think of the back office, the traditional IT systems in an enterprise, many of those, especially in large enterprises, are on mainframes. Systems of engagement are more the mobile and emerging spaces; this is where a lot of the activity around DevOps and continuous integration happens. So in a sense there are two different worlds operating in the enterprise, and both are critical. Much of the data, whether it's healthcare or other traditional back-end operations, lives in those systems of record, and we need to find a way to integrate the two worlds. One of the ways that needs to happen is not to remove the systems of record, but to find a way to bring the innovation of the systems of engagement to that world. So let's talk about what architectures are available in OpenStack today beyond x86, and then we'll dive into a little bit of detail. I believe these are the three non-x86 architectures that have OpenStack drivers today. We'll talk briefly about ARM; personally, I don't believe IBM has a lot of activity in that space, but obviously our Power and z Systems are traditional hardware platforms that many of our customers are using. If you look at ARM, Canonical has added 64-bit ARM support to Ubuntu with KVM hypervisor support. Again, I don't have a lot of experience in this area, but while microservers are a fairly new trend, they definitely hold promise in the areas of low energy and reduced space requirements, and media serving and other potential uses do exist. So we'll see what happens with ARM and microservers; there are significant growth rates predicted for microservers, and support is there today. I don't know if anyone has heard of TryStack. That's another way to potentially start to learn about OpenStack.
If you go to trystack.org, I know at one point they were offering ARM servers in their pool that you could try out, so you can check that out. Moving on to one of IBM's traditional server platforms: there have been some fairly significant announcements this year around Power. And we're going to get reconnected. All right, so as I mentioned, there are a couple of things we should talk about here. One is that Power has had a long history of virtualization well before OpenStack. The PowerVM hypervisor in its current implementation has been around for a number of years, at least prior to KVM, but hypervisor technology on Power dates back well before that, a decade or more. So we'll talk briefly about PowerVM. Then this year, in April, we announced PowerKVM, and little endian support was added to Ubuntu's distribution, so 14.04 has PowerKVM capability. We also have our traditional RHEL and SLES distro support. So on Power today, that means you have two hypervisor options: PowerVM, and the PowerKVM that was announced this year. It's worth mentioning that in addition, we're bringing Docker to Power; that work has already started in the upstream communities and will be available in the future as well. One of the traditional areas where Power is used — how many have heard of Watson? Our cognitive computing supercomputer runs on the Power architecture. A lot of the traditional Power features, like its memory architecture, have been a perfect fit for big data and analytics, and there's also a traditional install base around ERP and CRM applications. So let's move to System z. How many have heard of System z, or know what I mean when I say that? That is IBM's traditional mainframe platform, which was supposed to die many, many years ago but has lived on.
Again, when you look at z from a virtualization perspective, there have been virtualization capabilities on z going back to the 1970s, even prior to the z/VM name, but officially the z/VM LPAR variant has been around since 1987. So we're going to talk through the capabilities around that. Traditional z workloads have been around for a very long time, and a very significant percentage of global financial and enterprise data lives on z today. One mention here: Visa has been relying on System z for transaction processing, reaching very, very high rates of transactions per second, especially in the holiday season. Their ability to run that kind of workload on z is critical to their business, and that's another pointer that we can't ignore z as a participant in cloud. Again, briefly looking at the hypervisor offerings: z/VM is there today, with traditional LPAR support. If you look upstream, you'll already see IBM working to bring zKVM, which is KVM with the libvirt and QEMU components, to z. At this point, that's upstream work that's happening. In addition, there is the effort to bring Docker to System z as well, along with the work we're doing on Power. So, we'll take a question. Yes, we can take a question now. [Audience question.] Yeah, so in a couple of charts we're going to look at this: we're not talking about bringing these transactional systems to the cloud, but moving cloud workloads, these systems of engagement, to the data, and we'll show why that's valuable. So we're not talking about running the traditional workloads as cloud workloads, but getting cloud workloads, like web serving, near the data. That's valuable because of the transactional needs: we can't separate the two and expect to get the same throughput between those systems. So let's move along, and maybe it'll become clearer as we describe some of the z features.
Again, our traditional mainframe strengths have been around the performance, availability, and security of these systems, based on many significant technologies dating back into the '60s and '70s that have been iterated on and continue to be improved in the modern era. Given that they started with the design points of being extremely scalable, with a share-everything model, extremely high-speed networks within the system, and full CPU and memory sharing, we'll look at a consolidation example that shows ratios of 30 to 1 moving from commodity systems to System z. On availability: the SLA compliance rates for System z are amazing when you look at them on a graph compared to other architectures. With Linux on this platform, a lot of these components and capabilities are exposed up to the Linux layer, offering some of these same platform features within z Linux. Security is obviously a hot topic in servers and in cloud specifically, and the design of this traditional hypervisor provides very significant isolation guarantees. Because of our traditional strengths here, it also has significant logging, monitoring, and auditing capabilities that have been in the system for decades. So again, with our enterprise customers, we're finding that their performance and availability needs require bringing these cloud components to the z system to handle the extreme throughput requirements they already have in their system-of-record transactional systems. Here's a fairly simple example: a web server running separately on a Linux server, connected to DB2 on z/OS, versus co-locating both within the System z, with the web serving and DB2 in separate LPARs but within the same server — a z Linux image and a z/OS image, both connected internally. You can see that transactions per second show a huge performance increase.
Much of that is due, again, to the internal connectivity, the I/O channels within the System z ecosystem, which allow much more significant throughput. And this is one example; we have quite a few studies and much more data around the transactional capabilities of co-locating within the z server. Then a few other examples around consolidation. A couple of clients have done consolidations onto z from x86. You can see, again, growing data center needs and complexity increasing the support, licensing, and maintenance burden of those systems, and therefore impacting their ability to service customer needs and the time to deploy new racks and new systems. Both Nationwide Insurance and Baldor have consolidated onto System z, with SAP and DB2 applications running across z Linux, z/OS, and z/VM. Some of the numbers they found: energy costs were reduced greatly, Baldor noted a significant reduction in data center space requirements, and obviously, due to the consolidation, administration and maintenance costs for those servers went down. So again, between the capabilities of the System z architecture and the benefits of consolidation, hopefully even with this brief overview you can see there's great value in keeping z in the ecosystem and combining it with systems-of-engagement workloads. So we're going to talk a little bit about what it looks like today to bring z into OpenStack, then about where things are heading, and then we'll look at the same for Power. Today, mostly for historic reasons of how IBM started with cloud on z, and especially with Linux virtualization, we end up with a region for z using an x86 compute node that acts as a proxy to xCAT, another open source project that IBM developed historically for cloud administration. The xCAT tool is what actually deploys z Linux instances onto the System z through that compute proxy.
And so basically today, using z/VM for Linux, you have a region per LPAR, proxying your OpenStack APIs, the Nova APIs, through that x86 node to the System z, which communicates through xCAT to actually schedule and provision images onto the z mainframe. Now, as I mentioned, we are currently bringing KVM to z Linux in the upstream communities, and therefore the hope is that in the future you'll have the standard Nova, libvirt, KVM pathway to deploy z Linux onto the System z ecosystem. Similarly for Power: we mentioned that PowerVM is one hypervisor option and that PowerKVM was announced this year. Again, similar but slightly different pathways. With PowerVM, you have to use the PowerVM driver and IVM, the Integrated Virtualization Manager that's built into the Power platform. With PowerVM, you have the ability to run traditional AIX and System i instances as well as Linux. With PowerKVM, there won't be support for HMCs, for those who use the Hardware Management Console, but that takes IVM out of the picture and you're talking directly to KVM — again, the standard Nova to libvirt to KVM path. That also removes the ability to have AIX or the traditional System i OS images involved. So those are the two options, PowerVM and PowerKVM, with PowerKVM obviously aligned with POWER8 and some of the Canonical announcements I mentioned. So, what next? If you haven't been in these Power and System z ecosystems, there are a couple of options to learn more. You can come talk to anyone at the IBM booth; there's a POWER8 system there, you can check out a POWER8 server, and you can talk to folks with expertise. We also have some OpenStack offerings from IBM. Obviously we have upstreamed the capabilities we've talked about, but we also have our own IBM Cloud Manager with OpenStack and IBM Cloud Orchestrator products that already fully support managing z and Power architectures.
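As a sketch of why that "standard Nova to libvirt to KVM path" simplifies things: on a PowerKVM compute node (and, in the future, presumably a zKVM one), the `nova.conf` compute configuration is just the usual libvirt driver setup, with nothing architecture-specific beyond the images being built for that architecture — no proxy node or xCAT region in the path:

```shell
# nova.conf on the compute node (Juno-era option names)
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = kvm    # full hardware virtualization via KVM on the local host
```

The nova-compute service then talks directly to the local libvirt daemon to schedule and boot guests, in contrast to the z/VM case described above, where an x86 node has to proxy the Nova APIs through xCAT to the mainframe.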
And then there are also some places to try out z and Power: the IBM-hosted Orchestrator beta, and yesterday I was talking to someone from Canonical who pointed out that RunAbove has POWER8 servers in their cloud, so you can access a fully OpenStack-capable Power compute resource through RunAbove. Earlier this year we announced that SoftLayer would also be offering Power compute, and if you've seen the recent announcements of the IBM Watson services, those run on Power in SoftLayer today; that should be exposed to other customers in the future as well. If you already have Power or z hardware, a few notes about what's required to use OpenStack with KVM or the traditional hypervisors. The list of z enterprise systems supported today does require z/VM 6.3 or greater, so if you have significantly older z hardware or older z/VM releases, OpenStack won't be viable there. PowerVM requires POWER7 or POWER8 with the Integrated Virtualization Manager. PowerKVM requires the newer POWER8 servers that were announced, but it does support RHEL 7 big endian, SLES 12 little endian, and the two latest Ubuntu releases as little endian. And as I mentioned, with PowerKVM there isn't support for the Hardware Management Console. These slides will be available online; there's a link there which will take you to IBM's offering center for Linux on Power and z, and lots of detailed guides and configuration information are available there. Most of the IBM sessions have come and gone, given we're on Wednesday, but there are a few more. If you're sharp-eyed, you saw that the description of our talk mentioned federated Keystone, which we weren't able to integrate, but the experts — some of the folks who were very involved in developing that capability in Keystone — are talking at 11:50 today. So that's available there.
And then directly after this and through the early afternoon, the IBM track sessions are available; directly after this, next door, there will be a more detailed talk about all of our cloud offerings, which will obviously cover the manageability of Power and z. So with that, any questions? Anything that we didn't cover that you thought we would, or other interesting questions? I lost the last part of that — I think there's a mic. [Audience question:] You said that you were bringing KVM to Linux on System z. Is there a benefit to using KVM inside Linux on System z compared to just spinning up a new Linux instance on System z? Yeah, so as I mentioned, I think one of the major benefits will be having the traditional Nova driver through libvirt capability, rather than the current approach of having to create a region with an x86 node that proxies through xCAT. I think as we get into next year, there'll be more information about the pros and cons of zKVM versus traditional z/VM LPARs. [Audience question about bringing Blue Gene.] Oh, Blue Gene. I don't have an answer for that, but I know some of the Blue Gene folks, so we could tee that up if you want to catch me afterward. [Audience comment, largely inaudible.] Right. All right, we have a couple of minutes if there's anything else. Otherwise, feel free to catch us at the IBM booth or after the talk. Thanks very much. Thank you.