Okay everyone, thank you for joining. My name's Andrew Trossman. I'm a Distinguished Engineer at IBM; don't ask me what that means. I hope you're having fun here at the OpenStack Summit. How many people recognize this t-shirt? Anybody? Look familiar? It's from two years ago, when there were only 800 people at the OpenStack Summit, and it has come a long, long way. How many people is this their first time at an OpenStack Summit? Well, welcome; are you having fun? All right, who's had fun? I didn't hear anything. All right, that's more like it. OpenStack is about having some fun, and it's also about code and stuff like that. So I'm going to talk to you about our SmartCloud Orchestrator product. For those of you who were in Tammy's presentation earlier, she talked about what's now called Cloud Management with OpenStack; they just changed the name on us, so it's a little bit challenging, and even she made an accidental slip and almost called it SmartCloud Entry. Anyway, Orchestrator includes the same code; we work with the same team. All the stuff that Tammy's team develops is included in SmartCloud Orchestrator, and then we add some additional ingredients that give you additional capabilities, and I'll take you through what that's about. But to start with, I like this chart, which I've graciously stolen from some other folks in the OpenStack community. I'm not sure who did it first, the folks from CERN or the folks from Cloudscaling; it doesn't really matter. I think we're all seeing this: clouds like Amazon started out focusing on cattle-style workloads, the whole idea of building reliable systems out of unreliable parts. It's wonderful; I'm big on it, I love it, it's fun. But we also have pets. We all have them; it's legacy; it's what it is. We all have these systems, so how do we manage both? The folks at CERN are also big fans of helping OpenStack address both ends of the spectrum, and I think we're doing a good job within OpenStack. What OpenStack enables at the infrastructure level is this: the reality is that your pet workloads are going to have different infrastructure needs than your livestock workloads, and through a consistent API you can make those choices. My simple example: when you're setting up a system and you add an additional volume, so additional storage, you can choose the class of service, the volume type, to use the appropriate Cinder term. For a pet, I'm going to pick a different volume type than I would for one of my livestock workloads. So I think OpenStack has done a great job of that, but there's more we need to do, because it's not just that our pets require different infrastructure; our pets also require different management policies. And that's what we're trying to help our customers with. This is a high-level picture of what Orchestrator is about. We've tried to make it a loosely coupled system that's easy to operate. At the bottom, no surprise, is a lot of what Tammy showed you: an OpenStack implementation, OpenStack infrastructure. We have a variety of back ends supported, ranging from VMware, KVM, et cetera, to the Power and z platforms Tammy mentioned. We also have support for providing an OpenStack interface to non-OpenStack clouds like Amazon, for example.
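(To make that volume-type choice concrete, here's a minimal sketch using the openstacksdk Python client. The cloud name and the volume type names are illustrative assumptions; the actual types are whatever your Cinder administrator has defined.)

```python
# Sketch: choosing a Cinder volume type per workload class.
# Assumes openstacksdk is installed and a clouds.yaml entry named
# "mycloud" exists; "gold" and "bulk" are illustrative type names.
import openstack

conn = openstack.connect(cloud="mycloud")

# Pet workload: a higher-durability, replicated storage class.
pet_volume = conn.block_storage.create_volume(
    name="pet-db-data", size=100, volume_type="gold")

# Cattle workload: a cheap, throwaway storage class.
cattle_volume = conn.block_storage.create_volume(
    name="cattle-scratch", size=100, volume_type="bulk")
```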
Because what we're trying to do is enable an environment above that infrastructure layer where you can build your automation, build your additional systems for managing both pet and cattle workloads, but using a consistent API. The next major piece up is the patterns. How many people have been in a Heat session so far? And have you all heard about TOSCA? We'll talk a little bit about that when I get to the pattern section. We use the term workload orchestration; unfortunately, the word orchestrator is one of those that means lots and very little at the same time. I actually have a little thing with my dev team: whenever somebody says the word orchestration, I have to take a drink. Fortunately, I don't have anything particularly strong here with me, but I will take a drink every time somebody utters that word. So what we have above this, and I won't say the O word, is really a workflow engine. It's based on our BPM product, which is truly best in class for anybody who's used it. How many people out there use BPM in your organization? Awesome, great to see. We'll talk about how you can use that to manage the management policies for both pets and cattle. We also have the typical self-service catalog, and we have integration to development tools as well as service management tools. Particularly when we start talking about running pets in the cloud, it's about the integration with all of those management tools, whether it's your monitoring, your backup, et cetera. Any questions so far? Anybody just want to belt them out? If I say something stupid, just yell it out, and we're happy to deal with some questions. Okay, so at the low level of the infrastructure: OpenStack supports a variety of mechanisms, things like host aggregates, availability zones, and regions, and we do make each of those available. What we typically deal with, though, is the region level. Often what people will do is create an availability zone and basically just wrap it as a region. One of the reasons for that, and this is something that frankly doesn't really exist in OpenStack today, is the level of access control across multiple regions. This is a place where we put in a little bit of additional IBM capability to help you manage that, while at the same time working in the community; in fact, one of the guys on my team is a core developer for Keystone, so we're trying to get there. I know earlier they talked about the federation work; I think they made it sound a little bit further ahead than it really is. But this is one of those places where we try to close that gap. So to answer the question, we generally manage at the region level for those reasons. So let's talk a bit about the patterns. No surprise, IBM has been working on patterns for many, many years. In fact, my company was acquired in 2003, and even at that time we were working with Microsoft. Microsoft had an initiative called DSI, the Dynamic Systems Initiative. Anybody remember that one? They had Whitehorse, this cool tooling. We were trying to do the back-end provisioning while Microsoft did the cool front-end tooling, because at the end of the day, your non-trivial applications are combinations of multiple systems collaborating together. How do you do that?
And that's where complexity really starts to happen. So, no surprise, IBM has invested many, many years into tools and technologies to help make that easier. Here's a screen cap from one of those tools; it's a drag-and-drop editor. You may have seen Michael Elder give a pitch yesterday of one of the latest tools we're working on. But frankly, more important than the tools themselves is the content, right? Being able to pick up ready-to-use, out-of-the-box patterns that I just download and start going click, click, click, and away it goes. A pattern essentially encapsulates a lot of best practices. There are over 200 patterns out there that we have available, not only for SmartCloud Orchestrator but for a suite of our offerings. How many people have heard of PureApplication System? It's the same technology, and the same patterns work across both. In fact, I think I have a chart a little later on that describes this, but essentially the same patterns work on PureApplication System, the same patterns work on Orchestrator, and the same patterns can work off-premise on SoftLayer. And obviously, when I get to a customer who's using some sophisticated software, maybe something like Cognos or Lotus Connections, some of these are non-trivial, and when they can go and download a little pattern package that makes it simple, that's just instant value. Even with all the fancy graphical drag-and-drop tools, you don't have to do all the work to assemble them yourself. This, we find, is extremely valuable. There are IBM-provided patterns for IBM content, but we also have a bunch of partners who provide patterns for their stuff as well. Okay, so I'm going to talk a little bit about the next layer up, the O word, which at the end of the day is our BPM tool integrated with the rest of the system. That means we have integration to the pattern technologies and integration to the OpenStack APIs. Now, before I get too far into this, I want to take a step back and talk a little bit about the open source side of patterns. As many of you know, in the previous release cycle, Havana, Heat came into OpenStack, and it has essentially been an alternate technology for doing patterns. And frankly, that's great. We continue to support our proprietary IBM pattern engine because, as I said, there's a whole lot of valuable content out there. But because the value is in the content, it's really important to have ecosystem standards for providing content, so people can create content in one place and consume it in another. A number of you, in fact, I think I saw Matt at the back, one of our TOSCA experts, have been working in the TOSCA community. One of the greatest things in the last six-month stretch for Icehouse, which many of us are really proud of, is that we've been able to cross-pollinate between the TOSCA open standards community and the OpenStack Heat open source community on something called HOT. We're trying to get toward complete convergence; so far what we have is collaboration between the two communities. And we've contributed into Icehouse an import tool, so you can import TOSCA patterns and pull them right into what's called HOT, Heat Orchestration Templates. Oh, I said the O word.
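(As a rough illustration of that import path, here's a sketch using the OpenStack heat-translator project and its companion tosca-parser library. The module paths follow the heat-translator source tree and may differ across releases, and "wordpress.yaml" is just an illustrative TOSCA template.)

```python
# Sketch: translating a TOSCA template into a HOT template with the
# heat-translator project (pip install heat-translator). The module
# paths are assumptions based on the project's source layout.
from toscaparser.tosca_template import ToscaTemplate
from translator.hot.tosca_translator import TOSCATranslator

tosca = ToscaTemplate("wordpress.yaml")            # parse the TOSCA YAML
hot_yaml = TOSCATranslator(tosca, {}).translate()  # emit HOT YAML text
print(hot_yaml)
```

Roughly the same thing is available from the command line as `heat-translator --template-file wordpress.yaml --template-type tosca`.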
And again, the goal here is really about ubiquity, so that we can have an ecosystem of content providers and content consumers. This coming summer, we're putting out the next release of SmartCloud Orchestrator, version 2.4. It includes Icehouse, which includes Heat, and as I mentioned, it includes this TOSCA importer. Interestingly enough, we already had support for TOSCA in the proprietary engine. Again, this just shows that it's all about the content, but we're absolutely embracing the open standards and the open source implementations, so we have consistent use across these environments. So why do we need orchestration? This is an example, in fact taken from a customer; I don't even remember who it was. It shows all the business processes we have to put around all the stuff that we do. Admittedly, this is much more typical of what you see in production pet-style workloads than in, say, livestock-style workloads, but like I said, we all have to deal with these. What we've found is that there are two classes of reasons why I need workflow or runbook-style automation. One is more about the business process around IT. How many people here are from a regulated organization? I'm sure there are a lot of hands here, and some of you probably can't put up your hand because you're not allowed. If you ever go to the website and look at some of the past presentations, a great guy named Nate Burton from the NSA did one of the keynote talks. Was that in Hong Kong or the one before? I think it was Portland. Anyway, he wouldn't be able to put up his hand, but I think you understand the point. When it comes to that, it's not just the business processes that you need to automate, and of course audit against. It's also that we have organizations and people in place that use monitoring tools; they have NOCs and processes for dealing with problems; they have trouble ticket desks; they have backup and restore processes and technologies. We have to integrate our stuff with that. OpenStack does a great job of giving us ubiquitous programmatic access to lots of infrastructure, but it doesn't help me with this problem, and that's the kind of thing we use workflow automation for. One of the things we're working on is making it easy to decouple your Heat template, which defines the thing I need to stand up, from the management policy or integration you need to do, because when I deploy into a test environment, I don't need the same rigor and management integration as when I deploy into production. We're trying to make those workflows as reusable as possible. I didn't really mention this, but aside from downloading patterns as reusable content from our marketplace, we have reusable workflows, plus what we call toolkits: little integration modules that know how to talk to various technologies, ranging from network and storage devices to management tools to development tools like UrbanCode. How many people have heard of UrbanCode Deploy?
It's an excellent tool for operating a DevOps-style discipline, and we have a lot of these toolkits that you can download and then use out of the box. That, of course, is also a really great way for our development teams to release additional functionality outside the more typical long release cycles. Here is a screen cap of the workload automation; this is the BPM tool I mentioned that's integrated. Over on the left-hand side you can see the palette of toolkits, which, again, you can download and add into your palette as you need them, and then you can graphically create these workflows. A lot of the time you can just customize a workflow you've downloaded: in this bottom pane you can make little customizations, and many of them let you script right there. I also want to point out, for those of you who aren't familiar with the tool, that it also lets you easily build user interfaces, also drag and drop, really simple tooling, because sometimes these business processes require interaction with humans. I know we try to avoid it, and I don't like humans because I'm a techie nerd, just like many of you, but we still have to deal with humans even if we don't like them, and this is a great way to very quickly string together some interfaces that instantly appear within your environment. So here's an example: you've got a pattern, maybe you downloaded it, maybe you created a new one specific to your application, and then you can present it in our service catalog, which is a self-service interface you can make available to your users. You can control which users see which capabilities, and it's very easy to go in and create service catalogs like this. Behind each one of these entries is a BPM workflow, and usually the first part of that workflow is a little bit of UI specific to it; maybe I've got to capture some information from the user before we go off and do it. Here's an example where you click on one, and the same user interface I just showed you being built in the tool appears and asks a few questions: what port do you want to run this thing on, and so on. All right. Anybody want to see a demonstration? Let's have a look. This is available on YouTube; hopefully it'll start looking a little clearer, because even with my glasses it's pretty blurry. There we go. So we're going to take an example where we're trying to associate management policies with a pet-style workload, and this particular pet is a DB2 database. You enter a little bit of information about what you're trying to do, the usual things, and it's a bit of a wizard. Here we pick that we want DB2, then the flavor, the OpenStack flavor of course, and next we get the different management options, literally as checkboxes. Do you want monitoring included? Do you want backup and restore? Do you want BigFix security compliance? You select the ones you want, and instantly they're available to you. Obviously, somebody had to do some integration in the back.
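(For a rough sense of what happens underneath a catalog request like that, here's a minimal sketch that feeds the value captured by the form into a Heat stack create through the standard APIs, using openstacksdk's cloud-layer call. The template file, cloud name, and port parameter are illustrative; the monitoring and backup checkboxes would map to separate workflow steps around this call.)

```python
# Sketch: turning a self-service form submission into a Heat stack.
# Assumes openstacksdk and a clouds.yaml entry named "mycloud";
# "db2_pattern.yaml" stands in for whatever HOT template the
# catalog entry carries.
import openstack

conn = openstack.connect(cloud="mycloud")

stack = conn.create_stack(
    "db2-instance",
    template_file="db2_pattern.yaml",  # the pattern's HOT template
    wait=True,                         # block until CREATE_COMPLETE
    port="50000",                      # parameter captured from the UI
)
```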
Back to that integration in the back: if, for example, you're using IBM's BigFix, now called IEM, you go to the marketplace and download the content pack that does that integration, and that's great, nice and out of the box. How many people use something other than BigFix? If you're using something else, this might be one place where you have to build your own workflow to support your technology, or, if it's a third-party vendor, that vendor may also have a pack available. The rest of the demo goes through entering details like IP addresses, but essentially what happens behind the scenes is this: while OpenStack is doing the basic deployment of that DB2 database, associating the storage volume and all that good stuff OpenStack does, we're also integrating with your monitoring and with your backup and restore systems and processes, so all the systems that are in place for the rest of your pet workloads continue to work exactly as they have. Hopefully this looks like a pretty simple way to expose it. We're trying to make this easier with each release of our product; in fact, one of the things we're trying to do is make this available through the standard OpenStack Heat APIs. So, as far as your users know, they have full, unfettered access to OpenStack-compatible clouds, but behind the scenes you still have the control to apply the management policies you need, region by region. You may define different regions for different purposes. In fact, I was talking with some of the folks from CERN: one of the management policies they apply is that whenever they deploy a Windows VM, it goes onto a Hyper-V hypervisor, and of course there are licensing reasons why that's useful, whereas on the Linux side it ends up on a KVM host, again for licensing reasons. Any questions about this? Please. Yeah, you can do so, absolutely. So, APIs: let me just go back to the chart for a moment. The API at the bottom level is obviously OpenStack, standard OpenStack APIs, nothing new, nothing special. For the patterns, I mentioned that we have a proprietary pattern engine, and yes, that's a proprietary API, but we do have it available, and we have toolkits for the workflow component so it can talk directly to that. With this new release that includes Heat, you can use the Heat API at that level. Now, when you move up to the workflow layer, there are other standards; they're not part of OpenStack yet, but there are standards like BPEL and BPMN, and our tool supports those as well. There are some emerging efforts in the OpenStack arena, but they're very, very early on, and as part of trying to maintain standards, we'll certainly be working within the community to make sure we stay as close to the standards as possible. Sorry, with Congress? Yeah, I've read a little bit about Congress. Anybody in the room know about Congress? And I don't mean that Congress. I remember reading about it; it seemed pretty early. Anybody been to the StackForge webpage? If you haven't, you should; it's a great place to see some of the new things going on. Actually, the one I was thinking about was Mistral, which is also very early, but as this stuff evolves, we're trying to work within the communities to make sure these things are available.
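(To make that CERN placement example concrete, here's a purely illustrative sketch of the guardrail-style logic a behind-the-scenes workflow could apply. The request shape and function names are hypothetical; in stock Nova, a Windows-to-Hyper-V split like CERN's is typically implemented with image properties and scheduler filters rather than application code.)

```python
# Illustrative only: guardrail logic a workflow might run in front of
# the standard Heat/Nova APIs. The request dict is a hypothetical
# stand-in for whatever your middleware or workflow engine sees.

def placement_policy(image_os: str) -> str:
    # CERN-style licensing rule: Windows lands on Hyper-V hosts,
    # everything else lands on KVM hosts.
    return "hyperv_zone" if image_os == "windows" else "kvm_zone"

def apply_guardrails(request: dict) -> dict:
    # Enrich the user's API call with a placement decision without
    # the user having to know (or be able to override) the policy.
    decorated = dict(request)
    decorated["availability_zone"] = placement_policy(request["image_os"])
    return decorated

# A Windows deploy request gets routed to the Hyper-V zone.
req = apply_guardrails({"image_os": "windows", "flavor": "m1.medium"})
print(req["availability_zone"])  # -> hyperv_zone
```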
As I was saying, in the meantime we do have standards around things like BPEL and BPMN that you can use today. A question from the audience: you're bringing in support for Heat, so what does that mean? If I bring along a template, will it be changed or extended somehow for your patterns? Right, so how does our Heat support look in the 2.4 release that's coming out? Actually, geez, this light just drives me nuts. I'm going to take a moment to tell you about something I'm really, really pleased about. How many people have heard of TripleO? We think it's a great idea. In fact, one of the technologies we built and chose to contribute to OpenStack, when did we do this, 2009, Tanish?, used the same technique of the cloud built on the cloud. So with our deployment, when you install the next release of SmartCloud Orchestrator, it actually uses Heat for the overcloud. We're very happy about that; it's a great way for us to deploy the components of SCO itself. Tammy showed you a little bit about the Horizon UI, and just as she showed some of the extensions for the Platform Resource Scheduler, we also have extensions for our own deployment of the components of OpenStack. So we're using Heat within our installer, and that's good. Okay, in terms of the next layer up: the two engines are really peers, and right now they're deployed separately. If you have a pattern for the proprietary engine, it goes through the proprietary engine; if you have a pattern for Heat, it goes through Heat. It's all visible at the same OpenStack level, but today they don't have visibility into each other. This is obviously something we're working on; we want to include Heat within our own engine, but it takes time. In fact, this summer, at the same time we release the next version of Orchestrator, the PureApplication System folks will also have both the Heat engine and the proprietary engine, so again, you have consistent patterns, both kinds, across all the different infrastructures. Now let me just find that cute picture. So when you think about going on-premise, off-premise, PureApplication System, Orchestrator, oh, I have to drink; that's why I usually say SCO. SCO can integrate across these environments, and in fact that's been a really valuable solution for a lot of our PureApplication System customers. Again, I think it's that consistency: whether it's the proprietary patterns or the new Heat patterns, whether you're using TOSCA and converting into HOT, we're trying to make it available on all these platforms. The question is about coordinating with SoftLayer. Actually, there are a number of those integrations; in fact, there's a YouTube video about that integration, and it's a common one. How many people went to Pulse, IBM's conference in Vegas earlier in the year? It would have been late February. One of our customers there described a scenario where they were using PureApplication System for their in-house production but SoftLayer for development, and they would use Orchestrator to manage the on- and off-premise coordination. There are a number of other variations and scenarios that people come up with, and some of those integration packages are available on the marketplace so you can download and use them. Any other questions? The OpenStack we're using, is it a fork, is the question.
The answer is no. As Tammy told you, it's standard OpenStack. The coming release is on Icehouse; what's available today, SmartCloud Orchestrator version 2.3, is on Grizzly. Now, how many people have used OpenStack with VMware? In Grizzly it's not very useful, so for that we use a proprietary driver, because frankly the community driver wasn't good enough. In the meantime, a number of our folks, I don't know if they're in the room today, have been working with the community to improve that driver, and I'm happy to say that our 2.4 release is switching over to the community driver, because we made that investment to get the community driver to that state. So it's not a fork. There are places where we've had to compensate, like with the VMware driver in the past, and we have a layer above that adds some access control when you start dealing with multiple regions, because unfortunately what's available in the community today isn't quite there. But we're working within the community on the one hand, and on the other hand we're adding those additional capabilities. So, SmartCloud Orchestrator does include the OpenStack that Tammy mentioned, but you're absolutely free to use others. We have folks in the room, in fact, who are using their own OpenStacks, and we connect to them. In our summer release, the 2.4 release, we're making that easier and more flexible. Today it's a little bit constrained, but you can certainly do it; the problem is that in the current release you're funneled through the EC2 APIs, which is a little bit annoying. In the 2.4 release that constraint is lifted. Any other questions? The question is how we compare with other products. Honestly, that's not my job. There are a lot of competitive products out there, and I'm very involved in our development, so I have to keep arm's length from our competitors; I'm not going to be able to give you the best answer. I know I saw Rick in the room, and Matt is in the room; if you want, we can hook you up with some of the folks whose job that is. Any other questions? Oh yeah, absolutely, we've had KVM support since the first release. And I believe I'm running out of time. Any other questions? So that's an interesting question. TOSCA started out as an XML format, and still is, but we've been working with the community on a YAML-based format; I don't know the precise wording. The interesting situation is that the SmartCloud Orchestrator, oh, I said it again, that's in the field today actually supports the full XML TOSCA, but what we do is import it into our proprietary engine, because the standard doesn't give you an implementation, just the standard. That import is available today; it's been in the field for a long time. In the coming release, we have both: you can take XML TOSCA and pull it into the proprietary engine, or you can take the YAML, import it, and run it on the Heat engine. Did I get that right, Matt? Yes sir. Sorry, more questions? Yes. Yeah, I mean, this is one of those questions: can you go around it, can you circumvent yourself? Yes, of course you can. You want to give us an example? Sure; so the question is about wanting to make certain requests directly. Yep, totally cool.
Yeah, I mean, the truth of the matter is, systems that were designed for the cloud, cattle-style workloads, were designed to exploit the APIs, right? Those work really well; they don't need all the help and hand-holding that the pets do. What a lot of this stuff really helps you do is run those pet workloads within the same environment, so frankly, I think it's more for the pets that really need that kind of thing. Now, having said that, one of the things we've been seeing from a number of our customers is the desire to hook into the APIs, so that when somebody, just as an example, goes and deploys a Heat template, we can allow them unfettered access to that API but still trigger a workflow behind the scenes. We can go and look at additional metadata, maybe, and make decisions; maybe put up a few guardrails. In fact, we have some folks who like to apply logic to determine which region to deploy things into: not just the authorization, but the decision of when and where things should go. And that gives you an opportunity to do that. And of course, depending on where you go, you may have different guardrails. Every customer seems to be going at this a little bit differently, but that seems to be a pretty common pattern we've been seeing. Anybody else? So the question was about systems that have already been provisioned, and I'm over time, I'm getting told to get the hell out of here, so I'm just going to say yes, and thank you very much.