All right, I think we're right at two o'clock, so let's go ahead and get started. Perfect. So first off, welcome, everybody. My name is Michael Elder. I'm a Senior Technical Staff Member with IBM. I've worked in product development now for eight or ten years, and over the last three or four years we've been looking at DevOps, cloud, continuous delivery, things like that, and trying to understand how to help our clients do that better. So today I want to talk about what we have been doing in that space with OpenStack, both using Heat and virtual images, as well as experimenting with how we do that with containers, and then how we extend that even into platforms like Cloud Foundry. I'm also going to be joined by Ton, who's going to be helping me out with the Magnum part; he'll introduce himself when we get to that section. So let's go ahead and jump right in. Ultimately, what we see with cloud and DevOps, driven mostly by the expectations of the consumer, is a very different kind of world than where we were a decade ago. You'll see hundreds of slides that look like this; I just want to drive home the point that the rate of software delivery is continually increasing to keep up with market and consumer demands. But the challenge is still this bifurcation that has existed between the development and operations teams: largely siloed, largely due to historical concerns, executive leadership, things like that, which have treated them as separate parts of a larger problem, when really we see organizations that start pulling these teams together being more effective. And while cloud helps, it doesn't necessarily provide a silver bullet for fixing the collaboration problem between development and operations. But ultimately we think that with automation and standardization, we can start approaching more of that. 
When we look at cloud, there are at least three different categories of things you can deploy into it, and I like to call them out to set some context. The first, which everyone at this conference is going to be very familiar with, is providing infrastructure as a service: being able to use virtual machines and automation to put together a complete environment and then use that for the development or test of your application. One of the things we continuously see is a spectrum between having a very bare, minimal image that everything is automated on top of, all the way to the other end, where we actually build the virtual machine and snapshot it, and that snapshot becomes the artifact that we carry through the actual deployment process. Container-based deployments, in my opinion, are not really true infrastructure as a service or platform as a service: you don't really get IaaS with a container, and you don't really get PaaS either. You're still building the container, you're still specifying all the details of your platform within the container, but it allows you to more easily encapsulate the concerns of the application and separate those from the concerns of the operations side of the house. It may also require a different way of managing the architecture of your application: we typically see Docker containers work really well for microservices, but if you're adapting a very stateful kind of workload, it may not work as well. And this starts to drive home something we've observed: there's probably not one single infrastructure-as-a-service or container-as-a-service model; it's really more of a blend. And then as you get into platform as a service, you start to see more of that. 
I think everyone looks at Cloud Foundry and other platform-as-a-service technologies as a great way to do certain kinds of applications: things that are very engaging for the user, things that have to change and keep up with the pace of expectations more quickly. But you're probably not going to move your checking and financial systems over to platform as a service; those will probably remain traditional IT, or maybe start to use infrastructure as a service, maybe some container-as-a-service type stuff. The point I want to make here is that cloud offers many different types of technologies to help you drive greater automation and drive your deployment pipeline, and you're probably not going to pick just one; you're probably going to pick a mixture. So we're trying to account for how you deal with that challenge of many different kinds of deployment models against your cloud environment. In my particular area within IBM, we focus on DevOps; UrbanCode was a company that was acquired and brought into IBM several years ago. When we look at DevOps, one of the key things we see is that there are three simple questions: What are you going to deploy? How are you going to deploy it? And where are you going to deploy it? For each of the three pillars we saw earlier, you can ask and answer each of those questions. Like I said before, I think ultimately you still have to account for how you do delivery, regardless of which pillar you choose. And what do I mean by that? If we look today at the organizational model, we typically see applications having been optimized with greater automation than the infrastructure layer; that's where infrastructure as a service hopefully begins to address that challenge. We've seen that problem optimized where applications can be developed in more agile ways. 
They can have more fine-grained updates that are continuously made to them over time, and we can get to the point where we're continuously deploying those applications against maybe existing physical systems. But below the line, if you're not leveraging infrastructure as a service or a cloud-based delivery model, if you're still more traditional IT, we often still see very long, complicated, manual life cycles. And even when clients start to adopt virtualization and cloud, we often see them still put a ticketing system in front of it. So let me ask: in your organizations, how many of you use cloud or virtualization to some extent? Okay, and how many of you actually have access to do a self-service request of that cloud infrastructure, versus having to file a service ticket and wait for a human to go run the virtualization, stand it up, and still manually configure it? How many of you have access to that kind of self-service? Actually a pretty good number compared to other conferences I've seen, but I think the challenge is that there's still a lot of manual process: still using virtualization, but not as optimized as it could be. What we see with technologies like Heat, or Docker Compose, or even the manifest format from Cloud Foundry, is an opportunity to begin encapsulating all of those different resources into a set of artifacts. And so the roles that were traditionally working through an administrative type of process to configure new systems or deploy new software now become roles that operate around a set of artifacts: they can file defects against them, they can version control them. Before you introduce cloud, the things on the right look very different, right? 
Hardware versus platforms versus actual business applications: because they look different, they were managed differently. But as you start to convert over to a fully software-defined infrastructure, whether that's IaaS or containers or platform as a service, you can start to manage them all in a more consistent fashion. I think this is really the promise, and also the challenge, of how we leverage cloud to deliver better software over time. So what we've been looking at is how we enable our clients, who may have already been focused on the DevOps problem with traditional IT, to begin to take advantage of cloud in more efficient ways. We want to be able to leverage technologies like Heat, and I'm going to show you some live examples of what I mean by that, to create full stacks that are composed of many different kinds of resources: virtual images, software-defined networking, actual storage, object storage, block storage. We've also begun to extend the Heat engine to understand other things, like our own notion of how we manage software deployment, and also Amazon's notions of things like S3, which we can make compatible with Swift, or ElastiCache, or relational database as a service. We'll see those additional kinds of resources coming into play. What I think is interesting is that when we look at a stack, it may not be all of one kind of thing, like all infrastructure as a service. If I start dropping resources that deal with ElastiCache or relational database as a service next to it, that drives home the idea that applications are going to be blended together through many different technologies, not siloed. Once we create these stacks, the idea is to assemble them into multi-tier, scalable environments. Now, is anyone familiar with Conway's law? All right, a few people. 
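As a rough illustration of the kind of full stack described here, a minimal Heat (HOT) template might wire a virtual image to a network and block storage. This is a hedged sketch; the resource names and sizes are hypothetical, not taken from the demo:

```yaml
heat_template_version: 2013-05-23

parameters:
  image_id:
    type: string
  flavor:
    type: string
    default: m1.small

resources:
  app_net:
    type: OS::Neutron::Net

  app_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: app_net }
      cidr: 10.0.0.0/24

  app_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10        # GB of block storage

  app_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image_id }
      flavor: { get_param: flavor }
      networks:
        - network: { get_resource: app_net }

  volume_attach:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: app_server }
      volume_id: { get_resource: app_volume }
```

Everything in the stack, compute, networking, and storage alike, is just another resource in the same document, which is what makes it version-controllable as a single artifact.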
Back in the 1960s, a guy named Mel Conway came up with the idea that architecture follows organizational structure: if you have a four-team organization building a compiler, you're going to end up with a four-pass compiler. So when we look at the kinds of artifacts we're going to assemble, it's just the physics of it: if you have database engineers and web engineers and networking engineers, they're all going to create artifacts specific to their level of expertise, and then you're going to have some level of integrators. In the picture we had this notion of the integrators and the specialists. Almost every organization I talk to has a different label for these things, but they almost always have these roles. A specialist is someone who knows something very deeply, but not very broadly; an integrator is someone who pulls all the pieces together but really doesn't know how any one thing works, just how they plug together. The artifacts we start defining for these software-defined environments will ultimately reflect the organizational structure you have today: if you've got several siloed teams around those different functions, they're going to create artifacts specific to their functions, and then we're going to pull them together. And ultimately, what we've tried to bring to the technology that's in OpenStack is a focus on portability. If you look at Heat today, it works really well for OpenStack. How can we leverage the things we like about that language? It's very open, very community-driven; actual people can contribute code to it or help influence its direction, and it's not controlled by a single vendor. But we also want to leverage it in a way that lets us create templates that are portable between OpenStack, Amazon, SoftLayer, and other cloud environments. 
In fact, we've even built content to support native VMware, simply because many of our clients have native VMware and may not be ready to move to cloud. When we leave the OpenStack Summit, while we have a lot of passion here, there are still many people that are way further back than we really think they should be, and we want to help pull them forward. All right. So this is going to be the first part of what we'll focus on: looking at how we create Heat templates that can actually be deployed across many different technologies. Within IBM, we obviously have Bluemix, which has support for Cloud Foundry, Docker, and virtual images. I'll also show you an example deploying to Amazon and to different on-premises cloud technologies, and how that works across each stage of your deployment pipeline. Another thing I'll show you is how that notion of software delivery is not simply standing something up one time, but actually managing it in an ongoing fashion in the running environment. So let me do a quick preview and then we'll jump into the live part. This is UrbanCode Deploy with Patterns. It's a web-based editor built on top of Heat: each of the objects and figures that you see corresponds to a Heat resource back in the template, and we can move between a diagram view and a source-based view. All of the elements in the palette are actually being read from the OpenStack API, so virtual images, networking, storage, et cetera, are all things that are discovered and can be used to create the final template. We're also layering in the concept of an application and understanding what environments the application has. We can create new environments for these applications, and then, from here, provide all of the parameters that are exposed by the Heat template. 
What we actually see here are things like availability zone, the image flavors, et cetera, and we'll see we have some content assist to help track those down as well. When we provision a template, we might be targeting it at SoftLayer, in which case we would provide SoftLayer-specific information and feedback, or at Amazon. One of the things we've really tried to focus on is that regardless of which cloud target you're hitting, you get lots of feedback, help, guidance, and syntax highlighting. We've also extended the Heat resources that are available. How many people are generally familiar with the concept of Heat and Heat plugins today? Good, a lot of hands. The net of it is that the Heat engine itself can be extended through plugins, and as good community citizens, our goal in helping clients leverage that technology is not to fork it or change it in a way that's incompatible with the open source, but simply to use the plugin mechanism to create new value on top of it. Some of the things we've created include ElastiCache, which provides a common caching-as-a-service capability on Amazon supporting Memcached as well as Redis, and object storage. We've designed some of these pieces, like the virtual machines, the networking, and the object storage, so that they're compatible: a template that describes object storage in Heat can be deployed against Amazon for S3, or against Swift if you're using OpenStack. Same thing for VMs, same thing for networks. And then relational database as a service: being able to define either a MySQL or PostgreSQL database back end for your application. Of course, those are also exposed in the palette, discovering that information from the Amazon environment. 
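To make the plugin mechanism concrete, here is a rough Python sketch of the shape a Heat resource plugin takes. A real plugin subclasses `heat.engine.resource.Resource` and lives in one of the engine's plugin directories; since that package isn't importable here, a minimal stand-in base class is defined inline, and the type name `IBM::Storage::ObjectStorage` is purely illustrative, not the product's actual type name:

```python
# Hedged sketch of a Heat resource plugin. The Resource stub below
# stands in for heat.engine.resource.Resource so the shape is
# runnable on its own.

class Resource:
    """Stand-in for heat.engine.resource.Resource."""

    def __init__(self, name, properties):
        self.name = name
        self.properties = properties
        self.resource_id = None

    def resource_id_set(self, rid):
        # Heat records the physical resource ID this way.
        self.resource_id = rid


class S3CompatibleBucket(Resource):
    """A portable 'object storage' type: the template properties stay
    the same, while handle_create() makes provider-specific calls."""

    def handle_create(self):
        # A real plugin would call the provider SDK here (e.g. boto
        # for S3 or python-swiftclient for Swift). We just fabricate
        # an ID to show the flow.
        bucket_name = self.properties.get("name", self.name)
        self.resource_id_set("bucket-%s" % bucket_name)
        return self.resource_id


def resource_mapping():
    # Heat discovers the types a plugin module provides via this hook.
    return {"IBM::Storage::ObjectStorage": S3CompatibleBucket}
```

Because the engine only sees the type name and properties, the same template can resolve to an S3-backed or Swift-backed implementation depending on which plugin is registered.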
The other thing we focused on was how we provide simple versioning. One of the things that was kind of surprising to me, maybe not too surprising, is that in many operations circles, if you're not already at the edge of understanding software-defined networks or software-defined environments, source control doesn't exist, right? In the broader community, it really doesn't exist. So how do you expose something like Git as a source control system for templates in a way that doesn't require learning a lot of arcane Git command-line syntax? We may all love Git, but we've all kicked it a few times too, I'm sure. The net of it is that we can expose all of this directly in the web browser: see the various repositories, see the history, and see the comparisons as part of the tool. So with that, let's switch over and show some of this live. What we're looking at is the capability we've been talking about in UrbanCode Deploy with Patterns. In the diagram editor, in this example, we have the route table, the network, virtual images, and components. Each of these things, again, is simply backed by a Heat resource type. But what you'll notice is that the way we've leveraged the Heat architecture allows us to create additional properties on top, so that we understand not only the OpenStack pieces but also other pieces, for instance Amazon or SoftLayer or native VMware. The idea here was to make it easy to create content either from the text editor or from the diagram editor, and as I move between these views, the focus is kept in sync. 
Here I can click back and forth without a lot of trouble, because what we find is that most of the roles engaged in creating these things, depending on whether they fit into that integrator category or the specialist category, either want a very high-level overview or they want to be down in the weeds, right? There's really no in-between, so our goal was to support both models of working without a lot of changes. In this example, by default, I just happen to have it connected to Amazon. We can list all the various regions, and we present these in much the same way that you would see Keystone regions. If I select our OpenStack cloud, we have one region defined in it, which is also deployed in the SoftLayer environment. All of the content here on the right are things I think you'll recognize: the images, the networks, and the security groups. All of this is actually coming from my Amazon account, and it will change based on which Amazon account I'm connected to. On the left, we have a rich tree view that makes it easy to navigate. How many of you have created Heat templates by hand? How many people thought that was an awesome, wonderful user experience? Okay, one guy in the back. You rock, but for me, I've been in software for a long time, and for all of the things like linking up IDs or parameters, I wanted to know right away if there was a problem, if the parameter didn't exist. Or, since I'm working against several different clouds, what happens if my type isn't defined in that particular OpenStack environment? So the goal is to provide that kind of feedback immediately in the web browser, with syntax highlighting, and things like undo and redo work like you would expect, so I can undo any kind of editing, whether it's in the source editor or the diagram editor. It's very easy to find things quickly. 
Here, if I'm looking for things that start with JKE, I can either drag this into the text view or the diagram view. Now, the other thing I want to show you is this running live, and part of what you'll see is that we're not simply provisioning the content against the cloud; we're also provisioning the software. For this, we're bringing in a product called UrbanCode Deploy with Patterns that allows us to see our application. This application is made up of various components, and those components know their versions, they know how to be installed, and of course they're exposed in the palette as well. So let's go ahead and kick off a deployment against Amazon. I have noticed the Wi-Fi is a little bit shoddy when this dialog first comes up, so apologies for the delay. Here we have a database image. We're going to pick our flavor; here the list is showing me all content from Amazon, and in a moment we'll see it from OpenStack. We're going to pick a default gateway ID, pick the SSH key to deploy against it, and we've already got images selected for the web image and the database image. We'll click on Provision. As this kicks off, it's actually executing calls down into the Heat engine, creating the request to do a heat stack-create, but instead of creating OpenStack resources, it's going to be creating Amazon types. We can do this for different clouds, but I show this because most people are looking at Amazon and OpenStack together. We can see this view, which is more or less a cleaned-up view of the heat stack-list, but we can also switch over to our application and get a much nicer presentation of what's going on. I can see this request is currently running, and as I click into it, I get a breakdown of everything that's part of the template. 
As I dig into the servers, I see all of the different pieces of software and their associated processes as they run, which gives you a much nicer view of what's going on and what your total elapsed time is; easier, I think, than using the Heat stack command line, for sure. Not that there's anything wrong with that, but it just provides a bit more feedback. Now let me go back and switch over to OpenStack. Here I changed my cloud target, much in the way you might shift the region or the project you're working on inside of Horizon. Now this is the same template, no magic, I haven't changed anything; in the source, again, it's all the same Heat resource types. When I click on Provision now, because I've pointed at the OpenStack target, I'm going to get feedback from OpenStack; in fact, you may have noticed that things like the images are presented based on what's in my OpenStack cloud. So I'll see flavors that are OpenStack flavors, and I'll see the images from my OpenStack catalog. What's neat is that the template can be parameterized such that it's portable: I can use it across not only different OpenStack environments but completely different cloud technologies as well. So we'll go ahead and click on Provision for that. I didn't change the name here at the top, so we'll just remember that this one's the OpenStack example, unless I already have one provisioned with that name, in which case I'll get that feedback here. Let me pick a new image, and here we'll change this name. Actually, I've already got a predefined configuration file: when we're talking to a particular kind of cloud, we can capture its configuration in a way that makes it easy to link up, and it's made some default selections for me. Some things I can still override, substituting in different things like the network. Actually, no, I apologize, that's the wrong configuration. Let me drop out of that. 
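The portability described here comes from keeping cloud-specific values out of the template body and feeding them in per target. A hedged sketch of that split, with entirely hypothetical image, flavor, and key names, might look like:

```yaml
# template.yaml: the cloud-neutral body declares what it needs
parameters:
  image:
    type: string
  flavor:
    type: string
  key_name:
    type: string

# openstack-env.yaml: values supplied when targeting the OpenStack cloud
parameters:
  image: rhel-6.5-base
  flavor: m1.medium
  key_name: demo-keypair

# amazon-env.yaml: values supplied when targeting EC2
parameters:
  image: ami-1a2b3c4d
  flavor: m3.medium
  key_name: aws-demo-key
```

The same template body is provisioned against either cloud; only the small per-target parameter file changes, which is the "predefined configuration file" idea in the demo.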
Instead, we'll just go with the defaults there. All right, cool. So we'll kick that off. Now, over in my environment for the application, I'm going to see the Amazon example, which is coming online, and I'm going to see the other environment here that was provisioned to OpenStack. If I go over to my Amazon console, I'll actually see the virtual machines that are coming online from a moment ago. And not only did we create the virtual machines, which are currently initializing here, we also created all of the other network content as well. If I break out to my network layer, we actually defined several security groups as well; in fact, what I want to show you is down here: all of the different subnets that are required, whatever their subnet ranges are, and the entire VPC itself are defined as part of this process. In OpenStack, when I go and view the instances, we'll see the same kinds of networks provisioned out here as well. Here are the ones that we'll see come online in a minute. Actually, I think my demo gods are being a little angry with me there. So while that's coming up, let's switch over; I want to show just a few other highlights. In addition to defining the full networking of the environment and the virtual images, we've also been looking at adding support for things like Windows: being able to generate the right kind of user data script for Linux versus Windows. That's a little bit of a newer feature. And then also looking at composition. In particular, if I have a set of those specialist roles working together, I might have some of them create topologies that are just pieces of the overall solution. For instance, the actual network topology is one piece; it doesn't include any virtual machines. And the application topology is a separate piece that has a reference to the network, which is supplied as a parameter into it. 
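Heat's template composition supports exactly this split: a parent template can instantiate each team's piece as a nested stack and pass one stack's output into another as a parameter. A rough sketch, with hypothetical file and attribute names:

```yaml
# parent.yaml: the integrator's view, wiring the specialists' pieces
resources:
  network_tier:
    type: network_topology.yaml        # nested template from the network team

  application_tier:
    type: application_topology.yaml    # nested template from the app team
    properties:
      # the network is supplied as a parameter, not redefined here
      network_id: { get_attr: [network_tier, network_id] }
```

The application template never has to know how the network was built; it only declares a `network_id` parameter, which is what keeps the pieces independently ownable.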
So again, this is reflecting the organizational structure through the artifacts that are created. From here, we'll actually be able to import the various pieces. Now, if I were doing this by hand, there are a lot of things you'd have to keep straight. In this example, we have the one piece that we've imported and its various parameters; I can jump into the source very easily, and I can see the content of the diagram very quickly. We'll go ahead and save this, and then let's import the application piece. Now that we're pulling it in, from one diagram we can see all of the various segments of this Heat architecture. If I select the target object, we're actually selecting the segments that are reflected in the diagram. All of the source required for the nested pieces, the application topology, the network topology, and the piece here that's simply referencing the building blocks, is all located in one place. All right, any comments or questions so far? Yes. So this is part of that plugin architecture. In addition to examples like RDS or ElastiCache, where we're creating completely new resource types that never existed before, for all of the core types, like image, network, subnet, block storage for Cinder, and object storage for Swift, we've actually created a parallel architecture for EC2. Each of those types knows how to talk to the native Amazon API, which enables us to create and read the Heat template; but when it's parsed in, we use the Heat resource registry to bring our types in to actually execute it. The net of it is that as a user, I don't have to change the template: I might have a different configuration file, but the template itself remains the same with all the same content. That's correct. If I ran this straight from heat stack-create, all the same technology applies. Right, and then I can trigger it either from the designer or from the application view in that case. 
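The resource-registry mechanism mentioned here is standard Heat: an environment file can remap a type name to a different implementation without touching the template itself. A sketch of how such a mapping might look; the plugin type names on the right are illustrative, not the product's actual names:

```yaml
# amazon.env: swap in EC2-backed implementations for the core types
resource_registry:
  "OS::Nova::Server": "IBM::EC2::Instance"
  "OS::Neutron::Net": "IBM::EC2::VPC"
  "OS::Swift::Container": "IBM::EC2::S3Bucket"
```

Passed alongside the stack-create request, this environment file is the "different configuration file" in the answer above: the template keeps its portable type names, and the registry decides which plugin executes each one.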
Under the covers, it's still the same Heat engine that's executing. One more question. Sure. No, Heat still requires Keystone, but for the Amazon case it simply does the initial "can I accept this request?" check, and from there each of the types knows how to interact with the API key. That's part of what we're providing through this context at the top: understanding the cloud authorization. Each cloud provider, like Amazon or SoftLayer or VMware, has its own slightly different set of parameters; we can pass these in as global parameters to the heat stack-create so that they don't affect the template, and the types know how to interpret them. All right, I'm slowing down a little bit here, unfortunately, but let me switch back over, because I also want to cover the content around Docker. In addition to what I've shown you here, we've also been experimenting on Bluemix with containers and virtual images. IBM Bluemix started out primarily as Cloud Foundry but has been expanding quite a bit. What's neat about this is that, again, with that DevOps question of what, how, and where, we now simply have more concise answers: Docker images form the what; the how is simply running the Docker containers, maybe using Mesos or Kubernetes or something else to manage the clustering and distribution; and ultimately there's some notion of where to deploy it. In the simple case, everyone can actually go sign up for free today and experiment with using the container service with your own built-in pipeline. In this example, we actually have a build, staging, and production stage in the pipeline, and we have a Docker repository backing the images that are provided in this environment. 
You can build new images into it, and then you can define the process for how you promote an image forward, binding it to IPs and routes as needed so you can expose that Docker-based application very quickly on the broader internet. Now, this works great for many use cases, particularly for experimentation and dev/test. You can also look at it longer term for production; it's still in beta, so I wouldn't recommend production right now, but we'll get to that point. The other context where we see it providing value is using the same notions of the application and the inventory that we were looking at a moment ago, where I can click in and see each of my environments and exactly what's deployed in them and the versions of software, but applied to where I either push images out to the public registry and deploy through the pipeline there, or push them into my actual local Docker environments where I'm running production workload. As we expand past that, there's actually a lot of work we've done within Magnum, and I want to invite Ton up to show and talk about some of that. All right, Ton. Thanks, Michael. Okay, so my name is Ton Ngo. I'm based out of California at the IBM Silicon Valley Lab, and I've been working on Heat and, lately, Magnum. What I'd like to do is take a few minutes to look at what's coming next in terms of support for containers in OpenStack. Magnum is a new project that became an official part of OpenStack just about two months ago. Magnum is containers as a service: what it does is deploy a cluster of hosts for you, and then the containers are deployed on top of that cluster. After that, you can run lifecycle operations on your containers, like pause, resume, update, and so on. The project is pretty active: there are some 40 developers contributing, and in the Kilo cycle IBM contributed quite a bit. 
There is a full talk and demo on Magnum by Adrian Otto, the PTL, and Steven Dake on Thursday, so I won't steal their thunder here. But what I'd like to do is go through a short example of how it works, and then we can talk about how UrbanCode Deploy can support Magnum very nicely. A couple of very interesting things about Magnum: it does intend to support multiple back ends and different container managers, like Kubernetes or Swarm. For hosting your containers, you can run on VMs in OpenStack, you can run on bare metal, and you can even run within containers, if that makes sense. For the context of what we're talking about here, what Magnum really helps with is orchestration. The container that Magnum creates is a first-class object in OpenStack, which makes it much easier to integrate with the other services in OpenStack, like Keystone, Cinder, Neutron, and so on. The container managers like Kubernetes or Swarm also handle some level of orchestration on their own, so Magnum, by interacting with those managers directly, basically gives you two choices: you can work at the container level, just like that, or you can work at the full-stack level, containers with full infrastructure like networking and storage and so on. And the most important part is the last point: all of the artifacts you created before to describe your containers, your YAML or JSON files, are reused assets; there's no change. Okay, so I'll walk through a quick example of how Magnum deploys WordPress with a load balancer. This is very much work in progress, but it will give you an idea of how it works. Here we're going to use a Kubernetes cluster, and the example actually came from the Git repo for Kubernetes. So the first thing we do is a magnum bay-create: Magnum will drive a set of Heat templates that go out and create your cluster for you. 
So it deploys the VMs that run the master node and the minion nodes, and it also deploys the Neutron networking: the subnet, the router, and so on. Next we do a pod create, and with that we give it the YAML file that describes the pod. What that does is bring up a Docker instance for MySQL, and it also attaches a Cinder volume to that instance. Next you run a service create (number three there) and give it a YAML file that describes your service. A service is a concept specific to Kubernetes, but basically what it does is publish the port and IP that represent your database. Then we do another pod create: we give it the YAML file that describes WordPress, which brings up the Docker instance for WordPress, and that talks to the MySQL instance. And finally, we do the last service create, which exposes the WordPress instance to the external internet.

What's interesting here is that the YAML for the WordPress service specifies a load balancer. So what Kubernetes does is talk to Neutron in OpenStack and bring up a load balancer that sits in front of the service, and after that Kubernetes also takes care of managing the members of the load balancing pool and so on. From this example we see two key points. The first is that we can do really full orchestration with containers; here we see it's integrated with Keystone, Cinder, Nova, and Neutron, and that's pretty powerful. The second point is that the YAML files you see here are just the files you already created to describe your containers, and they're used here as-is.

Okay, so with that example we can envision how UrbanCode Deploy can help orchestrate containers in OpenStack. As we saw with the Heat template that Michael showed, we can drag and drop the Heat resources, edit them, and so on.
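Putting the walkthrough together, the five commands in the example look roughly like the sequence below. The manifest file names are illustrative; the pod and service YAML are taken as-is from the Kubernetes WordPress example:

```shell
magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 2

magnum pod-create     --manifest ./mysql-pod.yaml         --bay k8sbay
magnum service-create --manifest ./mysql-service.yaml     --bay k8sbay
magnum pod-create     --manifest ./wordpress-pod.yaml     --bay k8sbay
magnum service-create --manifest ./wordpress-service.yaml --bay k8sbay
```

The WordPress service manifest is the one that requests a load balancer, which is what causes Kubernetes to drive Neutron LBaaS on your behalf.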
The five Magnum commands that we saw in the last example are going to become Heat resources, and you can treat them the same way: drag and drop them and so on. For the manifests that describe your containers, we can envision that UrbanCode Deploy can help the user connect and edit them, and provide support with validation, hints, and so on: all the nice tooling support. With that, UrbanCode Deploy would take your description, the Heat template, and deploy it onto your environment. Looking ahead a little bit to the Liberty cycle, we expect that Magnum will become much more mature, and we think a lot of this support is going to come together very nicely. So with that, let me hand it back to Michael.

All right, thank you Tom, appreciate it. The key thing I do want to point out in terms of availability: what I've shown up to this point with virtual images is part of what's available today, while what we're talking about here is a little bit on the future side, so keep in mind that there is some future content here. When we look at the Heat template, we can also use the native Docker Inc. container resource as part of a design today as well. I think Docker Compose is a great format for pulling together a bunch of containers into a complete system, but Heat brings some interesting additional capabilities, like being able to include not only Docker containers but also things like object storage in the same example (under the covers that's a Swift container or an S3 container), along with additional networking, et cetera. That allows you to put together architectures that aren't strictly just the application content from the container, but also begin to leverage these other things that sit outside of the traditional application box. Yes. Mm-hmm, sure. And a little bit of context there.
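A minimal sketch of what such a Heat template might look like, assuming the contributed Docker plugin is installed. The Docker endpoint, image, and resource names here are invented for illustration:

```yaml
heat_template_version: 2014-10-16

resources:
  # Object storage sitting alongside the application,
  # outside the traditional application box
  app_assets:
    type: OS::Swift::Container
    properties:
      name: app-assets

  # A Docker container managed as a native Heat resource
  web_container:
    type: DockerInc::Docker::Container
    properties:
      docker_endpoint: "tcp://192.0.2.10:2375"
      image: myapp:latest
      name: web
```

Because both resources live in one template, the container and the Swift container share a lifecycle: one stack create brings up both, and one stack delete tears them down.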
So if folks aren't too familiar with the larger portfolio, PureApp and IBM Cloud Orchestrator have been around since before OpenStack was here, longer than we've been involved, right? And so there is overlap in the technologies for how we describe patterns. Where we're headed, though, is all in on OpenStack and all in on Heat. There's already work underway to look at how we take that content, which was maybe a little bit ahead of where the open community was back in 2008, 2009, and move it forward now. A lot of this work is looking at not just where we've been, but how we take what we have and get it to where we need to be. Gotcha; I can repeat the question if it's helpful. I know we're kind of sensitive on time, so is it okay if we table that for a minute? I'll hit a couple more slides and we'll talk right at the end.

All right, so the key thing I wanted you to take away from this last segment is simply that not only do we anticipate having support for images, software, and object storage, but we're bringing containers into it as well. Looking forward, I think we'll also see examples of things like Cloud Foundry, quite honestly, coming into Heat: being able to understand an application as sort of a black box that we can deploy alongside everything else. So if I can move this along a little bit: again, it's the same kinds of questions. There's a set of artifacts, and there's a way to deploy them; in that case, it's a `cf push`. It's a much simpler deployment model, but you still have artifacts, and you still have configuration that you need to manage.

I have a couple of slides here on hybrid cloud that I don't have time to cover, but I do want to get in a couple of quick plugs. One is that there are lots of talks from IBM this week, so definitely come check out some of the work we're doing. We've made a huge investment in OpenStack, as many others have, and we believe very heavily in it.
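As a sketch, that Cloud Foundry deployment model really is just an artifact plus a manifest; the application name and settings below are invented for illustration:

```shell
# manifest.yml, checked in next to the application artifact:
#
#   applications:
#   - name: myapp
#     memory: 256M
#     instances: 2
#
# With that in place, the entire deployment step is:
cf push myapp
```

The artifact and the manifest are still configuration you need to version and manage per environment, which is the same pipeline problem as with images and containers, just with a simpler deploy command.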
The slides are already available on SlideShare if you want to download them from the link; it's just a convenient way to get access. I'll give you a second to take a picture. And the last thing I want to make sure I get out is that we're hiring. So if you're interested in joining our team, we would love to have you. I'll move back to the slide, but if you want to take a snapshot of this link as well, ibm.biz UC Jobs, you can take a look at the openings we have available in both Cleveland and Raleigh, North Carolina. We'd love to have you join our team, and I'll go back to the other one. So maybe that's a bad sign, but in either case we'd love to have some input. We're going to be at the IBM booth tonight at six o'clock doing more demos and more content that I didn't have time to show here, and my team will be available every day at two o'clock for the rest of the week at the IBM booth. So if you'd like to come by and talk to us, we'd love to have your input. Thank you.