All right, folks, good afternoon. My name is Mo Abdullah. I am part of the IBM team that works day in and day out on OpenStack, both from the product management and the strategy point of view. I've been with IBM for a very, very long time and have a long heritage in software: software related to our message queuing and middleware, et cetera, software related to application development, and then more recently around the operations side. And so, having combined experiences in those, and with the evolution of cloud bringing a lot of them together, they said, we need people who have been through this life cycle to come and try and work on cloud. And I started on cloud about four, four and a half years ago, which was just about the time when the industry was really starting to rally around, work towards and, I guess, galvanize around sets of standards in this particular space. It was about four years ago that we started to see the ideas and so on of OpenStack. So before I get into my material and my content, just a couple of advertisements and a couple of notes for us to talk about. Earlier today, the IBM team, and if you had not seen it, the folks are walking around, talked a lot about what we are doing with the community. A lot of the actual work that's going on in terms of the areas that we're focused on: OpenStack core, what we're doing on RefStack, a lot of the areas that perhaps the practitioners focus on. What I'm going to do is actually shift it a little bit and talk about how IBM is leveraging OpenStack as part of our portfolio and how we see it evolving in terms of some of the products that we deliver into the market. I will give you a little bit of an advertisement, although Tammy is not here. We have beautiful t-shirts, I'm told. Somebody told me that they feel very good.
They may not look great if you don't like sports cars, but apparently they feel wonderful, and I saw a whole bunch of people putting them underneath their jackets last night when it was very cold walking outside. So please do go to the booth. Oh, there it is, see? We don't have it, but we're having it. So please stop at the booth, because you'll get a chance to also see some of the stuff that I'm talking about, as well as connect with some of the experts who really work on this stuff. The second part of this session, once I'm done with the product, is actually gonna be live demos that will show you a lot of these pieces. So we're gonna have Scott and other members of the team come right after me and show you a lot of these pieces. Last but not least, please make this session interactive. I am very good, as the IBM team knows, at standing in here and speaking for four hours. I promise you, please don't let me do that to you. Stop me as we're going along. Ask me questions, say, this doesn't make sense. You know, what about that? And we can go through them. And we have a couple of team members in here who can talk about it. Now, before we get into any of the products and IBM portfolio and so on, a little bit of context setting. This really helps frame how IBM thinks about the cloud space and how it thinks about OpenStack in that particular context. A few interesting messages. As far as the clients that we've been working with, a lot of them really have been gravitating towards cloud in the context of solving a problem. A problem which is around how they start to innovate very, very fast, which is really what cloud started with. An area in which you can get access very quickly to resources, to the infrastructure you need, as well as an area where you do rapid development, rapid iteration, new styles of applications. But also what we started to see is a lot of customers came at the cloud from a different angle.
The angle of looking at how they actually optimize their environment: you know, virtualization, starting to do a little bit more self-service, starting to automate a lot of the processes that govern their IT environments. Very quickly, I would say that this particular camp that started with cloud around the idea of modernizing their applications and optimizing their environment started to see that they needed to also leverage the cloud for innovation. And in the same way, a lot of the companies that started with cloud for quick DevOps, quick styles of applications, mobile delivery, social apps, analytics, quickly realized that the value of those applications can only be extended when they really are connected into and part of these classical systems. Thus emerged, from an IBM perspective, the whole notion of hybrid early on: that a cloud is not just about a public cloud environment and the infrastructure you need to set up for it, or a private cloud environment and the infrastructure and processes you need, but rather, how do you actually create a hybrid cloud environment? Make sense? I know we are in Europe, but believe me, it is, what, six in the morning my time, and in the US we need a little bit of feedback. Make sense? Wonderful, thank you. This always helps me. So, now, of course, hybrid is another interesting word, because I think it's an even more confused word than the word cloud itself. So when you think about the word hybrid, we actually just have a couple of things that I wanna level set as to how we think about it. Number one, we don't see hybrid as just something that is infrastructure-oriented, on-premise and off-premise. We think of hybrid as really transcending all sorts of environments that you have: your traditional custom environments, these new-generation integrated systems, dedicated on-premise, dedicated off-premise, or public cloud environments. Any one of those environments, in a mix and match, presents an opportunity for hybrid.
And when you really dig deeper and you think, what am I thinking about when I say the word hybrid? What we found working with a lot of our clients is that they really have three challenges, which are at the core of what we're doing with OpenStack and the rest of our solutions. The first is there is a strong integration challenge at the infrastructure layer, at the application layer, at the process layer. The second is really a challenge around how do you start to govern and enforce rules across these multiple environments? People understood how to do these things in the classical data centers and even how to evolve them. But now that they have disparate environments and applications, how do you actually start to put governance around this? And the third is how do you achieve some level of portability between these environments? Portability comes not only because you would like to migrate, but because you would like to start to have some flexibility in where and how you apply certain things. A classical example around portability is actually driven heavily by European clients. I don't think anybody is a stranger to the regulation and compliance rules that are emerging. Just in Europe, we have about 104 country-level, municipality-level compliance rules that are emerging around data protection, privacy, where you store things. And they're not just happening at the application level; you're thinking, oh, it's a banking application. No, believe me, not. Even Twitter data at the moment is, in some countries, under jurisdiction for privacy and private information management. So your little app that used to do some analytics and sentiment, and you store it in a little object store and you put it in there, I'll tell you what, that needs to be protected and governed and so on.
So when you start to think about portability, there's a lot of interesting scenarios that come up in this hybrid world, and the technologies you need to enable it, where you may be doing an application in one space and your data is stored in another. Make sense? Any comments, questions? No? IBM questions or comments? Rest of the team? All right, so let me take it a step further, because again, this will help us refine the work we're doing and put all of our offerings and the work we're doing with OpenStack in context. So these are examples of four key use cases where we actually see a lot of people driving this hybrid notion. The first, and actually you're gonna see a demo of this right after, is the application split. This, from a European point of view, not worldwide, but from a European point of view, is the number one case. Where you actually have a lot of people starting to develop their mobile application in some new DevTest, new private cloud, sometimes with partners, but they're actually putting another part of the application, perhaps their backend system, perhaps part of their business process that's governed, that's under compliance, or even their data, in a different location. So this whole application split, I think, introduces a lot of both technology challenges from an OpenStack perspective, but also process and governance challenges. The second one that you see is around DevTest. This is, I think, what a lot of people relate to or started with. I wanna be able to create a DevTest environment out here where I'm iterating, where I'm not really incurring my own actual expense and so on. And then once I'm done with these particular workloads, I wanna capture them, and I wanna bring them back and run them in my pre-test or inside of the lifecycle I have around my production environment.
I'll give you a real-life example, one that I can talk about, not this beverage company, which is blinded (although I think there's only two beverage companies in the world, so that's a 50-50 guess for you), and that is GameStop. I think in some places in Europe they exist, in others they don't. Do people know GameStop? Okay. If you have children in the United States, believe me, you know GameStop, and I have two of them. So GameStop is a shop that basically is on the High Street, and they sell video games. And they also sell second-hand video games. And one of the things they wanted to do, as an example, is to start to think about the way they would do business experiments for how they would drive more customers to come into their stores, because they noticed a trend that fewer and fewer customers are coming to the actual shop. Most people are going online, going to their competition, et cetera. So think about this in the old days. In the old days, what they told us is that it would take many thousands, hundreds of thousands of dollars to do one experiment, because it involves acquiring some hardware, loading software, time waiting, et cetera. They would run an experiment, they would wait for the business results, and then they would come back and decide whether they're gonna move on it or not. So they were bottlenecked by their infrastructure and their ability to actually experiment quickly. And they didn't wanna do one, they wanted to do 15. So you're asking me, what is an experiment? Let me share with you what I mean by an experiment. So here is an experiment. One of their experiments, and it's real life, I actually tested it on my own son when I was working with them. So a lot of the kids buy a game, and if they are like my kids, about two months later, they're bored of the game. Batman is no longer the flavor of the month and they want to buy FIFA 14 because the World Cup has come.
So you have all of these little games sitting in there on the shelf collecting dust for people like me to clean. So one of their experiments was, wouldn't it be great, given that our target audience in this case is teenagers and children, that we actually allow them to have in their mobile application, in the GameStop application, a way to scan the game. And when they scan it, it will actually tell them that the GameStop store which is down the road will buy it from you for $5. And the GameStop that is three kilometers away will buy it from you for $10. And the GameStop behind your street actually will not buy it, because they already have so many of them in stock. And if they do so, the kid will come to the father or mother and they will say, oh, I would love to sell my game, so I can just earn $5 to buy another one. That's an example of a business experiment. It's a great experiment when you think about it. So this is exactly what they actually leveraged. They started to build a lot of these DevTest environments. They started to pump out these betas of the mobile applications to trial them out. They did it on 15 cases and they down-selected to three business areas that they wanted to invest in. Now, they didn't want to run it all off-premise; they came back and ran the three that they selected on-premise, because they wanted to scale it, harden it, et cetera. Workload portability, this is a very interesting one. This is very simple. Two use cases for this particular client. The first one is around the fact that they saw a lot of growth in the United States and in some other areas around the healthcare law. When the healthcare law came in, they were getting a lot of people subscribing and getting subsidies. And so what they decided is, they said, we would like to prioritize our on-premise infrastructure for the critical business applications, and we're gonna move our DevTest or some of the other pieces of our infrastructure off-premise.
And we're gonna take this workload, push it in there, dedicate our environments to serve the critical business needs, and then bring it back. The next phase for them is to actually say that they're gonna expand, and they actually are expanding into Europe. And so how do they take some of their assets and easily start to stand them up inside of London and get them going without having to take many, many months? And then finally, of course, is the classical spillover, and this is very, very typical of analytics. People in many cases would run some computations and so on, and they decide that they're about to run out of the allocated capacity and they wanna spill over and run elsewhere. Make sense, guys? Yes? Thank you. Okay, so how does this relate then to what we're doing and driving day in and day out? So clearly there's a lot of work in terms of maturing components of technology and infrastructure. But more importantly, there's more to be done to realize these types of use cases, which we've learned one by one. So from an IBM perspective, first and foremost we recognize that in order to realize these use cases that we just talked about, you really need to start to find a way to level the playing field around all of the infrastructure components that you have. The second thing we realized is that you have to find ways in which you start to connect what's happening in the new development environments with the classical development environments, because if people continue to develop those two things in isolation with no connection, we're gonna introduce the next wave of challenges around hybrid. How do we connect these two? And then the third is that you really need to have an end-to-end view of a DevOps lifecycle, recognizing that these are operating at two different speeds, but somehow you need to connect them. So this in a nutshell represents IBM's thinking around how we organize a hybrid portfolio.
At the heart of this, like any other vendor and so on, we didn't just endorse OpenStack from a point of view of, it's a great technology. It actually is a wonderful technology; we did a lot of studies before we jumped on it and decided to partner and grow and put in our resources. But the real promise of OpenStack was its ability to actually achieve a number of those things with the design that it had. First and foremost, around APIs that are standardized across these environments. Second, implementations, best-practice implementations around the basic elements of storage, compute, networking, but also around additional services, what the industry is now terming infrastructure services plus, like databases, patterns, some level of orchestration, all of those pieces that are coming in addition to and building on top of that core. And then finally, and I think that's very, very important, all of the pieces that you would need in terms of the development life cycle, such as database elements, the ability to integrate with dev tooling, and the ability to actually start to harmonize the management layers. Questions, comments? This is my most complicated chart. You must have questions. Okay, well, it gets denser in text, which you can read. So one thing that did occur when we went through this presentation with another group up in the Meridian is they started to talk a little bit more around what we really mean in terms of these new environments and classical environments. From an IBM point of view, just to be clear, a lot of this new environment is, in IBM terms, the IBM Bluemix, so this is IBM's implementation of a PaaS based on Cloud Foundry. And all of that new compositional style of delivery, where a lot of things are services that you compose. Today this particular world, if you really dig into it, whether it's Cloud Foundry based or the rest, is purely PaaS oriented, middleware services along with it. From our point of view, it's something that needs to be expanded.
And then when we think about classical applications, this is the bread-and-butter three-tier applications that you structured, organized, understood. And in this context, we think about those as patterns, patterns that you've hardened and repeatedly deployed around Oracle databases, WebSphere applications, SAP environments, and so on. So let's talk about this part first and how OpenStack is starting to change what we already had in market to enable this hybrid notion. So first, in order for a lot of the clients who are entering into this cloud journey to really start building their private clouds and a lot of their cloud-enabled environments, we have an offering here called IBM Cloud Manager with OpenStack. For all intents and purposes, and to simplify it, and you can read through a lot of the text in the speaker notes, it really is an OpenStack distribution that we've extended in several ways, to IBM platforms amongst others, and added some componentry, working with our clients, that we learned that they need. So if you think about it, all the regular API access, nothing changed, nothing modified, all preserved, all protected, but one piece of componentry in there is the whole notion of adding better UIs and management in front of it: monitoring capacity, defining projects, defining resource pools. The second is what a lot of our clients and specialty service providers asked for: approval chains, so that you can actually start to dictate how you allocate these resources, not just willy-nilly. And last but not least is this notion of resource management.
When you're deploying OpenStack at scale, and Tammy is behind me by the way, she has the t-shirt now, she's our expert and lead architect in here if you wanted to dig a little bit more, when we started to work with a lot of clients at scale, we quickly learned that one of the key aspects of leveraging a cloud in an appropriate manner is figuring out how you lay a policy on placement of a lot of these resources, managing things such as put my data here and don't put it there, put my application here and don't put my application there. And that's really one of the marketing-term, I guess, value adds that we have within this particular package. That particular capability is an example of what we had here with one of the European banks, and as I keep saying, this whole notion of compliance and so on comes into play. They essentially looked at how they could quickly leverage a private cloud environment for some of the experimental use cases that we talked about, but at the same time, they wanted to have strong policy enforced between what they put against their certified in-house environment versus the OpenStack private cloud that they built with IBM Cloud Manager with OpenStack. One of the things that was very interesting for them as they started to work is that they exhibited the next tier of requirements, which isn't just help me stand up an OpenStack private cloud quickly and help me manage against two different environments with all the right approvals: they extended it and started to do what our other clients do, which is they started to look at OpenStack and say, how do I connect it into some of my traditional systems? And in their case, and in this other client's case in China, they actually have a lot of Power systems. Power is exhibiting growth on many levels, especially around some of these new application styles, for performance, for scale, et cetera.
So a lot of people are starting to say, how do I build an OpenStack environment that connects into all of my Intel cores, et cetera, but also into some of my Power environments? In this instance, one of the things we learned as the IBM team in building OpenStack is that, of course, the whole API gives you the transparency, the consistency, but it does not mitigate the need for the virtualization administrators to also tinker with and work with some of the core virtualized resources. Many of the people who, for example, use VMware continue to use some of their VMware environment. This is also true of Power. So we took the opportunity to introduce the actual Power virtualization center, very much in line with, consistent with and very integrated with the OpenStack environment, such that somebody who's working with OpenStack and Power really feels that this is both a seamless experience and there is connectivity at the resource management layer. Any questions, comments so far? Yes. It's actually both. It's actually all of the CPU resources, also placement of certain VMs within conditions, and you can do it in the context of a hybrid that is spread on- and off-premise. And actually, you'll get to see it. I only say it because I hear it a lot from our clients: this particular capability is one that many, many people, once they up their usage of OpenStack, quickly fall into needing. And so we've been enhancing it very quickly in terms of the resource scheduler and resource manager. No, today this is part of the IBM proprietary value add. Yeah. We had to find something to charge you for. But the truth is that it's the client's choice: if they want to use the stock Nova scheduler, they don't take advantage of our features, or if they want to go with the features, then they take advantage of the add-on. So we don't force anybody to use the value add. Any other questions on this? It's coming soon. No.
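The placement policies discussed here are part of the IBM value add, but stock OpenStack exposes a related, simpler mechanism through the Nova filter scheduler: host aggregates tagged with metadata, matched against flavor extra specs. A minimal sketch, assuming a Juno-era nova.conf with upstream filter names only; this illustrates the generic mechanism, not the IBM proprietary resource scheduler:

```ini
# nova.conf -- stock OpenStack placement via the filter scheduler.
# Shown for illustration only; this is the generic upstream
# mechanism, not the IBM proprietary resource manager.
[DEFAULT]
# AggregateInstanceExtraSpecsFilter matches flavor extra specs
# against host-aggregate metadata, e.g. "only place EU-regulated
# workloads on hosts tagged compliance=eu".
scheduler_default_filters = AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
```

An operator would then tag an aggregate (`nova aggregate-set-metadata eu-hosts compliance=eu`) and a flavor (`nova flavor-key eu.small set aggregate_instance_extra_specs:compliance=eu`), so VMs booted with that flavor land only on the tagged hosts.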
So, and actually you should see this in my charts, but I just realized that while I have a beautiful yellow line here, it's not appearing here, and I don't know why. IBM Cloud Manager with OpenStack really focuses on how you build your own private cloud. IBM Cloud Orchestrator sits up here and works across, and we'll come back to talk about that one in a minute. Because we see an evolution in how people are entering this. Some people enter it here, some people enter it here, some people enter it here, some people start in the middle because they realize they need a kernel, some people come at it here and some people come at it here. The beauty is it's all built on OpenStack, so no matter where you start, you plug in the next component. This is something that, by the way, we learned. Part of the pains, and just a word about IBM: in IBM, when you work, they don't put you on a project for like two years and then tell you, oh, you can move to another project. They sort of, how do I say this nicely? They make you live with the legacy of your choices. So if you actually screwed up the first time, they make you fix it the next time. So prior to the cloud, a lot of the team who's been working on this had gone through many evolutions of automation and runbooks. And so we learned through a lot of that experience how to actually build this from a cloud perspective. And one of the key things we learned is you gotta modularize this. You really have to make it much more Lego-like in terms of the capabilities that snap on and add. And that was built in as part of the architecture of everything we deliver. I used to have much more hair, I promise you, before I worked on this. I really promise you, really, I'll show you pictures. My mother doesn't recognize me. My older brother, older by six years, people think he's the younger one. But anyway. Oh, sorry, yeah. Yeah, absolutely.
So SoftLayer actually is in this particular part here, where you actually have your public instance and also your own dedicated off-premise environments. And I'm about to go into that one next. Now we've covered how you start your cloud with what we have with Cloud Manager, as well as what we have with the new extensions into Power as a hypervisor manager. So now my yellow box, by the way, is pointing here. One of the main things that we actually announced at this conference is the introduction of what we're calling IBM Cloud OpenStack Services. For a different set of clients, one of the things we heard a lot from them is, you know what, I don't have a big army of people. And while I have two or three guys who are digging into OpenStack and learning, it's still hard for me to get started. And I don't really wanna spend all this time to build up the infrastructure myself. I don't wanna set it all up. I don't wanna become an expert in all of the networking and so on and so forth. I just want to use OpenStack and I want to manage some applications that I'm getting in there. And so what we did as a team is we responded to a lot of that feedback and we introduced this offering. What's interesting about this, however, is that it's not a shared public environment. It's actually dedicated off-premise to the client. So all of the needs you have around privacy, security, isolation, et cetera, are preserved in a per-client instance. Now when you look at this and you start to dig a little under the details, there are a lot of aspects that we learned from early betas with some of the clients. So for example, we learned what are the standard topologies that most people would wanna start with if they're doing spillover or application splits, or if they're trying to do just DevTest, so that you can have a whole preconfigured environment.
The second thing we learned, which I think is very important and comes in the context of hybrid, is that through this particular environment, where most people would wanna start, we've introduced, or are introducing by the end of the year, a connection directly into an on-premise OpenStack. So through this dedicated environment, you can start to actually lay down a local region that you can work with and start to achieve a hybrid between resources on-premise and in this dedicated environment. So quickly we're introducing a hybrid OpenStack, raw OpenStack, for the customers. Because what we've observed is people wanna start with OpenStack, they wanna learn, they wanna start to push some of their images and environments in there, and then what they wanna do is start to do a lot of these hybrid use cases we talked about. So our goal, the reason I keep saying this, is our goal is not necessarily to just do OpenStack for OpenStack's sake. OpenStack is critical in achieving the hybrid, which is where we see everybody going. So our next sets of investments are driven by these types of needs. Make sense? Any questions on this one? How does it fit in terms of OPEX? I'm sorry? How does it fit in terms of OPEX? In terms of OPEX, or TCO? TCO. Oh, TCO. We actually have some numbers around some of the OPEX pieces, how much we charge, that we can share with you, a comparison between that versus your own environment on-prem. But you basically charge through a monthly arrangement. It's very much in a cloud style, like most of this stuff. Yeah. And we can get you a lot of the details. And when you get one of those environments, just for the record, as I keep saying, it's not a public environment, it's dedicated. So you would request it, and I think 72 hours later they'll email you back with all your credentials and the set-up environment. Okay?
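At the OpenStack level, the hybrid layout described above comes down to one set of credentials fronting multiple regions, one local and one in the dedicated off-premise environment. As a sketch, a client-side clouds.yaml might look like the following; every name and URL here is hypothetical, not an actual IBM Cloud OpenStack Services endpoint:

```yaml
# clouds.yaml -- hypothetical two-region hybrid layout. All names
# and URLs are illustrative only, not actual IBM service values.
clouds:
  hybrid:
    auth:
      auth_url: https://keystone.example.com:5000/v2.0
      username: demo
      password: secret
      project_name: demo
    regions:
      - on-prem-local       # region laid down on the local OpenStack
      - dedicated-offsite   # the dedicated off-premise environment
```

The same credentials then work against either side, e.g. `openstack --os-cloud hybrid --os-region-name dedicated-offsite server list`.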
I assume. And oh, by the way, you can actually go and trial it for yourself. We have a little trial that you can get going at no cost, and you'll see how it works. So the next piece is, and I'm going to deviate a little bit because there's something I said earlier that I would like to stress. So now you have OpenStack across all of the IBM environments. Whether you have your own traditional hardware, Power, z, your own Intel boxes, that you want to layer OpenStack on top of; or you've acquired one of IBM's integrated systems, which is what we call the Pure family, PureApplication, PureData, these are vertical cloud-in-a-box environments for particular workloads; or you decided to create your own dedicated private cloud from scratch; or you decided to get your own dedicated off-premise environment. All of these have now been enabled through an OpenStack implementation or OpenStack APIs. Now that we did this, and we feel really good about a lot of the early implementations and how people are starting to use them, we hit the next level of requirement when it comes to hybrid, which is: how do you expose all of this infrastructure as a set of services to the developer? Because if all you've done is created a private cloud with improved capacity and all, that's fine. But if you're still operating in the old mode of, if you like, gating it from your developers, I don't think you've really struck the right chord in terms of unleashing the ability to innovate and use this environment. So one of the things we actually also announced at the conference is the introduction of these core services from OpenStack right into the IBM Bluemix environment. So today the developers who are going into Bluemix and innovating rapidly, building these mobile applications, can actually start to access all of these infrastructure services as composable elements in their environment.
We're starting with ObjectStore, and we're quickly adding, between now and February, additional services that will bring infrastructure right up into that whole Cloud Foundry environment. So our definition of a PaaS is changing. It's not a PaaS in the classical term. It's a cloud environment that you need to give your developers to be able to work with and interact with these services. A classical example is the Twitter piece that I mentioned. A developer goes into Bluemix, and actually I should ask, how many people have seen or heard of Bluemix? Non-IBMers? IBMers, put your hands down. Non-IBMers? All right, if you have not, I really encourage you to go and give it a look. It's free. Actually, if I want to market anything, I'll market this to you. It's gotten great rave reviews from people who try to use it. Very simple. Just get some of your developers working on it. You will see that through that particular environment people are building these types of mobile applications. And I'll tell you, we had a client in Australia that almost immediately came back to us and said, I'm building the stuff, but I can't store a lot of the information, because today Bluemix is a public, if you like, service. I cannot store a lot of this Twitter information much longer, because I'm now starting to use it for querying and doing personal data correlation. So they came to us and they said, I would like to be able, when I'm using the object store, to point to the object store right in my environment instead of the object store that's sitting in SoftLayer. So what you start to see with the service is that the programmer who is writing all of their code and storing a lot of the information to an object store can very quickly say, don't point to this one, which I got started with, point to this one, with zero code change. That's the type of flexibility and innovation that we're starting to give these developers.
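The "zero code change" point is worth pinning down: the application only ever talks to whatever object store its configuration names, so repointing from the SoftLayer-hosted Swift to an on-premise one is a config edit, not a code edit. A toy sketch of the idea in Python; the endpoints are made up, and a plain dict stands in for the actual Swift API (a real app would use python-swiftclient):

```python
# Sketch of the "zero code change" repoint. Endpoints below are
# hypothetical; a dict stands in for a real Swift connection. The
# application code never mentions WHICH Swift it talks to -- only
# the config does, so swapping stores means swapping configs.

PUBLIC_SWIFT = {  # e.g. the SoftLayer-hosted object store
    "auth_url": "https://public.objectstore.example.com/auth/v1.0",
    "container": "tweets",
}
ON_PREM_SWIFT = {  # e.g. an on-premise Swift behind the firewall
    "auth_url": "https://swift.internal.example.com/auth/v1.0",
    "container": "tweets",
}

def store_tweet(config, store, tweet_id, payload):
    """Application code: write a tweet into whatever Swift the
    config points at. Nothing here changes when the endpoint moves."""
    key = "{}/{}".format(config["container"], tweet_id)
    store[key] = payload          # stand-in for a Swift PUT
    return key

# Same application code, two different deployments:
k1 = store_tweet(PUBLIC_SWIFT, {}, "42", b"...")
k2 = store_tweet(ON_PREM_SWIFT, {}, "42", b"...")
assert k1 == k2 == "tweets/42"   # app behavior identical either way
```

The design point is that credentials and endpoints live entirely in configuration (here, the two dicts; in Bluemix, the bound service), which is what lets the administrator swap the backing store out from under the application.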
On the other side, of course, the administrators and the ops team are the ones who decide how they actually serve up this OpenStack object store facility and what controls they wanna put around it. This is an area we're heavily investing in. It's a growth area for us. We would love to recruit users. We'd love to recruit your feedback. What do you care about? Especially if you are the provider of these services. You're the one who's setting up a lot of these on-premise object stores or compute resources and so on. What do you wanna see? How do you wanna see it work for your developers who may be interacting with this Cloud Foundry environment? Question? Yes, Swift. Yes, sorry, my little text here says Swift. Yes. So today, the first instance to get people going in this beta is SoftLayer. Very quickly, you're gonna see the same sort of, in Bluemix we call them hexagons, actual things you click on to use. The hexagon is gonna be able to surface other types of object store services to you. Here's my local environment, use that one. Make sense? Any other questions? I am very excited about this. I'll tell you, developers are very excited about this. We were with a Meetup group out of Docker and they were like, woo-hoo, you know, when am I gonna get some of the Docker services in here, and so on and so forth. Sorry. Yes, yes, thank you, thank you for that. So what drives people to areas like Bluemix? I think what's important to point out is that PaaS, in the classical way of thinking, is that middle tier that's connecting a lot of the SaaS environments as well as the infrastructure. Now, as it started, people wanted to just build common runtimes, Node, Python, PHP, get my application running quickly. The true value, however, started to emerge when you're able to extend a lot of these backend systems, SaaS properties, or start to interact with different infrastructures through that one development environment.
So in the case of Watson, it's the classical thing that many banks, many insurance companies, many healthcare providers have the equivalent of. From an IBM perspective, Watson is a honking big thing. It's beautiful, I'll tell you, but it's a honking big thing. It takes probably half of this room, and not everybody can afford one of their own. But it's a beautiful system that you can start to feed to do Q&A and so on. So what they did is they said, we're gonna make this backend system, this backend environment, extensible and available to all of these providers to start to innovate on. So they exposed services through Bluemix. Now these platform developers don't need to think or buy or acquire or understand all of the language of a Watson. They can simply say, oh, there is a Watson Q&A service, as an example. I'm just gonna make it part of my application and call into it. And every time I use it, of course, I'll pay some charge, but I now have a great Watson-empowered service as part of my application. So that's what we see Bluemix evolving to and growing as, in terms of enabling a lot of that. Any other comments, questions? Okay, how am I doing on time, by the way? 'Cause I'm terrible at time. Okay, and I don't have my clock. Yes. All right. So we talked about IBM Cloud Manager with OpenStack. We talked about the IBM Cloud OpenStack Services that we just announced. We talked about the direction that we have in terms of enabling a hybrid experience between the two. We talked about how we're starting to surface some of these capabilities into these new environments as services that are just prepared and in the context of the developer. The next piece that I wanted to talk about is the question that was raised around the orchestrator. So this actually has classically and traditionally been the main entry point for a lot of cloud environments, because people are challenged with it.
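As a hedged sketch of how a Bluemix developer "just makes it part of my application": Cloud Foundry platforms expose bound services to the app through the VCAP_SERVICES environment variable, and the app looks up the credentials for whatever service label it was bound with. The `question_and_answer` label and the credential field names below are illustrative assumptions, not the actual Watson service contract.

```python
import json
import os


def bound_service_credentials(label):
    """Return credentials for the first bound service with this label.

    Cloud Foundry platforms such as Bluemix inject service bindings as
    JSON in the VCAP_SERVICES environment variable, keyed by service
    label, with each instance carrying a credentials block.
    """
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    instances = services.get(label, [])
    if instances:
        return instances[0].get("credentials", {})
    return None


# Illustrative use against a hypothetical Watson Q&A binding:
#   creds = bound_service_credentials("question_and_answer")
#   if creds:
#       ask_watson(creds["url"], creds["username"], creds["password"])
```

This is the mechanics behind the "hexagon": binding the service puts the credentials into the app's environment, and the developer calls into it without ever standing up a Watson of their own.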
Classically it was about, how do I upgrade and update my operational processes and IT processes through a set of standardized, customized-to-my-audience service catalogs? Click and get an environment. The truth is that started to evolve from just the notion of a self-service catalog into the ability to actually integrate a lot of these heterogeneous systems and start to apply some policy and management. So in the case of Cloud Orchestrator, when we first introduced it, we used to ship it with our own OpenStack, the IBM OpenStack distribution. And we did that mainly because we had hardened some of these connections. As we worked with the community and evolved a lot of the rest of these certifications and compatibilities between the stacks, the first thing we did is disconnect the orchestrator from the IBM distribution. And we now allow it to talk to any OpenStack-compliant environment, plus others. So if you look back at this picture, the orchestrator now can start to become the place in which you define your standard, if you like, IT process, policy, pattern, and be able to actually run it, deploy it, monitor it across any one of these environments. All OpenStack as first class, and then of course other environments like AWS, et cetera, et cetera, that are not. The second thing that became very critical is, as we found customers trying to use OpenStack more and more and move from dev/test into production, they started to realize that they need to plug it into their Remedy, they need to plug it into ServiceNow, they need to plug it into IBM Control Desk, they wanna plug it into some enterprise monitoring system, and they wanted to wrapper a lot of these OpenStack-based services with hardened operational components. And again, that's the second thing we emphasized and really put into this orchestrator at its kernel.
The idea is that as you provision your core compute, networking, configuration, set it up, et cetera, automate all of that, you can snap on these additional services. An example of work we're doing is with a team at Juniper. One of the things that you'll find, for example, is that as you work through this particular environment, one of the things you wanna do in pre-configuration and post is work with the network, configure a lot of the virtualized resources. And that is an example of how we've also taken the orchestrator and started to open it up for teams like Juniper and others to plug into it and become part of the end-to-end automation. And if you have not seen the IBM-Juniper piece in particular, I encourage you to; again, it's something that we launched a bit ago and will continue to iterate on, and we've been getting a lot of good feedback from many clients. Last thing last, which I would like to talk about: the other thing I said is that we get punished or rewarded in IBM for the prior mistakes we make. There are a lot of these orchestration-type solutions in the market, and many of them really are on par with each other. I mean, believe me, when you wake up in the morning as Marco does, and that's his job in life, you keep an eye on what your friends and your enemies are doing here and there. And a lot of these things are on par, but there was one big difference that we made as part of the design of the IBM solution, that we believed in from the outset, learning from our past mistakes: an orchestrator is only as good as its extensibility, the ability for it to talk to all of these targeted endpoints. So one of the key design points we made here was that the library that you would get out of the orchestrator is open. So if you have your own Chef recipes, we don't just make them compatible, we actually onboard them as first-class automation packages into the whole tier so that you can automate the processes.
So this is really where we see the next tier in terms of a hybrid cloud. Again, all built on OpenStack. So you're telling me, Mo, what's the OpenStack piece in here? You just told me that you disconnected the OpenStack and you're talking to it. So is it just that OpenStack helps orchestration and you really are just taking advantage of OpenStack? That's what it feels like. Actually, no, there is another important piece that OpenStack plays in the orchestrator solution. Now if you think about orchestration, you think about infrastructure and you say, a lot of automation: set up my storage, configure the volume, give me some keys, go configure my switch, deploy a VM, monitor the VM, classical stuff. You go a level up and you say, but actually I wanna capture a lot of that in a workload, because I have a particular workload that I've defined that wants to go against these environments. And I wanna wrapper it in an IT process. So not only do I wanna configure the low-level infrastructure, not only do I wanna define it, but I also would like to be able to plug in: if there is a problem, call this help desk; if I need approvals, connect that. That part around the definition of the pattern is now all Heat-based. Heat is becoming the language that not only transcends the API but creates compatibility in terms of how we start to understand the whole topology, from the application all the way down to the infrastructure resources. Make sense? And again, when you think about the IBM investments from an OpenStack team point of view, that's where we brought a lot of our expertise: work we've done around TOSCA, work we've done around patterns, really doubling down in terms of some of the work we have in here. So today, actually, it does not fully map yet.
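To ground "the pattern is Heat-based," here is a minimal HOT (Heat Orchestration Template) sketch of the kind of topology an orchestrator could deploy: one server plus a Swift container, captured in a single declarative definition. The image and flavor names are placeholders, and this is my illustration rather than an actual IBM pattern.

```yaml
heat_template_version: 2013-05-23

description: >
  Minimal sketch of a Heat pattern: one VM plus an object store
  container in a single topology. Image and flavor names are
  placeholders, not real catalog entries.

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: my-app-image        # placeholder image name
      flavor: m1.small

  tweet_container:
    type: OS::Swift::Container
    properties:
      name: tweets

outputs:
  server_ip:
    description: Address of the provisioned server
    value: { get_attr: [app_server, first_address] }
```

Because the whole topology lives in one template, the orchestrator can deploy, monitor, and wrapper it with IT-process steps (approvals, help desk hooks) as a unit rather than scripting each resource by hand.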
So TOSCA. When we did the work early on with TOSCA, we had a strong need and a requirement to connect the dev-tools cycles, things like UrbanCode, et cetera, where we're creating a lot of these assets in the application tier, to the environments you're deploying to, CMDBs, et cetera. And what we learned through TOSCA is that part of the handover has to be a definition that allows you to understand how to orchestrate the various components, including the application. When we came to work with Heat, we realized that Heat kinda came at it bottom-up. It started with the infrastructure and resource definitions, and it lacked a lot of the application tiers. So we took the people that we had working on TOSCA and that community, and we actually brought them over and integrated them right into the Heat team. And we're starting to retrofit into Heat some extensions, et cetera. They're all compatible, compliant, forward-looking, and they define a lot of these application extensions. And I can take you through more detail in terms of showing you what the Heat template looks like. So today, we don't, we can do that, yes. But that's more a point in time. And then moving forward, we want native Heat to be able to absorb a lot of these definitions. For managed service, but I may want to use a private managed service that takes in both. Actually, excellent point. So the question that was asked, and I know we're at time, so I'm going to cover my last thing and then I only have one more piece. But it's a very interesting question. One of the things that comes into play here when you're doing hybrid is, how do I pay and price and charge for a lot of these things? And from an IBM point of view, the way we're going about it depends on your entry point. So if you chose to start with an on-premise solution and wanted to extend out, then it becomes a licensed feature. If you started off-premise and you want to drop down, it becomes resource-consumption based.
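For contrast with Heat's infrastructure-first, bottom-up view, a sketch in the TOSCA Simple Profile YAML style shows the application-tier layering being described: an application hosted on a web server hosted on compute. Node type names follow the TOSCA Simple Profile; the template is illustrative only, not an IBM or OpenStack artifact.

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  node_templates:
    my_app:
      type: tosca.nodes.WebApplication   # application tier, absent from early Heat
      requirements:
        - host: web_server

    web_server:
      type: tosca.nodes.WebServer        # middleware tier
      requirements:
        - host: vm

    vm:
      type: tosca.nodes.Compute          # the only tier classic Heat modeled natively
      capabilities:
        host:
          properties:
            num_cpus: 1
```

The handover point the speaker mentions is exactly this: the dev-tools side describes the application and its hosting chain, and the orchestration side needs a definition rich enough to map each layer down to deployable resources.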
So, closing here for the team, a couple of areas we're looking at that we already talked through. The first is, and I think I'm going to focus on this particular column: integration, which we're doing a lot of work on; brokerage, in terms of pricing, metrics, and capacity. This is a hot space for a lot of the teams that are starting to bring the development and infrastructure components together. Containers: you heard us; our session, I think, stands on its own, and if you have not seen it, I recommend you do, around how we are working with containers, how we're integrating them with OpenStack and how we're surfacing them. That's one of the new areas that is going to become important when it comes to hybrid. With that, folks, I will tell you one more time: please go to the booth, get your t-shirt, it's a beautiful t-shirt, I keep telling you this, and please stay with us to actually see the demo, because seeing is believing with a lot of these things. Thank you so much. Thank you.