All right. Hello, everybody. Good afternoon. You're at the session for IBM and Cloudsoft: how companies of all sizes leverage OpenStack-based private clouds. I'm Osmeir Mohamed, an offering manager at IBM. I came into IBM through the Blue Box acquisition, so we've been building private clouds based on OpenStack for more than three years, and we've been delivering them on IBM infrastructure for almost two years. I'll let Duncan introduce himself, but Duncan is a partner and customer of Blue Box Private Cloud. Duncan?

Yeah, hi. Good afternoon. Thanks for joining us. Duncan Johnson, founder and CEO of Cloudsoft. I'll introduce the company more formally in the second half of the presentation, but I'm delighted to be here sharing the stage with you, Osmeir.

All right. Perfect. So what we'd like to do: I think everybody's had their fill of technology and OpenStack, and we'll cover a little of that today, but what we wanted to convey is how the businesses running on OpenStack and managed private cloud are transforming, and what they're going through. So we're going to focus mainly on the companies. The ones that are public, we'll share their names; the ones that are not, we'll anonymize. I'll do that for the first half, and in the second half I'm going to let Duncan tell his story, so you hear it from the horse's mouth. We've got a reasonably sized crowd, so if there are questions, we'll take them toward the end; we'll definitely have ample time for that.

All right, so just a quick primer on IBM's offerings around private cloud. Again, these are built on the foundations of Blue Box, and the first two that you see, Bluemix Private Cloud and Bluemix Private Cloud Local, are offerings that came in through the acquisition.
We modified them to run completely on SoftLayer, and SoftLayer is now known as Bluemix Infrastructure. We use community OpenStack. IBM, even prior to the acquisition of Blue Box, was a huge contributor to OpenStack, and we built the service on top of the community's contributions. As you saw in the keynote yesterday, managed private cloud is where you have a joint responsibility between the customer and the vendor to deliver the cloud experience; I'll cover a little more on that. So for any customer that doesn't want vendor lock-in, wants community OpenStack, wants to deploy generic operating systems, or maybe wants to run a PaaS that's open source, we feel Bluemix Private Cloud and Bluemix Private Cloud Local would be optimal. The difference between the two: Bluemix Private Cloud runs in a SoftLayer data center. There are about 24 SoftLayer data centers in the world, and we can stand up Private Cloud in less than three days in any of those locations, while Bluemix Private Cloud Local runs in your data center. There are companies that must run behind their firewall, or have infrastructure they need to leverage behind their firewall, and we'll gladly manage that cloud for you remotely. We'll have remote hands, we'll VPN into the environment, but your experience between Bluemix Private Cloud and Local will be identical. The only difference is where the physical infrastructure sits. Moving on to the bottom of the list: we introduced a new offering because we heard from a lot of enterprise customers that require a vendor-specific distribution of OpenStack. So we partnered with Red Hat, and we really see this as the best of both worlds.
We bring our scale and expertise in running private clouds to the table, while customers that have a support contract, or have applications that require RHEL and the Red Hat stack, get the benefit from IBM and also the benefit from Red Hat. So if you need the RHEL OS, or you need to run Windows or SUSE, you want to run an enterprise database, you want to run OpenShift or Docker orchestration, that's the offering we would position for those customers. But again, the people supporting these, the automation we use, and the data centers we deploy on are identical between Bluemix Private Cloud and Bluemix Private Cloud with Red Hat.

In terms of OpenStack services, we support nine today, with a couple more being added in the next couple of months. Most of you are familiar with how your cell phone works: the way these services show up is that they'll just appear the next time you log in. We do the deployment of the services in the back end and then turn them on for you during the next maintenance window. So it's as simple as the experience you get on your cell phone, and also what you'd get in the public cloud. We try to make running and consuming a private cloud very similar to what you would see in the public domain. The only difference is that the bare metal infrastructure you're running on is dedicated to one customer.

The model we've arrived at after doing this for many years is what we call a shared responsibility model. Let's start with what IBM does. We focus on the things that run underneath the hypervisor. Here's a sample of things we would do: anything to do with the management and uptime of the cloud, we take that on. The oily bits that you don't want to touch, is how I describe it to customers. If there's a security breach, we will patch it.
If there's a failure in the bare metal infrastructure, we'll go replace it, right? We maintain a 99.95% SLA, and if we don't, we give money back to you. We manage the cloud because we know it's important to you as a business, and we'll provide you credits for not keeping our end of the bargain. We'll do upgrades and maintenance. For any of you who've built an OpenStack cloud, it's easy to get to the first milestone; the question is whether you can get to the second, third, and fourth milestones. We run hundreds of clouds. We try to keep them standardized, we learn from every single interaction we have with our clouds, and that pays dividends across our whole customer base. Technical support: any time you have an issue within OpenStack, an API question, or something's not working as expected, you call us, and we have experts ready 24x7. We'll also assign you a customer success manager to help you onboard; we have a five-week protocol for getting you onboarded with OpenStack. A lot of these things that people have a hard time with, or have to go through different vendors for, we try to bring into one consistent experience for all of our customers.

So what do you do as a customer? Well, anything to do with the virtual instances and the applications. We don't want to touch your data. We don't want to tell you what automation scripts or OSes to use; go ahead and use whatever you want, very similar to the experience you have in the public cloud. Backup of your VMs and your applications: you may have a tool you need to use, and we're not going to specify what that is. And any additional users you want beyond the cloud administrator.
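As a back-of-the-envelope illustration of what that 99.95% figure means in practice, here's a small sketch. The function name is mine, and the numbers are simple arithmetic, not contract terms:

```python
# Downtime budget implied by an availability SLA, e.g. the 99.95%
# figure mentioned above. Illustrative arithmetic only.

def downtime_budget_minutes(sla_percent: float, days: int = 30) -> float:
    """Minutes of allowed downtime over a period for a given SLA."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100.0)

for sla in (99.9, 99.95, 99.99):
    print(f"{sla}% over 30 days -> {downtime_budget_minutes(sla):.1f} min")
```

So 99.95% over a 30-day month leaves roughly 21.6 minutes of allowable downtime; anything beyond that is where the credits conversation starts.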
Across two or three years of running clouds, we've found a happy medium: customers do want control over their cloud, but there are also parts of the cloud they have no interest in.

All right, so that's a quick overview of the offering list. Let's see what we've done with our customers. First off is Treehouse. Treehouse is an education company based in the Pacific Northwest, and they provide online education for professionals who want to learn how to code, which is a huge growing market, similar to what you see with Codecademy and things of that nature. Treehouse was a customer of Blue Box prior to us being acquired by IBM. So one of the big things we had to do was move them from our data centers into an IBM data center. We physically had to move their cloud, and in the process we also had to upgrade it, so there were a lot of moving parts. We actually ran two clouds side by side for them to allow ample time to migrate. But what were they looking for? When we became part of IBM, they were very concerned: hey, are you guys going to become very curated and very specific about your implementation of OpenStack? That's why they wanted to keep their old cloud running. They literally wanted to compare, API call to API call, whether we were staying consistent. And we did. They were able to validate over a period of 30 days that it truly was open infrastructure, with backward compatibility around the API. We provided competitive pricing. We used different storage between our old data center and the new one: in the old data centers we had a vendor-specific SAN, while in IBM we use Ceph. So they wanted to make sure that whatever applications they ran before got the same IOPS; actually, we gave them more IOPS running on Ceph. And then they needed the same SLA as before, right?
These are a bunch of folks who code. They have no ops people on staff; they are completely dependent on us. And we were able to deliver that. The solution was IBM Bluemix Private Cloud, and the picture gives you a sense of what that infrastructure looks like. We built them infrastructure with separate control and data planes, so we had dedicated controllers running all the OpenStack services. They had a choice of different compute nodes because they had different kinds of workloads to deploy, so we built different resource pools to ensure workloads land on the right type of compute node. They had block storage, what we call a hybrid block storage node, running Ceph in the background. And then we have a pair of gateways; all of our clouds have a pair of gateways on the front, which is where we put the ACLs, the firewalls, and any VPN termination. Treehouse, I believe, has one location in a data center today, and there's potential for them to move to another one. As you'll see in Duncan's presentation, a lot of our customers tend to deploy more than one cloud.

The next one is Lixil. Lixil is a company in Japan; from a US perspective, they're the equivalent of the Home Depot of Japan. Lixil came about via a merger of five companies, so when they started engaging with IBM, actually prior to the Blue Box acquisition, they were already on a strategic path to figure out how to go from five different IT platforms to one. Where they landed was two major platforms: a platform based on OpenStack, where any net-new applications would be built, and their enterprise workloads that were not being refactored, which would stay on VMware. But what was clear is that they were going to get out of the data center business.
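The resource-pool idea described above is typically done in OpenStack with host aggregates plus flavor extra_specs, matched by Nova's AggregateInstanceExtraSpecsFilter. Here's a toy, self-contained sketch of that matching logic; all host, flavor, and pool names below are made up for illustration:

```python
# Toy version of OpenStack-style resource pools: aggregates carry
# metadata, flavors carry a matching spec, and the scheduler only
# considers hosts whose aggregate matches the flavor. Illustrative
# names; the real mechanism in Nova is richer than this.

aggregates = {
    "general":      {"pool": "general", "hosts": ["c01", "c02"]},
    "io-optimized": {"pool": "ssd",     "hosts": ["c03", "c04"]},
}

flavors = {
    "m1.medium": {"pool": "general"},
    "io1.large": {"pool": "ssd"},   # SSD-backed, fastest CPUs
}

def candidate_hosts(flavor_name):
    """Hosts whose aggregate metadata matches the flavor's pool spec."""
    wanted = flavors[flavor_name]["pool"]
    hosts = []
    for agg in aggregates.values():
        if agg["pool"] == wanted:
            hosts.extend(agg["hosts"])
    return hosts

print(candidate_hosts("io1.large"))   # ['c03', 'c04']
```

The effect is what the talk describes: an instance launched with an SSD-class flavor can only land on the SSD-backed compute nodes.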
So both their VMware platform and their OpenStack platform are hosted in an IBM data center, literally side by side. Focusing on the OpenStack piece: they made a strategic decision to go with OpenStack. They couldn't move to the public cloud, but they wanted an elastic, public cloud type of experience. We actually did a three-month POC with them in the Tokyo data center to prove all of this out. Very similar to what we did with Treehouse, they needed to see the IOPS, make sure their applications could run there, check API compatibility, and so on. And they were expanding, too. They had a five-year growth plan, and they wanted to make sure that as they grew month by month, we could deploy additional resources within the SLA that we provide, which is less than one day. So they could have additional compute nodes and additional storage nodes deployed and available to them in OpenStack in a day or less. We went through a lot of that, and it was a great win.

In terms of what that looked like: again, very similar to what we did for Treehouse, just at a larger scale. We had the same dedicated controllers and the same Vyatta pair, but instead of a one-gigabit Ethernet link into the data center, we had a 10-gigabit Ethernet link, and it was a dedicated circuit. They actually had a dedicated circuit from their location to the IBM data center, and it passes through the Vyattas before it gets to the OpenStack infrastructure. And they had a choice of different compute nodes. IO Optimized, just to give you an example, is our compute node with SSD-backed storage and the fastest CPU in our catalog. But the additional thing they needed was access to bare metal servers. They had workloads that could run on virtual machines.
That was the primary use case for their OpenStack cloud, but they also had some things that needed to run on bare metal, specifically databases. We don't have Ironic yet; you probably noticed that in the second slide I showed. But IBM, by virtue of having the SoftLayer infrastructure, can provision bare metal servers and connect them over the network. So even though you can't provision the bare metal servers through OpenStack, you can consume them by provisioning them via a separate set of APIs and then linking them over the network.

One interesting thing we had to do with Lixil was the last item on the list, which is VM live migration. They were long-term VMware users, very used to being able to move VMs as they needed. As you know, with modern elastic clouds, you generally just let the scheduler decide where a VM goes within the resource pool, and your application deals with any restart if needed. Well, they couldn't make that operational change, so we went ahead and enabled VM live migration for them. They don't use it very often; I think a lot of the time it's when, as they migrate workloads off their old infrastructure to the new one, they may have put a workload on the wrong compute node, and to free up some space they'll use it. It's not something they use every day, but it was something we had to innovate, deliver, and make available to them in that timeframe.

All right, so the last one I had to anonymize, but it's a real estate company in Asia. Unlike a lot of places in the US, where you're building on a small plot of land with one or two residences, this company creates city master plans. They take a huge swath of land and develop it, and along with it come schools, residential, commercial.
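A sketch of the pattern just described: order the bare metal through a separate infrastructure API, then join it to the same network as the OpenStack instances. The `infra_api` client object and its method names here are hypothetical stand-ins, not a real SoftLayer or Bluemix SDK:

```python
# Hypothetical sketch of "bare metal without Ironic": provision via a
# separate infrastructure API, then attach the server to the VLAN the
# OpenStack instances use. The infra_api object and its methods are
# assumptions for illustration, not a real SDK.

def provision_linked_bare_metal(infra_api, profile, vlan_id):
    """Order a bare metal server, then attach it to the cloud's VLAN."""
    server = infra_api.order_bare_metal(profile=profile)
    infra_api.attach_to_vlan(server["id"], vlan_id)
    return {"id": server["id"], "vlan": vlan_id}
```

From the application's point of view, the result looks the same as a compute node on the cloud's network, even though OpenStack never provisioned it.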
What they wanted to do, very similar to how we control the experience for our customers, was control the experience for their customers. Think of TV, audio, security; all of that was going to be wired together, so they needed a very modern infrastructure to build that out. But they couldn't host it in an IBM data center; part of their master plan was to have the IT inside the master plan itself. They're a real estate company, though; they don't know how to run IT. So what we ended up doing was deploying Bluemix Private Cloud Local for this customer. Okay, I lied: we are building it right now. Literally, we have people in Asia building out the cloud as we speak, and it's going well; we hope to have it lit up and hand the API endpoints to them before the end of the week. What they were looking for was absolutely location, so it needed to be local. But they also knew that at some point their customers would want to go to the public cloud, and unlike private cloud vendors, we have a public cloud, which is something they wanted to be able to tie in. So, for example, they could provide credentials to a customer that would work on their private cloud and also on the public cloud. Those were things they wanted to make sure were available to them. And then, for the commercial side of their city plan, they also wanted to make sure those customers could get more than just IaaS. That's where some of the other parts of IBM, the visioning, the Garage, consulting, and managed services, were the things they wanted to partner on. So while we're here to talk about OpenStack, a lot of what swayed the customer over to IBM was definitely the location and the value-add services around it. And what it looked like? Hey, it looks actually very similar to what we talked about before.
And the reason why is that we build standard clouds, right? As much as we see a lot of customers with very unique problems and very unique needs, and they start off looking very, very different, when you actually start building the cloud, it ends up looking fairly similar: a little customization, definitely different scales, but the underlying technology is the same. And speaking as an operator, that's how we can run all these clouds and upgrade them very easily; they all fundamentally look the same to us. So again, even though it's an on-prem solution, we still have the dedicated controllers and the same type of compute nodes, and instead of the hybrid storage, which is a mix of SSDs and hard drives, this one is pure SSDs. They wanted performance right out of the gate.

In summary, what I would say about a lot of these customers: obviously, they were looking to move to a modern platform. In the case of Lixil, it was deliberate to have two platforms. They understood that you can't put a square peg in a round hole. You can't take a legacy app and try to run it on OpenStack; as some of you probably know already, it's really hard. You can't try to make OpenStack run like VMware; there will always be things in the VMware stack that cannot be replicated on OpenStack. Having said that, in the case of live migration, we were able to replicate it and meet the needs of the customer. We also dealt with customers whose core competency wasn't running infrastructure. If you're an organization that has people in data centers, that deals with wiring and cabling and racking and stacking, a managed private cloud isn't really a good fit, because what we do is take away the things that you're not interested in.
If you're transforming yourself from a company that used to do that, and you want to move further up the stack, then managed private cloud makes a lot of sense. And then hybrid is definitely something we've seen a lot. We have customers that actually started hosted; I think the real estate company did their POC on the hosted platform because it was the easier, cheaper thing to do, but when we actually deployed, we did it locally, because the experience was the same. They got 95% of what they needed from the hosted offering, but they finally went local. And we're seeing customers run both local and hosted: production on the local cloud and test/dev on hosted. So we're seeing all sorts of combinations as we go deeper and deeper into the market. With that, Duncan, I'm going to pass it over to you and let you tell your story.

Thank you. So now we're switching to, I guess, our experience. I really liked that slide on the division of responsibilities, because it's been borne out by our experience, but I'll get into that in a second. We're not as well known as IBM and Red Hat, so here's a quick summary of who we are and what we're doing. Our focus is on hybrid cloud application management, and I'll explain in a couple of slides what I mean by that, but key for us is being able to model an application and then deploy it and, most importantly, manage it throughout its life cycle. I actually met with the GM for OpenStack at Red Hat this morning, Rakesh, and he said, well, that's really letting a thousand flowers bloom on this platform. I really like that idea: the platform is rock solid, but it's what you then do with it that interests us and our customers. We're very much involved in the open source movement.
We are in fact the founders of Apache Brooklyn. If you're familiar with the Apache Software Foundation process, you go through incubation, a kind of primary and secondary school, and eventually you graduate. So we're now a top-level project, which is very important for us and for the community, and that's really the foundation of everything I'm going to talk about. When I talk about the Cloudsoft application management platform, I'm talking about the commercial offering built upon Apache Brooklyn. We're also pioneers in what's known as autonomic computing. This is the idea that essentially you want everything fully automated, end to end, in much the same way that your autonomic nervous system keeps your heartbeat, core temperature, and breathing regulated, especially when you're standing on stage and don't really want to be thinking about those things. It's the same idea within IT.

So: model, deploy, and manage. Think of the model as the blueprint; in fact, we call it a blueprint. It defines the components that make up your application, the interconnections or relationships between them, any dependencies that need to be taken care of, and, most importantly of all, the policies you want to apply at runtime, really the best management practices for how to look after this application or service. You can think of that model as the class, if you're into object-oriented programming. Now you want to take that model, that blueprint, and deploy it somewhere, and at that point you instantiate a live model of that same blueprint. Now you've got real software components, you've brought them up correctly, you've wired everything up, you've made sure all the dependencies are dealt with, and it's running on your cloud.
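For a concrete sense of what such a blueprint looks like, here's a minimal Apache Brooklyn CAMP-style YAML sketch. The location name is an assumption (a pre-configured cloud location), and the entity and policy types should be checked against the catalog of the Brooklyn version you actually run:

```yaml
# Illustrative Brooklyn blueprint: a component, its scale, and a
# runtime policy attached to it. Location name is a placeholder.
name: web-cluster-demo
location: my-bluemix-private-cloud   # assumed, defined elsewhere
services:
- type: org.apache.brooklyn.entity.webapp.ControlledDynamicWebAppCluster
  initialSize: 2
  brooklyn.policies:
  - type: org.apache.brooklyn.policy.autoscaling.AutoScalerPolicy
    brooklyn.config:
      metric: webapp.reqs.perSec.perNode
      metricLowerBound: 10
      metricUpperBound: 100
      minPoolSize: 2
      maxPoolSize: 5
```

The policy section is the "manage" part of model-deploy-manage: it travels with the blueprint, so the same best practice applies wherever the blueprint is deployed.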
What you then want to be able to do is get data from it: sense what's going on, both from the application itself and from the surrounding environment. That may be through the platform, or through other network monitoring tools, but the idea is that this is a living, breathing thing, and you want data from it in order to manage it properly. Unless something is providing you with that data, you can't react to situations that arise and then take action. And that's the whole notion of management: sensing the environment and then affecting the environment are the fundamental concepts, and you do that on an ongoing basis. The blueprint is not fixed for all time. It's a bit like building your house and then wanting to add an extra wing, maybe two wings; maybe you've got divorced, so you need separate houses. That's a terrible joke; my wife will kill me when I get home, and it's being recorded. Never add a lift, that's my advice. But the idea is that those blueprints really do allow you to land and expand on any environment. That runs the gamut from physical, which can be bare metal or just plain vanilla bring-your-own servers, through virtual, then local or dedicated private environments: local being on-prem, dedicated or managed being something that's hosted for you. You probably heard in yesterday's keynote that we're now looking at private cloud in three different ways, including remote managed. The point is that whatever the flavor of cloud, public, private, virtual, or physical, our blueprints will run in those environments, and in some cases they'll run across many of them.

So this is a mind map of the Cloudsoft AMP product. I don't expect you to take it all in, but the idea is that you're able to connect through to private clouds, to enterprise infrastructure, and to public clouds.
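The sense-then-affect loop described above can be sketched in a few lines. Everything below is illustrative of the idea, not how Brooklyn itself is implemented; the thresholds and names are made up:

```python
# Minimal sketch of an autonomic policy step: sense a metric,
# decide against thresholds, and act on the environment by
# changing the node count. Illustrative only.

def autoscale_step(current_nodes, latency_ms, *, high=250, low=50,
                   min_nodes=2, max_nodes=10):
    """One pass of the loop: sense latency, apply policy, return new size."""
    if latency_ms > high and current_nodes < max_nodes:
        return current_nodes + 1          # effect: scale out
    if latency_ms < low and current_nodes > min_nodes:
        return current_nodes - 1          # effect: scale in
    return current_nodes                  # steady state

nodes = 3
for sensed in (300, 310, 120, 30):        # simulated sensor readings
    nodes = autoscale_step(nodes, sensed)
print(nodes)   # scales out twice, holds, scales in once: ends at 4
```

Run continuously against live sensor data, this is exactly the "living, breathing" management the blueprint's policies encode.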
You can also connect through to, and work with, things like Cloud Foundry and Kubernetes and blockchain. These are all examples of what you can do once you have an autonomic control plane and an application management platform. The fundamental thing is that the platform on its own is not very interesting, but when you build up sets of blueprints, it becomes much more interesting.

So why did we choose IBM and Red Hat? Well, if I wind the clock back, prior to the Red Hat and IBM partnership we'd already engaged with the Blue Box team. We met them in Paris before the acquisition, and in the course of getting to know them, they were acquired by IBM. I think we were the first IBM SoftLayer-based private cloud to be stood up; that was in San Jose, going on about 18 months now. But we didn't just stick with one location. We always wanted multiple locations, because one of the things we feel is really important is being able to deploy real-world applications across real-world infrastructure. We wanted to roll out a global private cloud, and that becomes a test bed for us; it also becomes a great proving ground for customer evaluations. Fast forward to, what, the end of March? At the end of March we upgraded our environment, and at that point we decided not just to upgrade the other two nodes; in fact, one of them we moved from Singapore to Tokyo, not physically, but we chose to relocate and reposition ourselves across San Jose, London, and Tokyo. And in London we felt we should work with Red Hat and stand up a Red Hat-powered Bluemix Private Cloud. So we did. As you described, when you brought a customer from their environment into the Bluemix environment, we did much the same thing, moving from the old environment to the new. We were running in London, San Jose, and Singapore, and that migration process went really, really smoothly.
So this is a depoliticized view of the world; when we post the slides, I'll give you the credit line, it's a Creative Commons image from Wikipedia. This is the situation today: three clusters, one in London, one in San Jose, and one in Tokyo. But that, frankly, would not be terribly interesting to us were it not for the fact that we can now connect them. There's a fully meshed private network, and this is where we're able to leverage IBM's own private network, which is something SoftLayer brought to the party. I don't know if you want to say anything about that?

I think there are a couple of things here. They look like one cloud from a Keystone standpoint: we enabled single sign-on for Cloudsoft, so the same credentials that work in San Jose will also work in Tokyo and London. That's key, a single consistent experience. The second thing is obviously the lines that connect them together. That's the global private network that all SoftLayer data centers are connected by, so you can send traffic, replication traffic, whatever it is you want, from one cloud to the next for free.

Yep, and not just free: it's reliable as well. This was one of those things: if you're going to choose a partner, choose one that has a global footprint but can also provide, from a security point of view, a very robust, reliable way of connecting those individual clusters, so that you can then choose when, or if, you want to go out onto the internet. Of course you do if you're offering a client-facing application, but it also gives you the ability to do things like wide-area replication. We've worked a lot with Basho, and Riak is a good example of that. But of course, in order to do that, I have to complete this incredibly expensive, only-done-on-a-Sunday graphic, which shows us running the application management platform in each of those locations.
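From a client's point of view, the single sign-on setup described here can look like one `clouds.yaml` entry with several regions behind a shared Keystone. The auth URL, credentials, and region names below are placeholders, not the real deployment's values:

```yaml
# Illustrative clouds.yaml: one set of credentials, three regions.
# All values are placeholders for the pattern being described.
clouds:
  cloudsoft-private:
    auth:
      auth_url: https://keystone.example.com:5000/v3
      username: demo
      password: "..."
      project_name: cloudsoft
      user_domain_name: Default
      project_domain_name: Default
    regions:
    - san-jose
    - london
    - tokyo
```

The point being that tooling authenticates once and then just selects a region, rather than juggling three separate clouds with three sets of credentials.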
Again, just as at the network and OpenStack tiers, we're able to utilize all of those locations from any one of those points. We can start in London and deploy to London, San Jose, and Tokyo, and vice versa, so it gives you complete flexibility. Of course, we're not limited to those three locations; there are 50 or 60 data centers around the world where we could drop one of these clusters in. And actually, as I look at that map, it's somewhat Northern Hemisphere biased; at least with Singapore we just about crept into the Southern Hemisphere. So we'll probably at some point look at Australia. Well, possibly; I'd like to go there. The only problem is, if you look at the submarine cables, I'm not sure it's always the most efficient place to land and expand. This is one of those interesting things: when you get into the guts of the internet and global networking, it really is submarine cables that drive where you want to put things. Which is why Singapore, by the way, is a great point of presence.

One thing I did want to touch on: what we actually did when we migrated clouds from the old infrastructure to the new, plus migrated them from data center to data center, was build parallel clouds. We built these new clouds in the same data center they were already in, and then we provided a window in which customers could migrate their workloads. Going back a couple of slides to that shared responsibility model: we build the clouds and provide the API endpoints, credentials, and whatnot, but we depend on the customer to move the workloads. Part of it is that customers know their workloads better than we do; the other part is that we don't touch customer data. There's a lot of compliance and security concern there, and again, that really comes back to what we want to do with the private cloud.
We do want to help you get your applications running, get you going, migrate, and ride the technology curve.

I think the new infrastructure is faster and, on top of that, double the capacity. The one mistake we made, and this was our choice, not yours, was that we went for the entry-level clusters: one-gig top-of-rack. Basically everything's now 10 gig, with a lot more horsepower. And without getting into the guts of the Red Hat configuration, it's even richer than that, isn't it? With the separate controllers.

Exactly right. And the thing we're going for is really to let customers ride the innovation curve and the cost curve, but also not make them reinvent themselves every 12 months. We're there to help you migrate, because letting Cloudsoft run their business and provide value to their customers is the most important thing, and we're there to stay out of the way. The way we see it, if you don't notice IBM and the Blue Box team, we're doing our job.

And by the way, if somebody's worried about how much control they have over this: you still have exactly the same access to the OpenStack environment. You can set things up yourself, and in our case, oftentimes that's automated, but the idea is that if there's ever an issue, there's always somebody you can call, night or day, 24x7. I used to be almost terrified of calling for help, because if I did, I'd get three people all pinging me saying, how can I help? I mean, it was phenomenal. It is phenomenal. That's quite good, actually; it does train you to think, if I do that, I'm going to create a lot of excitement. But phenomenal support, phenomenal response. And in terms of migration, yes.
I mean, helping us migrate not just the applications, which, as you rightly pointed out, we have a good handle on, but also making sure that the environment was replicated, which is also important in this context. And everything we're doing here, of course, we're learning as we go, and that then benefits our customers. So when we work with customers who are Red Hat customers, they're delighted that we have a Red Hat environment, because they can look at what we're doing with OpenShift, as well as what we're doing with Cloud Foundry and Bluemix and so on. Yeah, and I think this also shows we're fairly agnostic when it comes to the solution, right? What we provide is the experience and the uptime; whether you decide to run all community, all Red Hat, or a combination of both, that's really the customer's choice. Exactly, thank you. So in summary, "better together" is a lame political slogan, which we could probably do with in the UK right now. But we do get a very, very consistent experience wherever we go with these guys. And it's not just Bluemix Private Cloud; we're also working with them in the context of the Bluemix Container Service. That's probably a talk for another day. But again, it's very open: you get the APIs you expect, not some sort of homegrown thing. And you can do cool stuff, like run blockchain networks and, as I've mentioned before, multi-site database replication, et cetera. And so, slideware: that's the blockchain stuff, and these are a couple of the announcements. But do we have time to quickly demo? I won't try and do everything. Yeah, we should. So you're going to do this live? I'm going to try and do a demo. All right, there we go.
Now, I do know that I have to turn PowerPoint off; otherwise all you see is that slide, which I'm sure Red Hat and IBM are delighted by, but it's not actually showing you a demo if I stay in PowerPoint mode. So I know I killed everything. All right, there we go. It's there, okay, excellent. There was no pause there, no panic, no look on my face that said, heck. So in the interest of time, I'm not going to try and deploy a globally spanning Hyperledger Fabric, or indeed a Kubernetes cluster, but these are just examples of some of the things one can do. If I drill down on one of these, this is showing us running a fabric that spans San Jose, Tokyo, and London. I'm obviously looking at the world through my eyes, which is looking at the software that's running on those clusters, but the point is that's a single blueprint spanning multiple locations. And we can also do that across multiple clouds. So if I close that one up: same blueprint, we just gave it different locations. Instead of saying deploy in Bluemix Private Cloud across the three locations that we manage, we said, no, let's run it across Amazon, Google, and Azure. Sorry, IBM. And are you making native OpenStack calls when you're deploying? No. I should have mentioned this earlier: under the covers we also use Apache jclouds, which gives us the lingua franca for things like compute and storage. The networking is still a little challenging, but we've put some thought into providing application network security that maps very much onto things like security groups and so on, so it works beautifully on AWS and on OpenStack.
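The portable-networking idea Duncan describes, groups of components plus the ports they may talk over, compiled down to provider security-group rules, can be sketched in plain Python. This is an illustrative model only; the function, group names, and rule format are made up for this sketch and are not the actual Apache jclouds or Cloudsoft API:

```python
# Illustrative sketch: compile an application-level "Venn diagram" of
# component groups and allowed flows into security-group-style rules.
# Names and the rule dictionary format are hypothetical.

def compile_rules(groups, flows):
    """groups: {group_name: [members]}; flows: [(src_group, dst_group, port)].
    Returns one ingress rule per flow, attached to the destination group."""
    rules = []
    for src, dst, port in flows:
        if src not in groups or dst not in groups:
            raise ValueError(f"unknown group in flow: {src} -> {dst}")
        rules.append({
            "group": dst,            # rule attaches to the destination group
            "direction": "ingress",
            "port": port,
            "remote_group": src,     # only members of src may connect
        })
    return rules

groups = {"web": ["web-1", "web-2"], "db": ["db-1"]}
flows = [("web", "db", 5432)]        # web tier may reach the database port
rules = compile_rules(groups, flows)
```

The point of the abstraction is that a rule like "only the web group may reach the db group on 5432" can be rendered as an AWS security-group rule or an OpenStack Neutron security-group rule without the blueprint author ever touching SDN details.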
So the same idea again: it's bubbling up these concepts. From an application developer's point of view, I'm creating a blueprint that really is Venn diagrams. All I care about is that these things are in one group, those things are in another group, this can talk to that over this port, and vice versa, rather than getting into really deep territory with SDN or overlays or anything like that. Then finally, there's a Kubernetes cluster which we deployed earlier; this is the cooking show, after all. Again, it's another blueprint, and that blueprint stands up a Kubernetes cluster for you. One can obviously do this automatically, but in terms of the idea of effectors and so on: here's the worker cluster, I can resize by delta, so I'd better make sure I know what the size is. Okay, six. Well, yes, I did ask for six worker nodes. Each one of those, by the way, maps onto an OpenStack VM running on Bluemix. But I can say, okay, delta, I'm just going to go up by one. Is that going to do that, Robert, or is it going to take it down to one? It's going to go up by one. You see, never work with children, animals, or CEOs. So there we go, it's just started another node, like that. And if you're really interested in what's going on, you can poke into it and see that it's starting, it's provisioning, and so on, all of which is useful when you're debugging; not what you want to do when you're running fully automated, but just one example of the kind of thing one can do. Yeah, I really like this demo because it doesn't really need to show OpenStack. OpenStack is in the background; we're doing our stuff, and if we're not doing what the SLA says, it'll be clear, because this demo won't work. So we're sort of the invisible hand in the back. All right, I think we've got a couple more minutes. If there are any questions, head to the mic; I'm more than happy to answer those.
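The "resize by delta" effector from the demo, where a delta of +1 grows the six-node worker cluster to seven rather than shrinking it to one, can be modeled as a small piece of logic. This is a hypothetical sketch of the semantics (relative change, floor at a minimum size), not Cloudsoft's actual implementation:

```python
def resize_by_delta(current_size, delta, min_size=1):
    """Return the new worker count after applying a relative delta.
    A positive delta adds nodes; a negative delta removes them,
    but the cluster never shrinks below min_size."""
    if current_size < min_size:
        raise ValueError("current size is already below the minimum")
    return max(min_size, current_size + delta)

# The moment from the demo: six workers, delta of +1, seven workers.
assert resize_by_delta(6, 1) == 7
```

In the demo each returned increment corresponds to provisioning or releasing an OpenStack VM; the effector only decides the target count, while the provisioning itself is asynchronous, which is why the new node shows up in a "starting, provisioning" state first.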
You mentioned having single sign-on between the OpenStack instances in the three locations. Are you using federated Keystone for that, so you can log in to one? Yeah, we're using Keystone-to-Keystone federation. But some customers will instead have an identity provider on the back end, like a central directory. So in the case of Cloudsoft, they're using Keystone-to-Keystone federation, so the three clouds trust each other; but we do have another model where you can use an identity provider. Cool, thank you. Again, the idea is that it's pluggable, which I think is a theme of this conference, isn't it? The idea of composing things? Composable, I think. Composable? Composable, yeah, not compostable. Ouch. Yeah. Anything else? No? One more, here we go. So which services span across data centers, apart from Keystone? What about Horizon? What about Glance? Do you have a way of making sure that the images sync up across geos? And from a portal perspective, is Horizon a single dashboard for multiple regions? Right, all good questions. Today, Keystone is the only thing we federate across. So in the case of Horizon, for Duncan's team to go from San Jose to Tokyo, they'll have to put in the appropriate DNS name or IP address. But what I would say, and this actually suited our purposes, is that we can treat each one of these as an individual endpoint, because logically that's what we want: if I'm deploying an application, I want to control where particular components are running. But the really cool thing, which is kind of magic even to me, is that there's a shared private network that shows up in all of those clusters, and as long as you're using that network, everything is fully connected.
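The single sign-on flow being described, authenticate once against a home Keystone, then exchange that token at each remote cloud that trusts it instead of re-entering credentials, can be illustrated with a toy model. The region names, trust table, and token dictionaries below are invented for illustration; real Keystone-to-Keystone federation uses SAML assertions between Keystone instances:

```python
# Toy model of Keystone-to-Keystone federation. One password login at the
# "home" cloud; remote clouds accept a token exchange if a trust exists.

TRUSTS = {"san-jose": {"tokyo", "london"}}   # home cloud -> clouds trusting it

def login(home, username, password):
    # Stand-in for real password authentication at the home Keystone.
    if not (username and password):
        raise PermissionError("bad credentials")
    return {"issued_by": home, "user": username}

def exchange(token, remote):
    # A remote Keystone honors the home token only if a trust exists.
    if remote not in TRUSTS.get(token["issued_by"], set()):
        raise PermissionError(f"{remote} does not trust {token['issued_by']}")
    return {"issued_by": remote, "user": token["user"]}

home_token = login("san-jose", "duncan", "s3cret")
tokyo_token = exchange(home_token, "tokyo")   # no second password prompt
```

This captures the distinction made in the answer: federation gives one login across clouds that trust each other, while the identity-provider model centralizes who you are in an external directory that every cloud consults.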
And really, what we wanted was connectivity, rather than trying to synchronize images and so on, because we're not so much concerned about images as we are about building software. Typically we'll take stock images and work with those, and then we lay down the software: not just install it, but make sure it boots up correctly and that all the interconnections and dependencies are taken care of. So that really suited our model. In a way, it's the least invasive approach, because you can ignore it, and if you want to use it, it's just a shared private network. Yeah, and I remember the conversation we had around single sign-on. It was really about a new person joining Cloudsoft: instead of having to put credentials in three times, you just put them in once, right? Whereas the other customers truly want to know who you are. So at least for us, the use case has been different between Keystone-to-Keystone federation and an identity provider. That said, we've been working together for 18 months, and we've had items on the roadmap: things Cloudsoft committed to us, and things we put on the roadmap. As OpenStack improves support across all these different projects, what's important to IBM is that they're stable, because ultimately we run a service. Whatever we put into production needs to work, so quality of code and stability are key. But as that innovation curve continues in OpenStack, we'll gladly adopt it and make it available to customers. Good question. All right, we're at 4:20. On behalf of Duncan and IBM, I'd like to thank you for being with us. We'll hang out, but thanks for coming. Thank you, thank you very much.