That's right up here. So welcome. Thanks for coming. I'm Jason McGee. Asmere and I are going to take you through an interesting conversation over the next 40 minutes on cloud, on how to build a modern, world-class, full-function cloud stack using open technologies, and what we're doing at IBM to deliver that stack to the market. I'm going to kick us off, set a little context, and talk about how I think a cloud stack is put together, how open source technologies play a fundamental role in building that stack, and give you a little bit of an introduction to how we're delivering that at IBM. Then I'll turn it over to Asmere, and he's going to come up and go into the details on some of our capabilities and some use cases from clients and interactions that we've had about how they're adopting cloud. So let's get started. One of the things that we believe in very deeply at IBM is that open technology is a differentiator. Open technology is the right model for how we should build and expose cloud. And I think cloud has had an interesting history, in that a lot of early cloud activities, especially early public cloud activities, were not based on community. They were not based on open technologies. They were not based on APIs that gave users the freedom to select vendors and move their workloads as appropriate. And so as we started a number of years ago building our cloud, we knew from the beginning that open technology was going to play an important role. Now, many people, when they think about cloud, think about different layers of cloud. You think about infrastructure. We're here at the OpenStack Summit. We're talking a lot about infrastructure and how OpenStack provides APIs for things like VMs and bare metal and networking and storage. But cloud today really is a much deeper stack than just infrastructure. So I thought I'd start with a view of the layers that make up a modern cloud stack.
Independent of vendor, independent of technology, what are the capabilities we should all expect in a modern stack? So I think it starts, of course, with infrastructure. And a modern stack has, at its base, the core infrastructure capabilities around virtualized servers, around bare metal servers in many cases, around storage, both block and object, backup and archive, files, various forms of storage technologies that you need as an application developer for building your apps. Networking, which both means the core networking provided by the cloud for doing packet routing, for doing isolation, for doing private networking and overlays, and interconnect back to your enterprise network, but also network services. How do you do load balancing, firewalls, and packet inspection and all the network functions that are needed to build and run your application at scale in a cloud, especially in a multi-tenant context where you have other users in that environment? On top of that physical infrastructure layer, there is almost always now a platform layer. And this is a recognition that core infrastructure is powerful, but it's a pretty low-level primitive as an application developer. If you're building an app, having a VM is nice. It gives you the core runtime environment to host your code, but often you would like something a little higher. And I think in the platform layer, I think of it in two parts. There's a set of higher-level application runtimes around containers, around structured containers with things like Cloud Foundry, and you see this emergence of new models like serverless and event-oriented technologies that allow developers a way to build and run their code in a way that lets them focus just on the application artifacts that they want to deploy. On top of those core runtime models, there's a set of what I think of as foundational services, kind of common horizontal capabilities that almost everybody needs when they build applications. 
You have management functions, like logging and monitoring and alerting and operational tools. You have DevOps functions to help you do delivery and deployment. You have automation and pattern tools like Heat that let you do the automated stand-up and configuration of complex topologies for applications. You have security functions, and you have integration functions that let you connect back to existing applications or existing resources. So these two layers make up a set of platform functions. And historically in cloud, there was infrastructure as a service and there was platform as a service, and they were kind of two separate domains. I think if you look at the evolution of cloud over the last, let's say, 18 months, you see those layers coming together, and you see in most cloud environments a spectrum of capabilities available, from infrastructure through platform. Now, on top of that set of platform capabilities, there's also often a set of domain-specific services. These are higher-level services that application developers want to use that are specific to a particular technology domain. So you might have services around mobile for doing things like push. You might have services around data and analytics for doing, of course, data storage and search and query, but also analytics and data processing. You have, in many cases, cognitive capabilities, AI capabilities, to help you put intelligence into your applications. You have functions around IoT. You have functions around video. So this, I think, at its heart is the stack of capabilities you will see in most modern, full-function cloud environments. Now, there's one more dimension to this picture, and that's where all of this runs. One of the things that we believe at IBM is that we're not all going to move to one place. The whole world is not going to move just to public cloud.
Public Cloud, specifically shared multi-tenant public Cloud environments, will play a huge role in where many of our applications run. And I suspect that all of you at some level are using public Cloud. But there are other models that are important as well. If you really look at an enterprise and you look at the business requirements that they have, you look at regulatory requirements, you look at laws within different countries, there's a need for other models. There's a need for what we call dedicated, meaning I want a private instance in that shared data center that's isolated from everyone else, physically isolated compute, physically isolated networking, but still hosted by a Cloud provider like IBM. And there's also a role for local, meaning I want to run Cloud in my data center, behind my firewall in my shop. But it's still Cloud. What does Cloud mean? Cloud is a service, an API, and an experience that you're delivering. And so Cloud in your data center is still a service that needs to get delivered. But for whatever reason you've decided, maybe it needs to be close to data that you're using. And so there's a latency requirement that says I need to stay in my data center. And so one of our goals is to deliver this full stack across all three delivery models with the same API and the same experience. So as a developer, you can interact with your Cloud in the same way. You can interact with the same APIs and you have freedom of movement between these different delivery models. So this is my view of what a full Cloud stack looks like. Now, as I said in the beginning, for many years now, IBM has built its technology direction on top of an open community and open source view. And whether it was in the past around Linux and Apache and Java and WebSphere and other technologies that were built on open technologies, in Cloud, the same path is playing out as well. And there's a number of centers of gravity. 
We're at one big one, which is OpenStack, providing the community and the technology around that core infrastructure layer. But there are also activities in the CNCF and the OCI in the container space. There are activities around Spark, around Cloud Foundry. There are a number of important communities that provide pieces of that stack that come together to build cloud. Now, let's take that generic view of what a modern stack looks like and start to overlay the specifics of how IBM has built its stack. At a very high level, you can see the IBM stack has three pieces today, and we'll go into details. At the infrastructure layer, we have SoftLayer providing public and dedicated environments, and we have Blue Box providing dedicated OpenStack and local. So we have a set of offerings around the infrastructure layer. And then we have a set of platform and domain services around IBM Bluemix. Conceptually, these are all coming together into one thing, one user experience across all three, but these are the components that make up that layer. The more interesting view to me is this one, which says: if you remapped that layer chart that I drew and looked at the technologies that we're using to build it, it would look like this. At the infrastructure layer, we have OpenStack, with Nova and Ironic, with Neutron and OVN on the networking side, with Cinder and Swift on the storage side. At the platform layer, we have really three runtime models that we're supporting. In the container space, we have container services built on top of Docker, moving towards things like Kubernetes and Swarm as interfaces into containers. We have a Cloud Foundry environment that gives you a really rapid way to go from code to running applications in the cloud. And then we have this new thing that we introduced back in February called OpenWhisk.
OpenWhisk is a serverless, event-oriented programming model, something that we developed but then have contributed and open sourced. And so we have, as part of the OpenWhisk project, a community building around next-generation serverless application models. So within our platform, we support all three. At that foundation layer, there's a whole bunch of technologies that make up that set of foundational services I mentioned. Many of them are built on open source projects. An obvious example: the whole logging, monitoring, metrics, and alerting space we've built on top of the Elastic Stack. So Logstash, Elasticsearch, and Kibana provide the core technology that allows us to expose to you a set of logging and monitoring services. Runtimes play an important role as well. Within that platform, there's a set of application runtimes that we expose for developers. Of course, on a full stack platform, you can bring whatever runtime you want and run any language and any middleware runtime you want. But we provide a few out of the box. No surprise, we are a strong supporter of Java. We have a strong heritage in Java at IBM. We also have a lot of work going on with Node, and many of our applications are built with Node.js. We have strong participation in that community and some strong offerings in that space. And then the emerging one is Swift. I don't know how many of you have taken the opportunity to play with Swift on the server. But Apple, with help from us, has open sourced the Swift language. And we are providing support for Swift in Bluemix, both as a language you can run in Cloud Foundry and inside of containers, so you can run web apps on Swift and have package management around Swift. So we think Swift-based development is going to be one of the next really big things on the server side. And there are a lot of people in the world who are either already skilled in Swift or quickly becoming so because of its role in the iOS ecosystem.
And that's now translating to the server. Then, if you go up into that domain layer, there are a number of technologies there. Ones I would call out are in the data space. We have an offering called IBM Cloudant, which is based on Apache CouchDB. We have huge participation in the Spark community. And we have places like the Open API Initiative and Swagger, where we're working with the community to define how you describe your APIs themselves and expose those APIs to users. So this is the stack we're building and how it's put together. The best place to go to get access to this is IBM Bluemix. You can go to bluemix.net and have access to this kind of full modern stack. We've brought together infrastructure and platform into one experience. You can go here and you can deploy OpenStack object storage via Swift. You can deploy containers. You can deploy all the assets of your application that you need to run on the cloud. And Bluemix itself is available all around the world. We have 28 regions running Bluemix today, both public and dedicated, so you can choose to run your application where you need it. And then, of course, we have a set of local capabilities that let you bring that same experience into your data center and have that common API that crosses those boundaries. If you looked at Bluemix, there are a number of elements that make up that platform. Diversity in runtimes, which I mentioned: OpenStack VMs, bare metal, containers, Cloud Foundry, serverless. A big focus on data. I mean, if you haven't spent some time on Bluemix, go play with things like Watson and the really powerful cognitive APIs that are available to a developer through a simple REST API. You can do advanced image processing, you can do natural language processing, you can do sentiment and tone analysis on text. You can get access to some pretty incredible capabilities through simple REST APIs exposed through a cloud platform.
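To make the "simple REST API" point concrete: Cloudant speaks the Apache CouchDB HTTP API, so writing a document is just a JSON body in a PUT request. Here's a minimal sketch in Python; the account URL, database name, and document contents are placeholders, not real endpoints:

```python
import json

def put_doc_request(base_url, db, doc_id, doc):
    """Return the (method, url, body) triple for a CouchDB-style document write."""
    url = "%s/%s/%s" % (base_url.rstrip("/"), db, doc_id)
    return ("PUT", url, json.dumps(doc))

# Placeholder account and document, for illustration only.
method, url, body = put_doc_request(
    "https://ACCOUNT.cloudant.com", "orders", "order-001",
    {"item": "widget", "qty": 2})
# Send with urllib.request or the `requests` library, using basic auth.
```

The same shape of request works against a self-hosted CouchDB, which is exactly the portability argument being made here.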
The Bluemix platform today has over 120 services that are available for developers to use to build out their applications, from across all those domains in that layering that I showed you. Rich data capabilities, a whole bunch of cognitive capabilities, core security and application functions, all built on open technologies. And I think that's a really powerful statement. Because it means, as a developer, you can build those applications and you have the freedom to move them. You can take an application running on IBM Bluemix and run it on an OpenStack cloud of your own. You can move your container-based applications to some other container hosting environment. You aren't locked in by API; you're only "locked in" in the sense of having access to this rich set of capabilities. So you can leverage those capabilities but have the freedom to move as you need to. So that is the context. I'm going to turn it over to Asmere, and he's going to dive a little bit deeper on some of these concepts and how we're delivering them to the market. Thank you, Jason. Thanks a lot. All right. Thanks, everyone. So I'm going to go through here and use this as our reference point. We've got our cake. We've got our layers. I'll cover a little bit about the layers, but more importantly, I'm going to cover the slices: the public slice, the dedicated slice, and the local slice. So just to dig a bit deeper on the layers, this is sort of what you see. At the next drill-down, we're leveraging SoftLayer significantly, both for public and for dedicated. But we also allow you to experience that same environment inside your own data center, and we'll talk a little more about that. As you go further up, we're continuing that open-by-design approach. We're using OpenStack and Linux as the underlying technologies. And then, of course, you've got the applications. And really, what we see is that this is where the rubber hits the road.
It's great for us to talk about infrastructure and hardware and the technologies underneath; they're all enablers. What really drives the business, what really drives innovation, are the upper layers. And so, as Jason said, check out Bluemix. That's really where you get to see all that power harnessed together in one environment. In terms of the slices, we've got a slice that's public. We'll talk a little bit more about Bluemix services, what you can see there, and how OpenStack is helping power that. And then Blue Box. Blue Box is actually new to the IBM family; we were acquired about 10 months ago, after Vancouver. We've been spending a lot of that time enabling our technology and merging it into the IBM technology to provide more goodness, predominantly around the private cloud space. I'll talk a little bit more about what dedicated provides and also how we're enabling that same functionality in local. So let's start with public. This is the same Bluemix UI that Jason showed earlier. You can see there are a bunch of services here that are actually enabled by OpenStack. Our virtual server, block storage, object storage, and security groups are all underpinned by OpenStack. Object is GA today; the other three are in beta. But they're all there. And you can really see both how all these pieces come together and how OpenStack is powering it. You can also consume these services using the OpenStack API and the CLI. So we don't limit you just to Bluemix. But again, for a developer, that's really where we feel the power materializes. If you dig down a little bit more: Bluemix is all about open APIs. That's something that we hold very close in terms of our design language. Elastic resources, obviously, are about building, as you heard yesterday in the keynote, these mode-two applications that sort of auto-grow and auto-shrink as you go along. And this is the framework that allows you to go do that.
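Because those virtual server and storage services sit on standard OpenStack APIs, they can be scripted the same way as any other OpenStack cloud. A hedged sketch using the openstacksdk library; the cloud profile name and the resource UUIDs below are assumptions, not real values:

```python
def server_spec(name, image_id, flavor_id, network_id):
    """Build the request body Nova expects when creating a server."""
    return {
        "name": name,
        "image_id": image_id,
        "flavor_id": flavor_id,
        "networks": [{"uuid": network_id}],
    }

def boot_server(spec, cloud="mycloud"):
    # Needs `pip install openstacksdk` plus a clouds.yaml entry; the
    # profile name "mycloud" is a placeholder.
    import openstack
    conn = openstack.connect(cloud=cloud)
    server = conn.compute.create_server(**spec)
    return conn.compute.wait_for_server(server)

spec = server_spec("demo-vm", "IMAGE-UUID", "FLAVOR-UUID", "NETWORK-UUID")
# boot_server(spec)  # uncomment against a real cloud with credentials
```

The same connection object exposes block storage and object storage proxies, so one credential set drives all the OpenStack-backed services.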
Pay as you go. In cloud, you only commit to what you need, and this business model supports that. And then we focus on SLA. You can build the best application out there and the best UI, and if it's not running, who cares? So there's a lot of work required for us to maintain close to a 100% SLA. But as Jason mentioned, public isn't for everyone. Or I should say, you will need more than just public. And there are a bunch of reasons why. You want to have resources just for your own team; that could be cultural, or it could be really technology driven. You want to focus on the applications and have somebody else manage the infrastructure. You want to ensure that when you talk to your customers, you're providing that SLA, and you're in control of your environment. You want choice: you want to be able to run it here for certain instances and there for other instances. And you want to be global. We hear this all the time. And when you look at that, it's very hard to have one monolithic solution to it. And so private clouds have a play there. This is what Blue Box brought to the IBM portfolio. We run managed OpenStack private clouds. We've been doing it for a long, long time; we pivoted to OpenStack about three years ago. So we've had a lot of experience both in running clouds for other people and in OpenStack itself. And we curate a lot of that to ensure that we, again, maintain that SLA for customers: we ensure that there's performance and availability. Being part of IBM has really grown that aspect of our business. Before IBM, we only had four data centers. Now we're in 14 countries, 16 different data centers, and we've done that in the space of about four months. So that strong platform that IBM has allowed us to grow, in fact, exponentially in a very short period of time. It's cost-effective: we've continued to bring pay-as-you-go to OpenStack adoption, and that really helps.
A lot of customers are still in that proof-of-concept mode. They don't know what they want. They don't know how to commit. And so a business model that allows you to commit to just what you need, whether it's a 30-day term or a 12-month term, really, really helps in terms of moving the project forward. It's elastic: you can shrink and grow your cloud as needed. It's always private and secure: we literally will provision bare-metal servers, inside your data center or in SoftLayer, that only you and your groups will run. And it's 100% open. We don't fork OpenStack. You heard Shamile and Tyler talk about that earlier. IBM is very much focused on making sure that there's true openness there and, more importantly, compatibility between the different approaches that we have. So for Blue Box, we've got two offerings. We've got dedicated, which runs in SoftLayer, and we have local, which runs inside your data center. Everything else is the same. We have the same service. It's the same OpenStack projects; we have 12 of those. We deploy it the same way. We manage it the same way. We do upgrades the same way. All of this is the same, and so the only difference that you have to choose is really geography. We work very hard to maintain that. If you look at the next level down, we have these cloud building blocks. If you want to build an elastic cloud, you've got to have building blocks that can scale with you. And so we've adopted, just like OpenStack, a scale-out architecture. Our storage, our compute, and our networking are all scale-out. We actually curate different types of nodes, whether for compute, block storage, or object storage, and then you literally use them like Lego building blocks and build out your clouds. And these are the same building blocks whether you're running in dedicated or local. Since I work on dedicated predominantly, it's great being able to leverage SoftLayer.
Because SoftLayer can have the same hardware in 14 different countries, all the time. You'll hear from Cloudsoft after lunch: they've built three clouds in three different SoftLayer data centers with the exact same footprint, and they're able to do very impressive things on top of that because the substrate is identical. Compatibility, 100%, across three different geographies. So those are the kinds of things that we want to enable with our infrastructure, right? You focus on the top-layer items; we'll take care of the rest. But it starts with these building blocks that are well thought out and a really good match for your applications. And here's how we do this: the team at Blue Box is a global team. We support customers 24x7, but we have to have remote hands; we can't be in 14 different countries at the same time. The way we do this is via our relay technology. We actually have something called BoxPanel, and that's what we use to interact with our customers' clouds and do the life-cycle management of those clouds. That's how we do cloud admin. Those of you who saw Boris yesterday in the keynote, right? Now, we're not vodka-drinking bears, right? We're real ninjas working on this, but we can only multiply so far, and this relay technology allows us to multiply. And it's important for us to do this in a secure manner, right? I get asked this a lot: it's great and all that you can manage my cloud, but what happens to my data? Well, we encrypt everything. We encrypt all the data in transit. All the connections are tunneled connections. Sensitive data is encrypted at rest. And for local, when we run in your data center, your data doesn't come to us; we leave it behind. That is very, very important if you're going to build that trust with a customer, because cloud is great, building applications is great, but it only takes one security incident and all of that is lost, right? So we do take this very seriously.
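The "encrypt all data in transit" posture boils down to TLS with certificate verification on every connection. As a generic illustration (not Blue Box's actual relay code), a strict client-side TLS configuration in Python's standard library looks like this:

```python
import ssl

def strict_client_context():
    """A client TLS context that verifies the server certificate and
    refuses pre-TLS-1.2 protocol versions."""
    ctx = ssl.create_default_context()        # loads the system CA bundle
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.check_hostname = True                 # reject mismatched hostnames
    ctx.verify_mode = ssl.CERT_REQUIRED       # reject unverifiable certs
    return ctx

ctx = strict_client_context()
# e.g. wrap a socket: ctx.wrap_socket(sock, server_hostname="relay.example.com")
```

The hostname "relay.example.com" in the usage comment is a placeholder; the point is simply that every hop gets an authenticated, encrypted channel.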
In addition to that, we wanna maintain SLAs. SLAs are very, very important for us. Everything that we do, whether it's building out a new data center, adding a new feature, or building a new node type, revolves around: can we maintain that 99.95% SLA? It's very, very important. So there's a lot of work that you don't see, but it's extremely important for a business and extremely important for the applications that you build. So what does that buy us? Well, it buys us true hybrid cloud, right? And I'm sure a lot of people think, well, hybrid cloud needs to be public and private, right? Not necessarily. It could be any combination that you want, depending on your use case. So I'll give you an example. Today, our customers generally run POCs on dedicated, whether they're gonna go into production on dedicated or local. Why? It's the exact same experience. Every single thing that you can do on dedicated you can do on local, and vice versa. Same compute node types, same storage technology, same IP address scheme, same controllers. And so it gives you all sorts of portability. Think of it: if you've got a substrate that's identical, everything that you do on top of it will work. Guaranteed. So that's what we've been focused on. We really work to ensure image portability and API compatibility, and all the projects are the same. We spend a lot of time making sure that if we put one thing in one offering, the next offering either has it immediately or has a very aggressive timeline to adopt it. We ensure that that stays very, very quick. And then we've got things coming in upstream, like single sign-on via Keystone federation. Things like that allow you to have the same credentials: if you've got multiple clouds, the same credentials get you into all of them, and it's all trusted.
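That single sign-on rides on the standard Keystone v3 token API: a client POSTs a credential body to /v3/auth/tokens, and with Keystone-to-Keystone federation the resulting trust can span clouds. Below is a sketch of the standard request body; the username, password, and project name are placeholders:

```python
import json

def v3_password_auth(username, password, user_domain="Default",
                     project=None, project_domain="Default"):
    """Build the standard Keystone v3 POST /auth/tokens request body."""
    body = {"auth": {"identity": {
        "methods": ["password"],
        "password": {"user": {
            "name": username,
            "domain": {"name": user_domain},
            "password": password}}}}}
    if project:  # scope the token to a project when one is given
        body["auth"]["scope"] = {"project": {
            "name": project, "domain": {"name": project_domain}}}
    return json.dumps(body)

req = json.loads(v3_password_auth("demo", "s3cret", project="dev"))
```

Because every cloud in the fleet exposes this same API, a credential helper written once works against dedicated and local alike.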
So these are the things that allow you to go do very interesting applications and use cases, and it's hidden, it's in the substrate, it just happens, so you can focus on what's important for your business. We talked about geography. These are the locations in SoftLayer that are dedicated-ready. As I said, we were able to roll out in these different geographies in about four months. When we got the first one done, the second one took us half the time. By the time we got to the fifth one, we were doing one a week. So it was really great for us to be able to do that. We talked about the bare-metal servers: we can have the same bare metal anywhere in these locations. But more importantly, if you look, there are all these lines that connect the data centers. That's the network that SoftLayer has, and it probably doesn't get as much PR as it deserves, but it's phenomenal. It's a high-speed, 10-gig backbone. And what you can do is connect: you've got two clouds in two different regions, and you can send your traffic across this private network. One, it's high speed; two, it's secure; and three, it's free. So you avoid the public internet. Again, that's very powerful. If you're connecting local and dedicated, you can run on Direct Link: you hop onto a point of presence in your city, get onto the backbone, and voila, you're on the high-speed network anywhere you wanna go in the world. So these are the things that allow us to really drive home that whole anywhere-in-the-world view that we have for cloud. But we also get other benefits, and we see another use case for hybrid there. You've got your single-tenant cloud that Blue Box enables. And then we talked about Direct Link, the global private network, and bringing your own IP addresses, which you'd think just happens.
It's really hard to have IP addresses that don't conflict, or to allow IP addresses that conflict to work together. So there's a lot of work there. But we've also seen things like: well, I just need a little bit more storage. For private clouds, as you can imagine, you're buying for what you need today. And if you don't know what you need today, or you're on the other end of the spectrum, it's hard for you to get that elasticity. Well, if you're connected on the same network as all these other great SoftLayer services, you can tap into them. And that's what we've done. You can tap into all these different types of storage and backup services and whatnot, and really just build the service that you need and have it distributed out, right? We're the ones who carry the burden in terms of the infrastructure and ensuring that the SLA is met, so you can maintain what you need in your part of the stack. All right, slight change of gears. Let's talk a bit more about Bluemix. We've got Bluemix public. We've got Bluemix dedicated. Both of those are on SoftLayer. What about Bluemix in your own data center? Well, that's one of the first things the Blue Box and Bluemix teams worked on when we joined the IBM fold, and we're in beta today. We announced Bluemix Local at InterConnect in February. You get the same experience that you have in public, but behind the firewall. We've been able to take Blue Box, which was built around an open core, and team it with Bluemix, which is also built around an open core. And really, we want to enable that same full-stack experience across the different geographies. So this is a huge milestone for the team. It's been great, and we're looking forward to getting this out to GA and having that goodness wherever people want it to be. Another thing we've been doing, and we've seen this too, is for people in that bi-modal IT phase: I don't know where I'm gonna go.
I've got some sunk investments. I need to be able to either consolidate that in one place or have a different strategy for running it somewhere else. There's a group at IBM called the PureApplication team. They've been focused more on the commercial side of that first type of workload, but they're also embracing that second mode of IT, and Blue Box has been an enabler for that. We're actually using Heat and the Heat orchestration templates to enable all these open patterns. And so you can effectively have one platform that allows you to run the different kinds of applications, whether they're mode one or mode two, on the same platform. That's a huge savings for customers that want to go into cloud, want to have it in their data center, but are not sure what the mix is gonna be or how they're gonna transition from one to the next. So we're really looking at how we can help customers do that in a very pragmatic way. Another thing that people don't think about, because it's all about technology, is regulation. There's a bunch of companies where how they do business is determined not by budget or by technology but by the industry that they work in. Healthcare is one. HIPAA is a rule that protects how patient information is shared, and as a company in that industry, you have to prove to the regulators that your workloads are covered by the privacy rule. So we help customers achieve that compliance: we allow customers to run HIPAA-regulated workloads on our clouds. It's a fairly unique offering. We spent a lot of time looking into this space, and the vast majority of what's available for HIPAA is either managed services, or not cloud, or "we'll provide the infrastructure, somebody else takes care of the controls." We enable about 60 different controls at different layers of the cake to allow customers to meet that compliance requirement. So this is a huge thing. We just came out with this this past quarter. Okay.
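For reference, the Heat orchestration templates mentioned above are plain HOT documents. A minimal pattern booting a single server might look like this, expressed as a Python dict rather than YAML for brevity; the image and flavor names are placeholders:

```python
# A minimal HOT pattern: one Nova server. "2015-04-30" is a valid
# heat_template_version; image and flavor names are placeholders.
hot_template = {
    "heat_template_version": "2015-04-30",
    "resources": {
        "web": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "ubuntu-image",   # placeholder image name
                "flavor": "m1.small",      # placeholder flavor name
            },
        },
    },
}
```

A template like this is typically deployed as YAML with `openstack stack create -t template.yaml mystack`, or programmatically through the orchestration API; richer patterns just add more resources and parameters to the same structure.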
And then, we've spent a lot of time talking about technology. I think Jonathan yesterday talked about a customer that did all this automation and went from 44 days to 42 days of deployment, which is very true. Before Blue Box joined IBM, we were a startup getting a lot of customers, trying to get people on, and people would start doing POCs. We'd stand up the cloud in record time, check in two weeks later, and nothing had happened. "Oh, something else came up, I can't right now," or whatever. And so they would pay for a whole month and nothing really happened on the cloud. So we instituted this five-week onboarding exercise where one of our customer success managers will actually talk to a customer, understand what their goals are, and help them out. Some of them don't even know how to get started: what are the credentials I use for Horizon? What's Swift, right? How do I see that? All of these things are covered in that five-week onboarding process, and it's really helped us. From a business standpoint, we've actually seen people start using the cloud much faster and also go much deeper in terms of what they do. They'll use more block storage. They'll spin up more instances. They'll do snapshots, and so on and so forth. And as a business, it's allowed us to grow much faster. We've already talked about the month-to-month commit, right? That's super important for us. While I'd say the vast majority of our customers sign longer-term contracts, the fact that they can actually cancel, with 30 days' notice, and jump off, that's huge for a lot of customers. They don't want to be locked in in this era of cloud. So we're enabling that both from a technology standpoint and from a process standpoint. And then finally, support and upgrades, right? I don't know how many of you run your own OpenStack clouds, but if you've ever done an upgrade, it's hard, right?
The demo today was great, but that was today's demo. It's not the first time they've done it, right? So up until today, it's hard. And we've had to learn that. The benefit for us is that we have hundreds of clouds under management, so we get to do it a hundred times. And so we know how to do the upgrades, we know how to do the support, and those are the things that we build as a team: we make that a process and try to deliver that goodness to customers. All right, so that's all theory, great and all. So where does the rubber hit the road? Well, I'm gonna show you some case studies of actual real customers (unfortunately, I can't put their names up here) and what we've enabled for them, right? Using private cloud as a service, enabling their applications on top of it. And we'll have more of these to share, I'm sure, next time we meet up in Barcelona. So: a large distributor of casual games. As you can imagine, they're mainly DevOps. They really want to focus on the application. They didn't want to manage OpenStack, and they really wanted something that was their own; they didn't want to use the public cloud. So they came to us. They were close to where Blue Box is located, and we delivered that cloud and met their needs. And they got the benefit, right? They got to focus on what was important to them, they got the open APIs that they needed, and they were able to scale out within that data center, and they're looking to scale out to a different data center. Another customer does concerts, so live event management, right? While I love to go see concerts in person, a lot of people actually watch them online, and that's what they do. They have their own SaaS offering. They needed the elasticity, but they needed to ensure that they have a lot of control over that environment.
So we came in and spun up additional clusters so that the cloud is close to where they need to run that event, right? That's private cloud elasticity at work. And then we also have a company that helps technology students get into the job market, right? And, you know, while IBM wants to get these people too, you need to provide choice. They were actually running with a competitor, and it was an SLA play: they could not get their applications up and running. Any time students would come in, they would either not get on or not get the performance that they needed, and billing was all over the map. So we came in and solved that problem. Again, a different aspect of how we can solve a problem, where it was really more about making sure that the service was running. And then they could price it out. We priced on a monthly basis, so they could figure out six months out what their demand looked like and budget for that. That was extremely important for them. And I'm gonna close with Cloudsoft; maybe we'll have time for some questions. So Cloudsoft is gonna come in after lunch; I think Hernan and Duncan are gonna have a conversation. They've been spending a lot of time with us. They currently have three clouds today in three different geographies running on SoftLayer. I'm not gonna steal their thunder, so I'll let them talk about how Cloudsoft is enabling their business using IBM and all the components that we have underneath the cloud. All right, so just to close, I think there are a lot of reasons why there's a preference building in the market for IBM. Obviously we're providing the different flavors, so you can consume what you need from us. And we're committed to making sure that all of those components that you see here, across the different layers and the different cuts of the cake, if you will, are enabled.
So it's a consistent experience no matter whether it's in your data center, in public, or in private. Control and choice, right? You decide what those dials are, not us. Make it predictable, right? It's up when it needs to be up, plain and simple, and there's a lot of hard work that happens there. And then just operational excellence, right? Making sure that we focus on the customer, making sure that they're number one and that they're successful. So I'm gonna be here with Jason if you have any questions, but thank you for spending time with us today, and there are more sessions after lunch. So please come back, and with that, thank you.