for Intel. In fact, I work for Intel IT. So one of the questions that I always get when I talk about what we're doing in the cloud space is people always ask me if we're going to go into business selling cloud services. And I always say, no, I work for IT. So basically, when we talk about cloud services, we're really talking about how we run our own business from an IT perspective. So what I'm going to talk to you about today is extending OpenStack infrastructure as a service with Cloud Foundry Platform as a Service. And so I'll tell you our enterprise story of how we're using OpenStack and Cloud Foundry together. I work as the principal engineer for our cloud, and I'm responsible for the cloud architecture, basically. And I'm a huge, huge proponent of Platform as a Service. In fact, I am such a big fan that I think I have way too much content for this time slot. So we may jump around a little bit and move through the material a little bit quickly, just so you know. So what I'll do is tell you a little bit about Intel IT and what we look like. I'll tell you a little bit about what our cloud direction is from an IT standpoint and how we're going about adopting cloud as a user. And then we'll focus in on Platform as a Service and Cloud Foundry. I'm going to show you some basic Cloud Foundry architecture and then how it works in terms of provisioning Cloud Foundry on OpenStack. I'll also share with you some of our challenges and what the roadmap looks like for us around Platform as a Service. We'll probably just take a really quick look at that, and then we'll summarize things. So let's talk about Intel IT vital statistics. You can see our IT department is over 6,000 employees. That's just in IT, so we're pretty big. And we're supporting almost 100,000 employees worldwide with about 64 data centers. And you can see that that number is coming down.
So we've been doing that same consolidation around data centers. And we've been virtualizing. So you can see the numbers here. And then the other thing is the number of devices that we're supporting from a client standpoint has been exploding. And that's because we do a lot of bring-your-own-device programs. So we support a whole wide variety of different devices. And I think there's a number here, too. I want to say about half of the devices are employee-owned versus Intel-owned. So that's pretty interesting. All right, so let me give you a little bit of background. We started our cloud journey back in about 2009 when we formulated our strategy. And when we started, it was really about adopting the cloud from the inside out. Cloud was a pretty new thing back then, and we were trying to figure out how we wanted to engage with it. We decided what we would do is take our own data centers and turn them into the cloud computing model, and then we would extend that out in terms of public cloud scenarios. So what we were doing was basically that internal transformation, and our use of public cloud has been very, very selective over time. So where we are today is that we're still using software as a service in terms of public cloud. And we've created this SaaS playbook to make it easier to adopt SaaS solutions and onboard them more quickly. We have full-blown infrastructure as a service and platform as a service. And we're working with hybrid cloud models right now. And we do have SDNs already rolled out. And we've also introduced a database as a service. And where we're going in the future, so where is all the action right now? We've been consolidating clouds together in terms of infrastructure as a service. So we're using OpenStack as our control plane for that.
And then we are looking to do some other things around our control plane and automate provisioning so that we can provision bare metal the same way that we provision virtual servers. And we have a vision around creating a smart orchestration layer for all of our workloads. So I will talk about that some more during the presentation. All right. So how many people here have heard of the Open Data Center Alliance? OK, a few people. So we participate as part of the Open Data Center Alliance. It is a consortium of about 300 companies that have gotten together to express requirements for how we consume cloud computing. So we publish requirements. We publish usage models. The idea is that we're trying to gang up on cloud service providers so that they actually build technologies that we can use and services that we can consume. As part of that consortium, we have developed something called the Cloud Maturity Model. And you can go to the odca.org website and download that. This is just one model from that maturity model, where it shows the enterprise adoption roadmap around cloud. What you can see is we've been pretty typical. What we've found is that our trajectory for how we're adopting cloud from an Intel IT perspective is pretty similar to the way a lot of other large IT organizations are adopting cloud. And what you can see here is that Intel IT is in this stage two production, stage three investment area in the middle. But it's kind of interesting. There are some tools that are available. You can kind of benchmark yourself and then think about how you get from one level of maturity to the next. So the bottom line here is that the end game is to achieve this federated, interoperable, and open cloud. That's really where we're going. And the other thing that you'll notice here that I think is pretty interesting is the migration of legacy apps to cloud-aware.
So we're really trying to make that transition within the environment too, and you can see the emergence where PaaS fits into things. So I mentioned before that we have really been focused on enterprise private cloud. And there are a couple of reasons for that. One is that we're pretty big. So you could say that we're big enough to be our own service provider, and therefore we want to consume our own capacity before we look to others for capacity. So that's been a big driver for us. But also, we want to have agility and flexibility while managing our costs. There are some reasons why we might want to host something outside of the enterprise. For example, if all of the users are outside of Intel, there may be a reason that we don't want that type of traffic, especially if it's very unpredictable. We might want to actually host that in a public cloud somewhere. But in general, that's pretty rare. So we try to use our own resources first. So as I mentioned, we're offering platform as a service and infrastructure as a service. Basically, when you think about an enterprise, there are two types of applications that we want to host. One is that we've bought something. It's like a packaged solution. It comes from a provider, and it needs to get hosted somewhere. So you get an installation kit, basically, when you buy something like that. And in that case, you have to deploy on infrastructure as a service because you pretty much need control over that entire stack. But the other type of application is custom application development. And that's where Platform as a Service fits in. So we want Platform as a Service to be really the first option for where we land custom applications. And then, again, if you need something deeper, something more complex, where you want access to the operating system, the web server, that level, then you go with infrastructure as a service.
The other thing is that we have a team of people that we call our cloud brokers. And our cloud brokers will help anyone who's trying to host an application, whether they're somebody from a business unit or somebody else from IT, who has something that needs to get hosted in a data center. They'll help that application owner make a decision about where to host that application. And our cloud brokers not only cover our private cloud, but they cover public cloud, too, and understand where we have service agreements in place. All right, so let's move on to Platform as a Service. How many people here are very comfortable with the concept of Platform as a Service and what it is? All right, really good. OK, so we're not going to spend too much time on this. But the one thing I do want to say about it is that our developers that are using Platform as a Service love it. And the reason they love it is because it really abstracts them from the infrastructure. When you're in a self-service model for infrastructure as a service, there's a lot of responsibility that comes with that self-service. If you go to use infrastructure as a service in Intel IT, you are held to the same standards as a systems administrator. And therefore, you have to have all the training that goes along with it, and you have those same responsibilities. So that can be a little onerous on some teams. And so Platform as a Service is great for them. The other thing, too, is I point out this key takeaway down here. This has been our mantra for the past couple of years: we want to make it possible to land applications in less than a day. And I know we've kind of heard that before, even from an infrastructure as a service standpoint, but with Platform as a Service, there's even more automation in place that really makes that possible.
And the other part of it, too, is it's not just the technical, how do you land that application, but it's also the governance that's around it. When you're in a real enterprise, you have certain governance requirements around the application, especially for applications that are externally facing. And as I just mentioned before, we have a lot of these external devices in our environment. And so because we want to be able to support those, bring your own devices, that means more and more of our applications are becoming externally facing. And it's changing the mixture of applications that we have in our environment as a result. Okay, so we started looking at Platform as a Service back in around 2012 when we were trying to figure out how do we want to go about offering Platform as a Service as an IT department. And so one thing that we did to begin with was I did a broad-based survey, which I'm going to show you, I think it's on the next slide. Yeah, it is. Okay, so I'll show you that. And also looked around for where could I find Platform as a Service products or open-source projects or anything, right? Could I find something that we could actually land on our own cloud where we had done a lot of work around infrastructure as a service, right? And so the list at that time was pretty narrow. It's pretty slim, right? Because most of the Platform as a Service solutions you could get were offered in public cloud, but there was not an enterprise version of that. And what you can see here, too, are some of our vectors around what are we looking for, what we're trying to really get out of Platform as a Service. And it came down to these three items. One is around the agility, which I've talked about already. Can you really make it possible to land apps in less than a day? And then also elasticity, we were looking for that in tandem with resource utilization. And then finally, design for failure. 
So that was one of our goals, too, to really make it possible and easier to deliver applications that had a higher level of availability, right? So our strategy was, after looking at all of this, we decided to standardize on Cloud Foundry. And the thing that we really liked about it is that it was just one platform that supported a very large set of application diversity. So let me show you what that looks like, right? As I said before, I ran a broad-based survey, and of course, Intel frowns upon it if you try to survey thousands and thousands of people, so you have to do a random sampling kind of thing. I looked for people who actually had job codes that were software development related, and then we did a random sampling of those folks. That's how we came up with our list of who we were going to survey. And then the first question I asked them is, are you developing apps that actually get landed in a data center somewhere? Because we have a lot of developers at Intel who don't do that. They create products that go on silicon and come on CDs, because Intel is an ingredient provider. So just because they're doing software development doesn't necessarily mean that they're gonna land an application anywhere. So I had to narrow it down from there. Anyway, that was the first question on the survey. If you answered that you don't land anything, you're out. And then for the rest of them, I asked them about, well, what kinds of tools are you using and what's emerging? And what I found really confirmed our assumption that we had a lot of application diversity, and that there wasn't a center of gravity. Especially at that point in time, if you looked at a platform as a service, it supported maybe one or two programming languages.
Now I think everybody's starting to expand out a little bit, so some things are a little bit different, but that was really important for us. Because especially within IT, we might be able to do a little bit more standardizing in terms of please use these tools and so forth. But when you talk to Intel business units, you can't do that at all. Everybody's picking their own tools, runtimes, and application frameworks. And so we as an IT department have to respond to that. And I'm proud of myself on this last one, because my last question in the survey was, hey, if you like this concept, would you be willing to participate in our pilot? I got a few volunteers that way, so I was pretty happy. So let me talk a little bit about how it works. Has anybody here used Cloud Foundry before? Okay, a few people in the audience. Okay, so this is from the user point of view. So if I'm a developer and I have an application, I want to land it somewhere, right? What am I doing? I've got some kind of development environment somewhere, right? Either it's in a lab or maybe it's on my laptop, wherever it is. And from there, I want to connect to the platform as a service. So the platform as a service resides, you can think about it this way, in a collection of virtual machines, right? And for us in our environment, it's roughly 40 virtual machines for every instance of platform as a service, okay? So some of those virtual machines are where the applications themselves are hosted, but other virtual machines run what I call the control plane around Cloud Foundry, the other components that make Cloud Foundry work. All right, so that's what you see here at the bottom. You can see I have a couple different boxes there. Those are all the parts, right? So those are pre-provisioned within our cloud, so they are available. What happens is the developer then uses one of these different options.
They can use an API, a command line interface, or they can use the portal, and they connect to that platform as a service that's already there. And then what they do is issue a push command, one command, to push their application into the cloud. And what happens is the platform as a service decides where to host that application. And one thing that we do is we recommend that every developer, when they go to push their application, please push at least two instances of the app, so they'll get some inherent high availability as part of that, right? So that's how that works. And then they can manage their application from there. They can issue a command to scale the application, one command, and it can scale out or back. They can start and stop it, they can go look at their logs, they can do all kinds of stuff from there. But they don't necessarily know what server they've landed on or what the IP address of their virtual machine is or anything like that. Okay, so that's the real high level of how it works. So we've deployed it into our production environment. It is available for people to use. We haven't done a lot of pushing it out to a really, really large audience. I think we're gonna end up doing that the first part of next year. But it's been around for a while. And we decided to go ahead and do a survey of some of our top users. So what we did was we looked at about 16 application owners slash developers. And those 16 owners have deployed about 56 applications among them. And we found some interesting numbers, some interesting results, right? So we found that, well, first of all, 40% of them had more than 10 years of experience doing development, which I thought was really, really interesting. And then 57% were developing on what we would consider to be a next-generation platform as opposed to something more traditional.
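Just to make that push-and-manage workflow concrete, here is a sketch of what a developer session looks like with the cf command line interface. This is illustrative only: the app name and API endpoint are made up, and the exact flags can vary by CLI version.

```shell
# Log in to the Cloud Foundry API endpoint (hypothetical URL)
cf login -a https://api.cf.example.intel.com

# One command to push the app; -i 2 asks for two instances,
# which gives some inherent high availability
cf push inventory-web -i 2

# One command to scale out (or back)
cf scale inventory-web -i 4

# Start, stop, and look at logs, all without knowing which
# server the app landed on or what its IP address is
cf stop inventory-web
cf start inventory-web
cf logs inventory-web --recent
```

The point is that the developer never touches a virtual machine directly; the platform decides placement.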
And then we also found that 67% of them had already used an infrastructure as a service, right? So they already knew what infrastructure as a service was about as they were trying out platform as a service, which is kind of interesting. And then about 20% of them did have some issues, and they ranged across all kinds of issues. Some of them maybe just needed some more training and really needed to understand that this is a whole different model, a different thought process. So some were educational issues, and some of them were a lack of some basic services in the environment that they actually needed. So the actual platform performed really well. It still is performing well, but we still do have some challenges: if you're gonna really deploy an enterprise application there, it needs a certain set of services for it to run well. And the most interesting statistic here is the last one, right? Where we asked, does this really speed you up in terms of your productivity? And they all said yes. Not one person said no. So that's pretty cool. All right, so this is the standard Cloud Foundry architecture. If you go to the open source website, you can see a copy of this and get all kinds of detail on it. But what you can see, as I mentioned before, is there are some virtual machines that actually host the applications themselves. And the rest is what I called the control plane. So this is really what that looks like, all the different components of it. So you've got routers; for example, we've got three routers in our instance of Cloud Foundry that we've deployed. And then you can see some of the other components here, right? So you've got a login server, and the cloud controller is the component that, when you connect and go to push your app, packages and builds your app and pushes it out, right?
To basically the first of what we call a DEA, a Droplet Execution Agent, which is basically a node that actually hosts the applications, right? So the first one that says, yes, I have the right capacity and the right stack, is the one that gets it. And then the applications themselves are hosted in something called a Warden container. And a Warden container is, you know, a container, just like a Docker container, but instead this is a Warden container, and it's something that's very specific to the Cloud Foundry community. There's a big project going on right now called the Diego project, where we're transitioning over to Docker, so that'll be another option, along with something else that's called Garden. But Warden is what's being used right now in the version that we're running. The other thing that is really, really cool is that there's this concept of buildpacks. Has anybody here heard of a buildpack? You know what that is? A couple people, just a couple people. So the buildpack is a concept that comes from Heroku, right? And it's basically the packaging of an application runtime stack so that it can be downloaded dynamically when you go to push the application. So that is really cool. What that means is that we don't have to pre-provision application stacks and have VMs that are dedicated to different application environments. When the person pushes the app, we download that application stack on demand. I think that's really cool. And you can get these buildpacks from the community, right? You can get them from the Cloud Foundry community, you can get them from Heroku, but we're actually building some ourselves at Intel, right? So if you have specialized needs where you wanna abstract more middleware, you can do that with a buildpack, which is really great. All right, so the blob store, that's actually used for that.
So when you push your app, if you wanna scale out, you don't have to have the source code. We utilize the blob store, and you can continue to push out multiple instances as you need them. And Cloud Foundry will manage the anti-affinity so that they don't all go on the same virtual machine. All right, then you can see there are some other things down here as well, right? So we've got service brokers. There are some native services that are available in Cloud Foundry. We're using two at Intel: we're using Redis and we're using RabbitMQ. But we've decided not to utilize any of the databases that come as part of Cloud Foundry, and that's because we wanna offer a standalone database as a service, all right? And then you see these other components here. There's a message bus which connects all the components together, and then the metrics and logging components. All right, so let's talk a little bit about how that works in terms of OpenStack, right? So I told you that there are roughly 40 virtual machines that get created in an initial instance of Cloud Foundry. And this is just a little high-level description of how that works. So basically the key to this is we're using a tool called BOSH, right? And that's something that's available within the Cloud Foundry community. It is a tool that is used to provision, well, you can use it to provision more than Cloud Foundry, but it is really being used within the Cloud Foundry community to deploy Cloud Foundry. So we're using this BOSH tool with a plugin specific to OpenStack. So think of it that way: it's a deployment tool, and then it's got a plugin. So if you wanted to deploy to a different environment, I know that they support VMware, for example. In this case, we want OpenStack. Or if you wanted to deploy to Amazon, for example, you can get a plugin for that. So there are different plugins that are available. And then the key to using BOSH is something called the deployment manifest.
So the deployment manifest describes the system, basically. And then you go ahead and run that through BOSH. So you can see there's a process for installing BOSH so that the tool itself is installed. Then you use BOSH to install Cloud Foundry. And then for us, we do a post-installation step, because the BOSH tool doesn't support everything that we want to do from an Intel IT perspective. We have some additional things that we want to put within the virtual machines that get provisioned. And so we use Puppet after the fact, and that runs against our Cloud Foundry installation. So that's the basic overview of how that works. So when we go to spin up a version of Cloud Foundry, it's creating all of the VMs. It's utilizing the OpenStack APIs directly. And then we do that post-installation with Puppet. So one other thing that's interesting, this last bullet here, is that we don't patch it. The process is, if we need to make a change to the environment, we change the deployment manifest, and we do a rolling upgrade, basically. So we'll provision new virtual machines, we'll kill the old ones, and the new ones will have all the stuff in them that we want, as opposed to patching the ones that are already running. So we don't do that at all. That's another good reason for people to deploy two instances of their apps, so that we can do things like that. So that's the basics of how it works. Let me just show you, I have one more slide here around the deployment manifest, so you can get an idea of what's in a deployment manifest. I just wanted to show you that it kind of looks like this. It's been our experience that it takes a little work to become familiar with what the deployment manifest looks like, and how to keep that up to date, editing it when versions of Cloud Foundry change. So we invest a lot of time in maintaining that. And in our version, what we do is we have a lot of comments in there. So from an enterprise perspective, we can manage that.
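To give a flavor of what a deployment manifest contains, here is a heavily trimmed, illustrative fragment in the BOSH manifest style of that era. Every name, ID, and size below is a made-up placeholder, not our actual configuration.

```yaml
# Illustrative BOSH deployment manifest fragment (placeholders throughout)
name: cf-internal
director_uuid: REPLACE-WITH-BOSH-DIRECTOR-UUID

releases:
  - name: cf
    version: latest

networks:
  - name: cf-net
    type: manual
    subnets:
      - range: 10.0.0.0/24
        gateway: 10.0.0.1
        cloud_properties:
          net_id: REPLACE-WITH-OPENSTACK-NETWORK-ID  # OpenStack-specific plugin property

resource_pools:
  - name: common
    stemcell:
      name: bosh-openstack-kvm-ubuntu-trusty-go_agent  # base VM image for OpenStack
      version: latest
    cloud_properties:
      instance_type: m1.medium                         # OpenStack flavor for these VMs

jobs:
  - name: router
    instances: 3          # e.g., the three routers mentioned earlier
    resource_pool: common
    networks:
      - name: cf-net
```

The OpenStack-specific pieces live under cloud_properties, which is what the OpenStack plugin consumes; swapping the plugin and those properties is what lets the same tooling target VMware or Amazon instead.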
So if you are just starting out with Cloud Foundry, you can use a tool. So there's a Spiff tool that you can use. And it will generate a sample for you. And that's really the way to go about creating this deployment manifest. Use the tool. And so you can see there's a tool and a script that goes together along with this sample stub. And then that generates an initial deployment manifest for you. And then you can kind of go from there and work with it. So we've been all about kind of doing it yourself in the community. And so working with Cloud Foundry, we get into all the gory details there. All right, so I wanted to talk a little bit more in terms of infrastructure as a service. So being built on top of infrastructure as a service, I mentioned to you before, one thing we've been doing is working on our common control plane. So we're starting to converge our clouds in Intel IT. And so I wanted to share this with you. I won't go through the whole thing here, but the idea is that OpenStack is that open standard open source control plane. And we've been much like everyone else, right? And looking at the Cloud maturity model, you kind of go through different iterations of the evolution of your cloud. So our first cloud was very, very proprietary. Had a lot of proprietary components in it. And our version 2.0 cloud, we tried to be as open as we can possibly be, right? And so now we're merging those together. And we're picking and choosing open and proprietary components and doing some mixing and matching. And you can see that there's some drivers behind that that we want. We want to be able to support multiple hypervisors and so forth. So the nice thing about it is, if you see top left, is Platform as a Service Automation. So that sits on top of that. It still deploys through the control plane. And then all of that is abstracted from the environment. So it's pretty cool. All right, so of course we have some challenges, right? 
And there are some challenges around infrastructure as a service, right? As we all know here, the OpenStack community and software are still maturing and that sort of thing. So we do have some of those types of things to contend with. First is around object storage. You saw that the PaaS solution uses object storage. And one thing is, if you look at how many of our enterprise applications use object storage, very few of them do. The reason is because they are legacy applications, so they care about block and file storage, right? But the new emerging cloud applications are using object storage. So we have actually been a driver within our infrastructure as a service, driving those requirements. And I think as we transition more from legacy apps to cloud-aware apps, object storage will become more and more important, OK? All right, there's also the upgrade strategy to move to a dedicated set of hosts. Because we offer infrastructure as a service, and you can go and provision virtual machines on demand, we've had a lot of people provisioning virtual machines, right? So that starts to constrain the environment. And in terms of the performance, we've decided we're going to separate platform as a service onto a dedicated set of hosts so that we can eliminate some contention problems. Ultimately, you don't want to do that, right? You want your clouds to be big and multi-tenant. You want them massively scalable. And we still have some resource contention, so there are still things that the community needs to work on. For the time being, we're going to isolate ourselves onto our own hosts, but still use OpenStack so that we can converge them later on, OK? Also, I mentioned open source maturity around OpenStack, but there's also open source community maturity around Cloud Foundry as well, right?
But it is becoming more mature over time, and Cloud Foundry recently formed their foundation, very similar to the OpenStack Foundation. Intel is a member of both the OpenStack Foundation and the Cloud Foundry Foundation. And I'm personally a member of the Cloud Foundry Community Advisory Board for the technical side of it. So we are more invested from an Intel standpoint. Also, one thing that I didn't talk about too much is that we can support a whole variety of programming languages and frameworks from one platform, which is really cool. And there's another open source project called Iron Foundry, which supports .NET. So what that means is, think of that collection of 40 virtual machines: some of them are running Linux, and guess what, some of them are running Windows, and it's all treated like one platform. So that's pretty cool. So Iron Foundry is being pulled into the core project. It's still in the incubator right now, and I hear through the grapevine that that will happen in Q1. It'll all be part of the same project, which will make our lives a little bit easier, because this is an important requirement for us. All right. And keeping up with frequent community updates, this is a challenge because, as you guys know, release management and how to keep up with the community is a challenge for us for infrastructure as a service too. And for platform as a service, it's even more pronounced, I think, because with Cloud Foundry, you really have to stay up to date in terms of the different point releases. And then finally, having more cloud-aware apps. So more and more cloud-aware apps that are abstracted from the environment, that's a big thing. And security is one of the key areas. You heard me mention before, too, that we need some more enterprise services that we can consume. This one's been a key area for me: IdAM, identity and access management.
So our security engineering team is working on that whole strategy of how we abstract those identity and access management services. And we want RESTful services. In other words, have the security integrated into the app so that when it scales or bursts, it takes its security with it, right? You want the security to go with the app. So that's really important. Some of the older models, where you would use Kerberos or Windows integrated authentication, don't work well in a highly abstracted environment. So that's why this transformation is hugely important. This has been probably my number one problem with really going big with Cloud Foundry: having those services. By the way, they're in beta right now. We're testing them. So I'm hoping to have that soon. So I just wanted to say a little bit more about cloud-aware applications. What is a cloud-aware application, and how is it different from traditional applications? We've actually published a paper with the Open Data Center Alliance that, again, you can go to odca.org and download. Or you can go to the Intel website, and we publish a lot about our cloud there as well, including a pointer to this particular paper. We worked with several other companies, most notably the Walt Disney Company, to define some design patterns so that we can say not only what the characteristics of cloud-aware applications are, but how you implement them. So we created design patterns for that. That's really cool. You can see these are some of the major characteristics that we expect for cloud-aware apps. The bottom line is that you're not going to be able to do all those cool things with hybrid cloud that you want to do unless you have apps that are really written for the cloud. So if you want to burst and you want to move things around, you really have to have those cloud-aware apps. And our vision is this key takeaway right here.
We want all of Intel IT's apps to have a multi-platform front end, in other words, to run across a whole variety of client devices, operating systems, and web browsers. In the back end, we want it to be all cloud, to be cloud-aware, and to be connected with services, preferably RESTful services, with the security integrated into them. So that's the vision for all of our application development. All right, I could give a whole talk on cloud-aware apps. So where are we going in terms of our platform as a service capability? This is just to show you where we're going with it. What's already released are things like a basic web portal, which is our own web portal, the services I mentioned before, some basic lifecycle management around database as a service, and work on the whole platform lifecycle, really. So that's available today. And the database as a service, by the way, is pretty cool as well, though it's more of an abstraction in the environment. You go to the web portal, you say, give me a MySQL database, and it returns a connection string. Then what the developers can do is use that connection string to create their tables and manage their data, while everything else is managed by the IT hosting team. We take care of the high availability of the database, the backups, and all the performance tuning. That takes a lot of load off the development team. Instead of provisioning a database in a virtual machine and having to manage everything themselves, they just have to worry about their tables and their data, which is a lot less work. So that's pretty cool. So what's in progress right now? We're enhancing our web portal a little bit more. 
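To make the "ask for a MySQL database, get back a connection string" flow concrete, here is a small sketch. The URL format and the environment variable name are assumptions for illustration; the talk doesn't specify what the portal actually returns.

```python
import os
from urllib.parse import urlparse

# Hypothetical connection string of the kind a DBaaS portal might hand back;
# the variable name DBAAS_CONNECTION_STRING is invented for this example.
conn = os.environ.get(
    "DBAAS_CONNECTION_STRING",
    "mysql://appuser:s3cret@dbaas.example.com:3306/orders",
)

# Split the string into the pieces a database driver needs.
parts = urlparse(conn)
db_config = {
    "host": parts.hostname,
    "port": parts.port,
    "user": parts.username,
    "password": parts.password,
    "database": parts.path.lstrip("/"),
}

# A MySQL driver (e.g. PyMySQL) would consume db_config to open the
# connection; HA, backups, and tuning remain with the hosting team.
```

The developer's side of the contract is just this parsing step plus their tables and data; everything below the connection string stays with the platform.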
We're doing more governance automation to make things easier. When you have an app and you want to deploy it into production, a lot of times you go in front of a committee and you have to get an approval before it actually gets released. But what we did with governance is take the platform through the governance process, so that every app that lands on top of it doesn't have to go through it again. That's the concept there. And we're also doing additional automation around the governance process, because there are certain requirements for externally facing applications, especially when there are firewalls involved, and we care about things like our brand identity and so forth. We're also working on fully curated buildpacks, more resiliency in the platform itself, and some additional security hardening. And you can see that out in the future we're going to do more work on creating application design patterns around different high availability models, and we're going to be working with our developers again to really educate them about cloud-aware applications. We've been hosting a series of one-day hackathons with our internal developers, and it's been really cool, because during the hackathon we have them work on cloud-aware apps, but then they have to host them on the platform as a service. So that was my evil plan for introducing them to platform as a service, and it's worked really well; it's been really popular. And more auto-scaling scenarios. I just wanted to say something else about our hybrid strategy. This is the strategy for our cloud, regardless of whether it's infrastructure as a service or platform as a service. The idea is that developers should not have to worry about where they're hosting their apps. 
So they deploy through a smart orchestration layer, and our OpenStack control plane is the beginning of that. In the future it will be more comprehensive; like I said, it's got to work with platform as a service as well. The idea is that you specify your application, and you specify policies for the application. Maybe there are some special security needs or geo-requirements, or a certain level of confidentiality associated with the data in that application. So we can imagine certain policies that we want to expose and utilize through our smart orchestration layer, and then have the app deployed to the right cloud based on those policies. If we can abstract that from the users, then we can do more work behind the scenes to optimize things for Intel. Say we want a certain percentage on public cloud and a certain percentage on private cloud; we can move that slider bar, and maybe we can even do it more dynamically in the future. That's really what the vision is there. For the short term, we expect that a lot of apps will still live in our own data centers, because we do have that capacity. But in the future, if we wanted to take advantage of geos where we don't have a data center, or maybe there's a special on some public services or something like that, we want to be in a position to be a lot more flexible. And we do expect that an app's components are probably not all going to run in just one cloud, and probably not in one service model. I could have an app where part of it is hosted on platform as a service, part of it is hosted on infrastructure as a service, and maybe it's consuming a SaaS application; it's going to run across all of those things. So our vision has to encompass all of that. In summary, like I said, this is our enterprise story, and I'm sticking with it. Our direction is hybrid cloud, and we've had a lot of success with our enterprise private cloud. 
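The policy-driven placement decision could look something like the following sketch. The policy fields, cloud names, and geo list are invented for illustration; they are not Intel's actual orchestration API.

```python
# Illustrative target names, not real endpoints.
PRIVATE_CLOUD = "enterprise-private-cloud"
PUBLIC_CLOUD = "public-cloud"

# Geos where we assume an internal data center exists (hypothetical).
INTERNAL_GEOS = {"us", "eu"}

def place(policy):
    """Choose a target cloud from the app's declared policies."""
    if policy.get("confidentiality") == "high":
        return PRIVATE_CLOUD      # sensitive data stays internal
    if policy.get("geo") and policy["geo"] not in INTERNAL_GEOS:
        return PUBLIC_CLOUD       # no internal capacity in that geo
    return PRIVATE_CLOUD          # default: internal capacity exists

print(place({"confidentiality": "high", "geo": "apac"}))
print(place({"confidentiality": "low", "geo": "apac"}))
```

The point of the abstraction is that the developer only declares the policy dictionary; the "slider bar" between private and public capacity lives behind this function and can be retuned without touching any app.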
We're really proud of it today. And like I said, you can go to intel.com and download a lot of material; we do a lot of sharing as an IT department about how we do things in Intel IT. We like open standard components, and you can see lots of APIs here. And in utilizing OpenStack and Cloud Foundry together, we think there's a natural affinity, and very vibrant, healthy communities, like we're seeing today. Our goal is to make it possible to deliver applications in less than a day. We think that having platform as a service really extends the value of infrastructure as a service, and by getting people to start using the platform as a service, it reinforces those concepts around cloud-aware apps. And we need a lot more of those. All right, let's see if we can take some questions. No, it's not a limit at all; it's just kind of our starter package. We figured that we can host more than a couple hundred apps in a cloud of that size. But the cool thing about Cloud Foundry is that if all of a sudden there's a run on .NET apps today, I can add more Windows virtual machines. You basically go to the deployment manifest, add a couple more, and have it scale out; and you can have it scale back as well. So it's just the starter, where you're going to start. Okay, any other questions? Yeah? Okay, well, you may view it as being weak; I view it as being nicely abstracted. Yeah, we had a lot of noisy neighbor problems, especially to begin with. Now we have SDN too, so we can create some security groups. But that's part of the reason we've decided to deploy on our own set of dedicated hosts in the short term. I think as things mature over time, we'll be able to solve that and bring that back together, because we do want fewer, bigger clouds, right? 
And fewer, bigger data centers. So, yes? We did not use Trove, but I'm going to take a look at that this coming year. Yes, we did. Yeah, we did. Not for Cloud Foundry, no. It's basically: you go to the portal, you get your connection string, and you just use it. It's very, very simple, but it's been really effective, and it's been really popular too; we have a lot of people using it right now. Do we do it on top of? On top of, yeah. And we do have some Windows hosts in there, I think; I'd have to double-check on that. Yeah, we're not doing that; we don't think that's really a good way to get performance. But I think I'm out of time because the next group is coming in. Thank you, everyone.