Hello folks, I'm Das Kamhout. I work in Intel IT as a Principal Engineer. I'm here with Don Dugger, who works in our Open Source Technology Center, and we are here to talk to you about Intel's work with OpenStack. So first of all, I want to give you an overview of what Intel does with OpenStack. Some of you may know we make CPUs, we make silicon; we also make software, and we have a pretty long-standing history of helping in the open source industry. Don's going to give you a lot of details on that, but if you look across the three areas where we're focusing, one is our contributions: we've now made it into the top 10 contributors as part of Grizzly, and Don's going to explain a lot of the details on what that is and what it is that we're doing. The second point is for my neck of the woods, and this is how we actually utilize the technologies. So many years ago we were one of the large drivers in the EDA silicon environment for moving from Unix to Linux, so we invested heavily in Linux. We're in the same situation here, where we're moving very aggressively forward and utilizing open source capabilities for what we do with cloud computing. We've been running a cloud environment for quite a few years. My background is in grid computing, where we heavily use Linux and other open source capabilities. So what I'm going to do is give you an overview of what we do with Intel IT's open cloud. We shared quite a few details at the last summit, so I'm just going to give you an update, and that way we can spend more time on Intel's other contributions. And the third element is a thing we have called Cloud Builders, and these are basically blueprints, reference architectures, for how to stand up a cloud environment. So we invite various solution providers to come in and basically prove out, step by step, this is how you build a cloud, so that other IT shops or system integrators can take those blueprints, bring them into their own environment, and walk through the steps of actually building a private cloud. So our focus is enabling enterprises and cloud service providers with open source capabilities for cloud. So this is some of the key Intel contributions, and again, Don's going to go into much more detail on this. If you look on the left, you see the specific contribution, which project it is, which release it landed in, and some comments. So everything from the trusted filter and the trusted filter UI: these are based on a thing we have called trusted compute pools. For those of you that are involved in the security of hypervisors, you're probably familiar with things like rootkits, or being able to determine whether that hardware is actually secure, whether that hypervisor is secure. So this technology, which Don will give more details on, is basically about how you ensure that an environment can be trusted for specific high-security workloads. If you work in the enterprise IT space, you'll find that in some situations people actually don't virtualize certain workloads just because of the risk that still exists with hypervisor technologies. And if you follow what some of the research environments are doing, they've actually found ways to attack other VMs sitting on the same hypervisor through various attack vectors. So we'll talk about what we do with trusted compute pools. We have a filter scheduler, so this is an intelligent scheduler for Cinder.
So you'll see where we're basically investing in a lot of the core capabilities that are usually associated with the hardware. Multiple publisher support for Ceilometer, I always mess that name up. The Open Attestation SDK, which is related to the trusted compute pools, and COSBench, which is basically a way to do benchmarking against an object store environment. So those are the high-level points, and what Don's going to take you through is all the details.

Yeah, one of the first things that we started to work on in OpenStack was creating something we call trusted compute pools. And the basic idea is Intel has a technology called TXT, Trusted Execution Technology, and that enables you to verify that what you're trying to boot is what you wanted to boot. Okay, basically what it does is it uses the TPM, the Trusted Platform Module, to store unique keys associated with each of the elements that you're trying to boot. And as you boot up, you measure what you're trying to boot, compare it with the keys in the TPM, and only if they match, and you know your stuff is good, will you execute it. So utilizing this, you can verify the BIOS, the boot block, the initial boot code, all the way up through the initial operating system that you're trying to run. So this gives people the confidence that, yes, you are booting the appropriate software, you've got a valid thing, everything is working, and everything is there properly. One of the other major advantages of this is that it helps with compliance issues. This gives you a way of verifying to a third party that, yeah, we booted properly, so if something goes wrong, it wasn't because we booted the wrong software; we can guarantee you that. So this slide kind of goes into the details of how we actually implement that. You notice at the top we're talking about the things that had to be put into OpenStack proper. There you'll see the yellow things, which are the things that we've added in order to support trusted compute pools, where one of the major pieces is the trusted filter, that big oblong thing there, which is just one of the scheduler filters: basically, when you say "start me a virtual machine," I will go find the host that machine is going to be running on, and I will ask, is that host a trusted host? And you'll notice in the center the attestation service. The attestation service is an external service that runs, that you can query to say: this is a host, is it trusted or not? And we've got various protocols and whatnot in place so that, if you look down at the bottom where you're actually running a host, the host boots up and executes the TXT code to guarantee that you're booting the right thing. I'm pointing at my screen here, and I'm sorry that I don't have a pointer up there, so you'll have to read through this slide and hopefully you can find the things I'm pointing at, virtual pointing. And then there's a TrouSerS module, which is part of the processes that run on the host, and it talks to the attestation service, and through a well-defined protocol it'll exchange keys and guarantee that what you did is what you thought you were trying to do. Oh my gosh, I should have a pointer thing. Okay, there we go, now I've got it, and that's probably the last slide I'm going to be pointing at, but I can point at it.
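To make that flow a little more concrete, here is a minimal, conceptual sketch of a "trusted hosts" scheduler filter that asks an attestation service whether a candidate host is trusted before the scheduler places a VM on it. The endpoint, payload, and response fields are hypothetical stand-ins; the real Nova trusted filter talks to an Open Attestation server with its own API, so treat this purely as an illustration of the shape of the logic.

```python
# Conceptual sketch only -- the attestation URL, request payload, and response
# fields below are made up; the real filter queries an Open Attestation server.
import requests

ATTESTATION_URL = "https://attestation.example.com/api/v1/hosts"  # hypothetical

class TrustedHostsFilter:
    """Pass only hosts that the attestation service reports as trusted."""

    def __init__(self):
        self._cache = {}  # hostname -> reported trust level, to limit queries

    def _trust_level(self, hostname):
        if hostname not in self._cache:
            resp = requests.post(ATTESTATION_URL,
                                 json={"hosts": [hostname]}, timeout=5)
            resp.raise_for_status()
            # Assume a response like {"hosts": [{"trust_lvl": "trusted"}]}.
            self._cache[hostname] = resp.json()["hosts"][0]["trust_lvl"]
        return self._cache[hostname]

    def host_passes(self, hostname, request_spec):
        # Only enforce attestation when the request asked for a trusted host,
        # e.g. via a flavor extra spec; the key name here is illustrative.
        wanted = request_spec.get("extra_specs", {}).get("trust:trusted_host")
        if not wanted:
            return True
        return self._trust_level(hostname) == wanted

def filter_hosts(hostnames, request_spec):
    """Return only the candidate hosts that pass the trust check."""
    f = TrustedHostsFilter()
    return [h for h in hostnames if f.host_passes(h, request_spec)]
```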
So that, in kind of a nutshell, is the guts of how we do it. We had to put in the trusted filter, and, I didn't mention this, we also put in some enhancements to the Horizon UI so that you can actually see: okay, this is a trusted host, this is not a trusted host. It helps to be able to know those things. Okay, now we're going to switch gears a little bit and talk about COSBench. COSBench is a cloud object storage benchmark. We're kind of an engineering company, and to an engineer, if you can't measure it, it doesn't exist. Okay, so we really haven't seen too many really great benchmarks floating around out there, so we decided to create one. And the first thing we're attacking is the ability to measure how fast your object storage is working. So we've built a framework that has the ability to deploy an appropriate set of servers and whatnot, and then deploy some benchmark processes that will extract data from the servers and measure the bandwidth and whatnot that you're getting. And you can utilize this to find bottlenecks in your particular environment, to see whether or not different hardware is doing different things properly for you, and all that kind of stuff. You know, as I say, if you can't measure it, it doesn't exist, so now you're going to have a chance to measure it, and we will be enhancing it in the future, hopefully hitting other things. I mean, I'd certainly like to measure things like: how are the compute nodes doing? Are you hitting those properly? And we're very big in networking, so I expect to see networking measurements and whatnot coming down the line. Again, this gives you kind of an architectural picture of the way COSBench works, where the important thing is you have the controller up here, which drives the work down through and hits the storage system. And it's all web-based, so it's very easy to utilize, easy to measure things, easy to find out how the stuff is going. And I should also emphasize that I work for the Open Source Technology Center here: everything we do is open source, okay? So we might have created COSBench, and I don't even know if it's been officially released to the public yet, but it will be. Okay, we're not in the business of creating proprietary solutions, at least in my division. So expect to see that available, and you can grab it, do whatever you want with it, utilize it, modify it. We love modifications. Those are wonderful. Okay, another thing that we've been working on is the filter scheduler for Cinder. Now, for those of you that are familiar with OpenStack and the Nova scheduler, this will look remarkably familiar; in point of fact, it's almost exactly the same. The basic idea is you create a set of filters, and these filters are pretty much analogs of the same filters that you'd see in Nova. But now what we're doing is applying these to Cinder, so these filters can say: this particular storage server is not appropriate for the particular workload I want to run, or this particular server is appropriate. And so the filters will make that decision for you. And then we have a weighing function that will go through and try to decide, of those storage servers that passed the filters, which is better than the others. And based upon that, you'll get a rank-ordered set of servers for your storage. And so this enables you to better control how you place your storage, how you access your storage.
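As a rough, self-contained illustration of that filter-and-weigh approach (not Cinder's actual classes), the sketch below represents each storage back end as a dictionary of capabilities, drops the back ends that cannot serve the request, and ranks the survivors with a simple free-space weigher. The capability names and metrics are invented for the example.

```python
def capacity_filter(backend, request):
    # Reject back ends that don't have room for the requested volume.
    return backend["free_capacity_gb"] >= request["size_gb"]

def capability_filter(backend, request):
    # Reject back ends missing a required capability (names are illustrative).
    required = request.get("required_capabilities", set())
    return required.issubset(backend.get("capabilities", set()))

def free_space_weigher(backend):
    # More free space -> higher weight; real weighers can combine several
    # normalized metrics with configurable multipliers.
    return backend["free_capacity_gb"]

def schedule(backends, request,
             filters=(capacity_filter, capability_filter),
             weigher=free_space_weigher):
    """Return the back ends that pass all filters, best-weighted first."""
    survivors = [b for b in backends if all(f(b, request) for f in filters)]
    return sorted(survivors, key=weigher, reverse=True)

# Example: pick a home for a 100 GB volume that needs thin provisioning.
backends = [
    {"name": "ssd-pool", "free_capacity_gb": 250,
     "capabilities": {"thin_provisioning"}},
    {"name": "sata-pool", "free_capacity_gb": 900, "capabilities": set()},
]
ranked = schedule(backends, {"size_gb": 100,
                             "required_capabilities": {"thin_provisioning"}})
print([b["name"] for b in ranked])  # -> ['ssd-pool']
```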
And hopefully you can optimize your system and get it working in a much better fashion. Basically it gives you more control over what's going on. And this just gives you an idea of how you can utilize these things: you start with a set of storage servers and decide which are the appropriate ones to go and supply those services. The multi-publisher for Ceilometer, I actually got that one right, so there. The idea here is you have a set of data that's going into the collectors, and you want to transform that data: one person might want to look at raw CPU counts, but another person might want to only look at CPU percentages. Okay, so you might want to transform that data into different forms for different consumers. And then somehow you have to get that data to different users of the data. There might be somebody that wants the data for performance monitoring purposes, somebody else wants it for billing purposes, somebody else wants to use it for historical trend analysis and whatnot. So you have to get that data to multiple consumers. And so we put some code into Ceilometer to enable just that: you can take basically one set of data, transform it in multiple different ways, and then send it out to multiple different consumers. Yeah, and to talk about some of the future things that we want to look at: we at Intel are very intent upon enhancing OpenStack. We think it's a really great thing, so we have other things that we want to do to it going forward. Enhanced platform awareness, that's actually something that I'm kind of working on. The idea is we want to enhance OpenStack so it can be aware of unique features available on different nodes, so that you can utilize those features in the most efficient and most performant fashion possible. Kind of as a side note here, I should emphasize: I work for Intel, I think Intel makes the best product, so that's why we're going to win. Enhanced platform awareness is not an attempt to say, oh, this is only Intel, so everybody else forget about it. We're enhancing the platform awareness across the entire ecosystem. If AMD can take advantage of this and utilize it, that's fine; we don't mind that much, because our stuff is going to be better, so who cares? So I just want to emphasize that a little bit. Another area that we're going into, again in the future, is key management. The idea here is we're very concerned about security. The whole thing about the trusted platform, trusted compute pools, was a security issue. Key management, again, is part of that whole security picture, and now we want to start looking into what we can do to secure the data that you're utilizing. Again, we have a slightly biased view: we've got some encryption instructions buried in the CPU, so we think it might be a good idea to utilize those. Again, we think we have the better solution, but we want to enhance key management in general so that everybody can take advantage of it. Another thing, to go along with the storage stuff we've been talking about, is erasure coding. Currently, Swift does a triple duplication of your entire dataset, which is fairly expensive, shall we say, and you wind up with a fairly sizable data explosion. Erasure coding is effectively the ability to store only certain portions of the data and duplicate only certain portions of it, without duplicating the entire thing in three different places, and still give you reliable capabilities, so that if one of your storage servers goes away and loses your disk, you can still recover the data without necessarily having to go through a full triple replication.
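As a toy illustration of why erasure coding is cheaper than triple replication (this is not Swift's implementation; real systems use stronger codes such as Reed-Solomon that survive multiple failures), the sketch below splits an object into k chunks plus a single XOR parity chunk, so any one lost chunk can be rebuilt at roughly (k+1)/k storage overhead instead of 3x.

```python
# Toy single-parity erasure coding: k data chunks plus one XOR parity chunk.
from functools import reduce

def xor_chunks(chunks):
    # Byte-wise XOR of equally sized chunks.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

def encode(data: bytes, k: int = 4):
    """Split data into k equal chunks (padded) and append one XOR parity chunk."""
    size = -(-len(data) // k)                      # ceiling division
    padded = data.ljust(k * size, b"\0")
    chunks = [padded[i * size:(i + 1) * size] for i in range(k)]
    return chunks + [xor_chunks(chunks)]           # k data chunks + 1 parity

def recover(chunks):
    """Rebuild the single missing chunk (marked None) by XOR-ing the others."""
    return xor_chunks([c for c in chunks if c is not None])

stored = encode(b"object data that we do not want to store three times over")
stored[2] = None                                   # simulate losing one disk
print(recover(stored))                             # the lost chunk comes back
```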
So expect to see more work coming out from Intel in that area. And that is kind of where we are now; I've given you a flavor for where we're going in the future. Things could change, but that's kind of the flavor of where we're going. And Das will now try and hopefully tell you exactly how we're doing things.

Thanks, Don. So I know there are some new faces. Last summit, I shared quite a bit of detail on what we do with cloud, why we do cloud, and a lot of the specifics on what we're doing with OpenStack. This is intended to be an update from that. But just a couple of things. This is our platform solution stack. If you look at the bottom, there are a few things to look for. Green means it's something that we have in production today. Yellow means it's coming, probably in the next two months or so. And blue is a little bit later, probably with Quantum; we're going to pull that in fairly quickly, since we really need it. Then on the far right is the release cadence. And this is something at least some of my enterprise IT peers used to be kind of concerned with, the fact that OpenStack comes out with a new major release every six months. We actually think that's a really, really good thing, because we need the pace to be very, very quick. I recall six months ago there was just a thought that maybe Cinder could have multiple storage back ends, and now it's a reality, and the reason it's a reality is that very rapid cadence. There are some in the environment that say six months is even too slow: if you look at some of the public cloud service providers that put out 150 features in one year, something like OpenStack, as it builds its community, needs to move faster and faster. But this is our overall stack. Our compute, storage, and network are obviously the physical layer that we have; we usually refresh this, meaning we get some new generation of equipment, every 12 to 18 months. On the OpenStack side, you can see what we're using. Pretty straightforward: we have compute, we have OS images. We actually don't expose Horizon right now; most of our exposure is through API and CLI, and I'll show another diagram later. And then we built our own manageability stack. This isn't core to OpenStack, and we don't think OpenStack is everything you need to run an open cloud environment; it probably doesn't make sense for it to be everything. It's good to have other open source solutions in the industry. But to show you some examples, we use a simple concept: we watch everything, we make decisions, then we have an actor, and then a collector. So we keep it very straightforward. We're watching the environment from the very core infrastructure, up through the virtual machines, into the application layer, and this is sending events to a bus. We have a decider, which is basically a fairly young child with a very basic understanding of logic, but that's good, that's the start. This is internal code right now; this is something we would like to see become open source in the industry in the near future. And then we have an actor, which is basically Puppet; this is the thing that takes the action in the environment and makes sure that the configuration state is the same. And then, recently, a collector: in most environments, once you start having to be audited, you clearly need to see everything that's going on and have a historical trail of it. We're using Mongo and HDFS.
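To give a feel for how basic that decider logic is, here is a minimal sketch of the watch/decide/act loop, using a hypothetical CPU-based scale-out rule as the example. The event shape, threshold, and action are made-up stand-ins; in our environment the watching is done by tools like Nagios and the acting by Puppet.

```python
# Minimal sketch of a "decider": watch events, decide, then ask an actor to act.
from dataclasses import dataclass

@dataclass
class Event:
    collection: str      # the group of VMs behind one load balancer
    host: str
    cpu_percent: float

class Decider:
    """Very basic logic: if every node in a collection is hot, scale out."""

    CPU_THRESHOLD = 80.0

    def __init__(self, members, actor):
        self.members = members            # collection -> set of hosts in it
        self.actor = actor                # callable that actually adds a node
        self.hot = {c: set() for c in members}

    def observe(self, event):
        hot = self.hot[event.collection]
        if event.cpu_percent >= self.CPU_THRESHOLD:
            hot.add(event.host)
        else:
            hot.discard(event.host)
        # Act only when the whole collection is over threshold.
        if hot == self.members[event.collection]:
            self.actor(event.collection)
            hot.clear()

def add_node(collection):
    print(f"actor: provisioning one more node for {collection}")

decider = Decider({"web-frontend": {"vm-1", "vm-2"}}, add_node)
decider.observe(Event("web-frontend", "vm-1", 91.0))   # not everyone hot yet
decider.observe(Event("web-frontend", "vm-2", 88.0))   # now scale out
```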
So again, at the very top, our consumers are generally software developers. When we did our first enterprise private cloud, which has been running since about 2010, our focus was initially on application owners. In an enterprise IT shop, you usually have a lot of people that buy off-the-shelf software and run it on infrastructure, so what they wanted was a GUI. But when we made the decision, a year and a half or two years ago, to do OpenStack, we shifted our focus really to software developers, and what they need is an API and a CLI. So usually the GUI for us is always trailing, which is why we haven't even exposed Horizon to anybody but our friends and family. But this is the overall platform solution stack. And probably a good point to add on here too is that we also run Platform as a Service: on top of OpenStack today we're running Cloud Foundry and Iron Foundry in production pilot, which are also two open source capabilities. So this is just our current infrastructure and app status. We've been running in production, I think, since August of last year. We run in two data centers, and on the left you see we have the software development type people and, actually, our utility abstraction layer that we built internally, which has an API and a CLI. And the reason for that is we have to work against not just OpenStack; we utilize public cloud service providers as well, and as we all know, APIs are not the same, so you usually need some form of abstraction layer. We looked at quite a few of them; we felt that the industry was going to need to create one fairly soon, and we took the technical debt on ourselves to build an API and CLI that allow us to basically take the normal approach you would to an infrastructure: I want to provision it, I want to change it, and I want to destroy it. We interact with load balancers, whether it's in the public cloud environment or whether it's hardware load balancers or software load balancers in our internal environment. We give a framework, through the actor, to do code packaging and releases. We deploy platforms into our environment that are fully featured, and then the application owners have the opportunity to use things like the public framework to push their own code. In our internal environment, we're running OpenStack Essex today. We were about to go to Folsom, but after this week our team is highly interested in bypassing Folsom completely and jumping straight to Grizzly, so we'll probably make a decision within this next week. For us, with all the advances that happened in Grizzly, it's probably the right choice to jump forward, but we'll report back at the Havana summit on what we did. And then, just to point out on the bottom right, our consumers are actual end users that need to hit the software, not hit OpenStack, but hit the software that's running on OpenStack. So these are people that work at Intel, generally, that want to get to their apps and data. In some scenarios, they're hitting a global load balancer, which is basically intelligent DNS, which routes them to their closest proximity.
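A rough sketch of what that intelligent-DNS front door amounts to, under the assumption of a simple health-check endpoint on each deployment (the URLs and the /healthz path are hypothetical): the consumer gets the nearest deployment that is still alive, so losing a whole data center should be invisible to them.

```python
# Conceptual sketch of a global-load-balancer decision: pick the closest
# healthy deployment. Endpoints and health-check path are placeholders.
import requests

# Deployments ordered by proximity for a given consumer (closest first).
ENDPOINTS = [
    "https://app.west.example.com",    # private cloud, data center A
    "https://app.east.example.com",    # private cloud, data center B
    "https://app.public.example.com",  # third-party public cloud
]

def pick_endpoint(endpoints=ENDPOINTS):
    """Return the closest deployment that answers its health check."""
    for url in endpoints:
        try:
            if requests.get(url + "/healthz", timeout=2).ok:
                return url
        except requests.RequestException:
            continue  # region unreachable -- treat it as failed and move on
    raise RuntimeError("no healthy deployment found")

print(pick_endpoint())
```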
So if they're on the west coast, they end up in one of our two data centers; if they're on the east coast, for instance, they end up at a third-party cloud. But the global load balancer takes care of that. And it also helped us start to push forward the concept of active-active to our software developers, so that in reality we should be able to just shut off a data center completely, and the consumers going through intelligent DNS are able to stay running. One other thing I just want to point out, and the details aren't really that important, is the point from earlier that not everything has to come out of OpenStack; I think it's a misconception that people believe everything you need to run an entire cloud environment is in OpenStack. I think more and more can come out of OpenStack, but this slide is just: what's core from OpenStack, what's outside of OpenStack that's open source, what's outside of OpenStack that's homegrown today, and what's outside of OpenStack that's proprietary? So for instance, right now our approach was to go open source for almost everything. The application performance management tool we use is not open source, but that's okay; the focus as we move forward is basically that everything stays open source. And as you notice, there's quite a bit that's outside of OpenStack that is open source. For instance, our web fabric and database fabric are based off of our platform-as-a-service solutions, Cloud Foundry and Iron Foundry, and we don't believe OpenStack has to take care of that. So basically we have a layered environment, a combination of OpenStack and more, that allows us to run our environment. So, how many of you are familiar with writing cloud-aware applications? How many of you are software developers? Okay. So fundamentally there's a key difference between how applications in an enterprise IT shop were being built compared to how they should be built. We call them legacy applications, but unfortunately legacy applications are still being built this week, and people are buying legacy software. But fundamentally, what we're trying to push forward is very similar to what we do in grid computing. It's very similar to what you'd see from many of the shining stars that use public cloud, in how they approach things like design for failure. If you go and look in any depth at, say, how a Facebook or a Google operates their infrastructure in relation to their software, it really should start clicking in a software developer's brain how to actually write differently. So one thing that we do internally now is code-a-thons, which are basically a one-day session: we pull a bunch of developers into a room and walk them through how to start building cloud-aware applications. We give them access to platform as a service, give them access to infrastructure as a service, and walk them through some design patterns. One of them I just wanted to share, which we're pushing heavily, is how to actually build to an active-active model. The subject here is: deploy to N plus one clouds, so that as you add on more environments, more regions, more zones, the SLA for your service should be able to go higher and higher, which is very similar to what you see with a lot of consumer apps. I know I saw Gmail was down this morning for some people. Did anybody get hit by that? Nobody got hit by Google Docs being down? Okay, those are rare scenarios. They don't happen very often.
I think the last time I saw Google have an issue was because somebody messed with DNS in some country, and it went up the chain and Google then had a DNS problem, but it was localized in that area. Fundamentally, though, it's how they approach building software. If you go inside of an enterprise, you see all the software is built to assume that the server is always going to be on, and I think, as we all know today, that's not necessarily the case if you're dealing with ephemeral storage or if you're dealing with a situation where you don't have resilient block storage. People often don't write with a concept of eventual consistency. Eventual consistency is, you know, the opposite of the model where a lot of people spend a lot of money on replicating data so it's constantly in sync. We're saying, you know, eventually it'll catch up, and you write your code in such a way that transactions are idempotent and they aren't causing issues by hitting multiple data centers. Today, we generally help with the database replication for our teams. As we do more and more platform as a service, they can just get eventual consistency out of the box and focus less on understanding the deep basis of how those things work. But I think we're far from that. Is anybody in this room familiar with things like Paxos? We have a few nodding heads. Any other hands raised? Yeah, so Paxos. This is like the basics of where we want people to go, but if you look at complex transactions, you have to enable concepts like Paxos, which ensure that your transaction is really, truly consistent before it returns to the end user. This has been around for quite some time; I know Facebook has shared a little bit that they use it. But fundamentally, if you're really going to build applications that are global scale, that are truly resilient, you have to think differently. Utilize the cloud to your advantage to create and destroy things on demand, but really start building with a completely different approach. So this is what we shared six months ago; I think I also shared, or I might have left it off, how we compare to public cloud environments. So when we look at the various functions or capabilities that we need, on the left, we basically say, hey, where are we at with our Intel IT Open Cloud, our private environment, and where do we need to be? There were quite a few things in red six months ago. Things are better; it doesn't mean everything's in production yet, but we're slowly, well, actually fairly quickly, making progress. And what we'd like to encourage from others that are either in an IT shop or are committed to helping move this stuff forward is to look at these areas in red and say, what can we do to accelerate them? So, fundamentally, we start out with the basics: give me compute infrastructure as a service. This is through Nova; no change there, except probably moving from Essex to Folsom or Grizzly. Object storage was a tough one, because actually a lot of software developers are not familiar with object storage. I don't know how many of you have read how the NPR apps guys are using object storage today. Has anybody seen that article? It's amazing. They have one server that does cron jobs, and all of NPR's applications are just object storage; there's no web presentation layer involved anymore. So the point is, innovative things are happening as people consume these services at greater and greater scale.
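In that spirit, here is a small sketch of serving content straight out of object storage over Swift's REST API: create a world-readable container and drop a rendered page into it, with no web tier in front. The storage URL and token are placeholders you would normally get from Keystone; ".r:*" is Swift's public-read container ACL.

```python
# Sketch: host a pre-rendered page directly from a Swift container.
import requests

STORAGE_URL = "https://swift.example.com/v1/AUTH_myproject"   # placeholder
TOKEN = {"X-Auth-Token": "<auth token from Keystone>"}         # placeholder

# 1. Create a container anyone can read (".r:*" is the public-read ACL).
requests.put(f"{STORAGE_URL}/www",
             headers={**TOKEN, "X-Container-Read": ".r:*"}).raise_for_status()

# 2. Upload a pre-rendered page as an object.
requests.put(f"{STORAGE_URL}/www/index.html",
             headers={**TOKEN, "Content-Type": "text/html"},
             data="<h1>hello from object storage</h1>").raise_for_status()

# 3. Consumers now fetch the page directly from the object store -- no web
#    presentation tier in front of it.
print(requests.get(f"{STORAGE_URL}/www/index.html").text)
```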
So the focus was to get this into the environment as quickly as possible. We have Ceph in our labs today; I think that's one constant I've heard in almost every talk I've been to about infrastructure, somebody's trying Ceph. It's working well in our labs today, and we intend to get it into production in the next month or so, hopefully with no issues. We're going to rely on it for two functions: as the back end behind the Swift API as well as behind the Cinder API. On the list there are a couple of other areas that are still an issue. Auto scaling is a pretty major problem for us. We have a very basic version working right now, where Nagios can tell the decider that there's a situation where CPU is being overutilized, and then tell Puppet to scale out. We're pretty interested in the Heat work, though it's not at the point that we can use it for our hybrid environments, and we would like to see it progress. In the meantime we're investing in our own solution internally, which we hope to be able to share by the next summit. Then, missing APIs: when I say missing APIs, this fundamentally means that everything must end up being exposed as APIs. We're actually pretty far away, and when I say we, I mean the OpenStack community; we're still pretty far away from the level we need to be at in terms of our capabilities being exposed as APIs. It's how software developers expect to operate, and I won't name names, but there are public cloud service providers that are really pushing the envelope on what's possible in regards to the feature set. We do believe the OpenStack community is catching up, which is why we're utilizing it, and it has a lot of opportunity to move very quickly. Hadoop has APIs, but if you really want to do some analytics, it's not just about exposing Hadoop: it's do you have a place to put your storage, how are you going to launch your compute, what type of orchestration do you have. And I think those of you heavily involved in OpenStack realize there's really not even an orchestration mechanism in there, so a lot of us had to look at other tools. So it's really: how do you build an entire open cloud that enables many of these capabilities at scale and that's competitive with what's going on in the public cloud space? So, just back a little bit to the app architectural guidance, this is just some of the rules we have. I also work in this thing called the Open Data Center Alliance, which is basically helping enterprise IT shops move faster to cloud. One of our peers in that group is Disney; they've been a pretty strong pusher in regards to cloud-based technologies in the private cloud space as well as some of the hybrid, and they actually have a white paper out there: if you're not building cloud-aware applications today, they've published best practices on how to approach building cloud-aware applications. This is just a high-level view that we share, but some of these concepts should be getting ingrained in everybody's head now. Like design for failure: assume and actively test for failure. All data centers, at some point in time, regardless of how many levels of resiliency you put in, everything fundamentally breaks. We had a joke internally: we had a great data center, and a squirrel took it out. But these things happen, so you should always assume failure and build your application in such a way. Then stateless compute, and scaling out instead of scaling up: a lot of enterprise applications only understand scale-up, so you hit a ceiling, so we're driving a real focus on scale-out, which also requires that the application understands much more automation, and
also requires that you expose as many APIs as possible in your infrastructure so you can do tricks like that. We talked about eventual consistency, and then the last thing for us is the DevOps, NoOps model. We kind of stopped using NoOps internally because it scared folks, but fundamentally it's getting a DevOps model in place so you can, at minimum, get developers and operations together in the same room, or better yet give developers the ability to actually own their code. We find if they get a page at 2 a.m. about their code, they never get that page again. I also want to just give some top priority items for us. So we do run an enterprise IT shop, and running an enterprise IT shop means lots of requirements around getting things pretty resilient. I'm not telling everybody that runs an app inside of Intel that they have to switch to cloud-aware; that would be ridiculous, it wouldn't happen fast enough. I'm sure if any of you work in the enterprise space, you probably have some code that was written 30 years ago and nobody's touched it and it stays running, so there are a lot of models where we just have to keep the legacy up and running. So we've made a conscious decision that we weren't going to just force everybody to cloud-aware and pretend that the legacy environment doesn't exist. We fully embrace that this environment will exist for a long time, and we have to figure out how to make OpenStack work in this environment. So we do have a list, which I shared probably a year and a half ago, that said here are all the items in OpenStack that we have to solve, but these are some of the key ones. So, fundamentally, block storage: it sounds straightforward, but being able to enable something like live migration allows us to do maintenance in our infrastructure and keep that VM that always has to stay up online and functioning. It gives us persistent data for the situation where people are not able to change their application code to deal with a persistent back end and stateless compute. So we're trying to be very pragmatic. I haven't seen this with everybody in the OpenStack community, but I do believe that if OpenStack's really going to push forward into the enterprise space, some of these underlying assumptions about infrastructure and its relationship to legacy applications need to be handled and implemented. Second, federated identity across OpenStack environments and then beyond. So again, we do believe a hybrid cloud is going to be the norm; we don't get everything we need from our private cloud environment. We run 68 data centers across the globe, our Linux grid is approximately 60,000 servers, so we have fairly large scale inside of our environment, but there are locations where I don't have a data center. So for instance, if I need to be online in South America in a month, there's no way I'm going to build a data center in a month, so we utilize public cloud as appropriate to consume capacity, as well as, in some situations, features that we don't have in our private environment. Third is a switch away from Nova Network. Right now we use VLANs and Nova Network and security groups to deal with our infrastructure, especially for our web apps. We're pretty excited about Quantum and the shift to SDN; this will give us a lot of flexibility. One of our big delays today is just getting our network operations team to expose a VLAN, so it's a human element, and we take a Pareto approach and basically look at where a delay in human interactions is causing a delay for our software developers, and networking is a huge element. And once we
get Quantum online with the OVS and SDN plugin, it will be a game changer for us in how we expose networks. Not even getting into the very low-level SDN stuff, just the high-level logical elements are going to be very important. Load balancing: I didn't get a chance to go to the load-balancing-as-a-service talk here this time; I'm hoping we've made huge strides. But fundamentally, there's nothing that operates in an environment that doesn't use a load balancer. Applications that are resilient must have a load balancer, whether hardware or software; that's how it works. Most enterprise shops probably have a lot of hardware load balancers in their environment, and they need to be able to get an API in front of them so they can enable the full app life cycle: create VIPs, destroy VIPs. So this is a major focus for us, because it's also a bottleneck in regards to getting these cloud-aware applications online and completely self-resilient. Implement Grizzly faster: another big item for us is rolling upgrades. For any of you that have done an OpenStack upgrade, since you don't have persistent block storage and you may have applications that are not cloud-aware, they're generally going to experience some form of downtime. You want to move to a model where there's zero downtime, zero business impact. There are lots of different ways to approach this, whether it's things like anti-affinity in your scheduler so that your applications always end up on different nodes, or more intelligence in your orchestration so that as you do rolling upgrades you make sure that not every node behind that load balancer is affected. But fundamentally, how do we really enable true rolling upgrades so that the customer, the software developer, and, more importantly, the end user hitting that software never experience a service outage? And then the sixth point is what Don talked about with trusted compute pools. We run a lot of high-security applications, like many enterprise IT shops do. If you look at some of the reasons why people don't use public cloud massively today in the enterprise space, or where you're regulated or audited, it's because of security, sometimes performance, and total cost of ownership. So for us, even inside of our private cloud, we have some workloads that we don't put on a hypervisor, and we would actually benefit in a lot of situations from running them on a hypervisor, so trusted compute pools gives us the ability to ensure that our sensitive workloads are landing in the right environment. And Don didn't talk about this other feature in it, but things like geotagging become very interesting for us too, because we operate in some countries where that data can't leave. So when you operate in those environments, you have to make sure that you do some sort of geotagging, or you have a lot of manual processes involved to make sure your data always stays in that one location. So these are the top priority items for us; there are more, I'm sure, but I think that's all I had. So, Don and I will open it up for questions, if there are any from the room. On using the cells capability: as far as I know, right now cells isn't really ready for what we need. I'll ask my guys later. Hey, Winston, did you look at cells while you're here this week? Is it ready?
It's getting a lot closer, I understand. Yeah, it's getting closer. Rackspace is actually using it in production right now, so that doesn't mean it's ready for you, but... That's right, we're very curious about cells moving forward. When we were all here six months ago, I think even MercadoLibre showed how they have to do a lot of advanced scheduling across their clusters, and they would love cells; they had to build their own code to handle it, so the faster it can be done, the better. Question? Open source platform as a service? Yeah, so on top of OpenStack today we run Cloud Foundry and Iron Foundry, which are two open source solutions. We have a lot of .NET developers, so it focuses in on that combination, but of course we also have a lot of data and other needs. That's happening; right now we have very basic interaction, but when we release our new internal abstraction layer with orchestration, it'll configure global load balancers as well. Right now somebody actually has to hit it directly to remove a data center. Another question: do you foresee TXT transforming or expanding? I'm not totally sure that would build on this; that's not really exactly where it goes. I mean, I can imagine that those things might want to access the TPM for some reason, but I really don't see where TXT would necessarily fit into that. Well, we'd need to go down and talk about provisioning services on the client, and the storage of the systems as the ultimate controller of the world; that's what I'm talking about. You know, maybe. TXT is kind of specialized; it's really geared towards the boot environment, not necessarily that. I don't have a good answer, I'm afraid. Any other last questions? First one up front. Yeah, you, go ahead. So, what are your items on there? So, I'm not a super expert on authorization, but I'm pretty sure OAuth and OpenID have been investigated as options. Winston, do you recall if we're leaning in any direction on authorization yet for hybrid? And in case everybody couldn't catch that, what we're doing is multiple Keystones backed with OpenDJ to ensure consistency, so that at least across the OpenStack environments we have connected, we have consistent identities. Our intent, though, is that infrastructure as a service and platform as a service should all work in a hybrid fashion with federated identity. I don't think anybody's cracked this, and we also deal with the software-as-a-service space, where we consume services that are completely outside of this ecosystem and where we also have to have federated identity. So I think there are lots of options out there; we just haven't pinpointed the best one yet. Pardon me? OpenDJ is like a back-end LDAP; it's an open source LDAP solution. There was another question in the back. On auto scaling, I had two questions, basically. One was about a billing stack as a driver for cost-based auto scaling. I worked on an app that we did in Amazon, and we left auto scaling up to the application itself, so it knew: if my load is high and I'm doing this function, I need more servers performing that function, and it could spin them up itself. Yeah, so our actual approach is that applications should define it themselves, like in Amazon, where you have an auto scaling group and you set a policy that says monitor for this type of scenario and then take this action. That's fundamentally what we do today with Nagios: we look for a threshold to be exceeded by CPU, we tell our decider, hey, this threshold has been hit, and it checks to see if everything in that collection has hit it; if so, add nodes. So I think most auto scaling concepts work in that fashion, so yeah, we expect many
people to set the policy and to be able to have their application work in such a way that it can auto scale up. But the first question was the billing stack. Yeah, so one of the things that we found in Amazon is that, using their model, you get a cost overrun that then has to be justified by the ROI, so we need to have a model where we can say this app is only going to generate this much return, and how do we cap that? So, your cost versus your return, and then, yeah, yeah. Unfortunately not everything we do is associated with money; sometimes it's just altruistic. But yeah, that's a good model. In the grid environment, we look at job slot utilization, correlate that to the chip projects, and use wall-of-shame, wall-of-fame methods to help drive projects to right-size, but we can only do that where you have that kind of specific utilization information. Thank you, folks.