All right, welcome. Hi everyone. Thanks for sticking around late. My name is Greg Bantz. I'm with Intel IT, in the engineering group within the hosting portfolio, and my responsibility is automation and enterprise integration. And I've got with me Shridhar.

Hi, I'm Shridhar Mahankali. I'm also part of Intel IT Engineering, and within that I focus mostly on the cloud infrastructure architecture. I grew up on the networking side of the house, so there's a lot of networking background, but right now I look after the infrastructure architecture.

All right, standard legal disclaimer and on to the agenda. So tonight we're going to talk to you about Intel IT's cloud transformation and journey: why we selected OpenStack for our control plane strategy, our control plane status and plans, our automation framework, workforce transformation, and a call to action. We'll have a summary and wrap-up at the end, and since we're the last presentation, we'll probably stick around and talk among ourselves a little bit longer if folks desire. Is my mic overdriven? Okay.

So our IT cloud transformation started back in the 2009 timeframe, when we made the shift to virtualization. We brought in a virtualization platform, and it started to transform our hosting business from one of 90-plus days to land an application, very siloed in terms of capacity, extremely manual in terms of how you interacted with IT as a customer, and variable in terms of reliability. In 2010, at the end of that year, we turned on the first implementation of our private cloud. We drove through a P2V process to pick up workloads running on physical platforms and move them to virtual, and we went from roughly ten-day virtualization throughput down to less than an hour for folks to come through self-service and request their virtual resources. In 2014, we're driving toward a hybrid, converged cloud strategy. As our enterprise sits, 75-plus percent of it is virtualized. We offer on-demand compute, network, and storage, and we've got full self-service fulfillment with four-nines availability.

In the last two years, we've actually done quite a lot in terms of cloud enablement, so it hasn't all been an IaaS virtualization story. We also have some SaaS success stories: some limited applications starting to come into IT and out to our customers in a SaaS model. These are hugely disruptive, but in a great way; it transforms the face of IT. Also, in the platform-as-a-service space, we turned on Cloud Foundry as our PaaS offering last year, and we're driving our environment such that our end users, our developers, can conceive of an idea and have it deployed and running in production in less than a day. And then in the IaaS space, we have taken what was our proprietary, internally developed automation control framework and are shifting toward OpenStack for a control plane. That's the premise of this discussion here today, so we'll go deep.

Cloud 1.0 for us was all about velocity, reduced incidents, and sustained operations: getting to less than an hour for VMs, really minimizing our incidents and downtime, and just running IT. But now in 2014, it's a very different landscape. We are doing idea-to-production in less than a day, with zero downtime; our customers should never experience an outage. And we're doing this in the face of a flat budget and declining headcount. The point of this foil is really just to illustrate the current environment. So we're still pushing across the board.
But on the left-hand side you can see that, yes, we've got this proprietary automation that does fulfill self-service for some of our environments, but other environments are still stranded, quite siloed, and highly manual. With our OpenStack control plane, we will be able to extend similar functionality across our entire environment, transform the way we do our network architecture and security architecture, and really pass all of this goodness on to our end users.

When we went through the process of selecting OpenStack and revalidating our control plane strategy with our CIO, we looked at it in terms of what Intel calls QVEC: quality, velocity, efficiency, and capability, as our four main vectors. From a velocity standpoint, what OpenStack gives us is a very forward-leaning means of service development, delivery, and operation. It is also natively geared toward agile methodologies, DevOps, continuous integration, continuous delivery, and continuous deployment. These are exactly where we're trying to take IT, because they align with what our end users and customers are doing. So instead of feeling as though CI/CD and agile are things our vendors are doing, or things our customers are doing, that are being imposed on IT, we're trying to drive it from the infrastructure up. Capability-wise, OpenStack is really the perfect automation framework. It's defined by its APIs. This allows us not only to stitch it into our environment in whatever way we have to, but gives us an extremely rapid way of delivering line-of-business requirements. And then, from an efficiency and quality standpoint, it's back to the DevOps tool chain: we leverage the same tool chain used by the OpenStack community and by our end users and customers to deliver IT infrastructure.

So what this represents to our customers is really an up-leveling of capability. Once we've fully rolled out this control plane strategy, they are able to do things across the environment regardless of what hypervisor they're sitting on. It's broken into phases. Phase one is putting the control plane over our existing infrastructure, giving the customer start, stop, and delete, snapshotting capabilities, self-service volume create and attach, and resizing functionality. These are all things that, in the legacy environment, would require a manual ticket today: I need more vCPUs, I need more RAM, and that takes a couple of days to percolate through the system. Phase two, which comes a little bit later in the year, is importing all of the metadata sitting in our existing cloud environment into OpenStack, so that customers whose VMs were built by hand on that virtualization platform, or built by our previous generation of automation, get it all natively through Horizon. We're driving toward Horizon as the front end for all of our operations. I'll turn it over to Shridhar.
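To make the phase-one capabilities concrete, here is a minimal sketch of what those self-service operations look like against the standard OpenStack APIs, written with the community openstacksdk. The cloud entry, server name, volume size, and flavor name are illustrative placeholders, not Intel's actual configuration.

```python
# Minimal sketch of the phase-one self-service operations (start/stop,
# snapshot, volume create/attach, resize) through the standard OpenStack
# APIs. All names below are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="intel-private")  # entry in clouds.yaml (assumed)

server = conn.compute.find_server("app-web-01")  # hypothetical VM name

# Start/stop: what used to be a manual ticket becomes one API call.
conn.compute.stop_server(server)
conn.compute.wait_for_server(server, status="SHUTOFF")
conn.compute.start_server(server)

# Snapshot the instance to a Glance image.
conn.compute.create_server_image(server, name="app-web-01-snap")

# Self-service volume create and attach (Cinder + Nova).
volume = conn.create_volume(size=50, name="app-web-01-data")
conn.attach_volume(server, volume)

# Resize: "I need more vCPUs, I need more RAM" without a multi-day ticket.
bigger = conn.compute.find_flavor("m1.large")
conn.compute.resize_server(server, bigger)
conn.compute.wait_for_server(server, status="VERIFY_RESIZE")
conn.compute.confirm_server_resize(server)
```

The point of the sketch is that every operation listed above, each a multi-day ticket in the legacy environment, collapses to a single authenticated API call once the control plane is in place.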
Hello. So what I'll do over the next few slides is dive into more detail on what our infrastructure architecture looks like. Mostly I'll focus on the infrastructure-as-a-service aspects of it, but I can touch on some of the platform-as-a-service capabilities we're going after as well. If you look at our virtual hosting environment and where it was around 2011: our journey started in 2009, and by toward the end of 2012 we had around 75% of our office and enterprise environment virtualized.

Right now we are running close to 17,000 VMs on top of this infrastructure. Where we were in 2011, if you were to categorize our environment, you could categorize it along basically three primary vectors. There's an internal-facing environment, of which a significant portion is non-enclaved and a portion is enclaved, and then there is an internet-facing element. As you go through that, what you'll see is an increasing amount of segmentation and security as you go from the left to the right. So I'll start to walk through some of the details from the network up.

If you look at our internal environment, the non-enclaved piece of the environment: you have a physical network, you're typically leveraging network services like load balancers, the most commonly used network service in that environment, and then you have large shared networks that all the application tenants share, primarily because there is not as much need for enclaving in this space, so you use shared networks to make things easier from that perspective. From a compute perspective, we're using a proprietary hypervisor, we run the proprietary virtual switch on top of it, and the only network service we're virtualizing there is the switching capability. Backing all of this, the VMs landed here are using proprietary scale-up storage underneath. And then what we have on top of it... yeah, go ahead.

Basically, all enclave means is that it's a segmented portion of the infrastructure, separated at the core level by a firewall, so that there is more security for the applications being hosted there. Intel has four different classifications of data: there's public data, there's Intel confidential, there's restricted secret, and secret. As you go toward dealing with a higher level of confidentiality, you have to apply more and more security controls. That's what you're seeing as you go down this scale. And even within the enclave there is actually additional segmentation; I'm not showing it here for simplicity's sake. There is an element designed to host Intel confidential and lower data, and then anything that is restricted secret has an even higher level of security controls applied, and that is actually physically segmented today. So that's one of the things you're seeing right here: more and more granular network segmentation as you go into that space, because you have to segment one application from the other, the theory being that if one application gets compromised, you want to lock it down in a container to prevent that compromise from extending to the other environments.

In the non-enclaved environment, as I indicated, there is some custom automation we've written that allows you to provision VMs in a self-service manner. But what we were doing there was primarily the provisioning aspect; once a VM is provisioned, there is less control that folks have over it in a self-service manner.
And then as you go to the enclaved space, because of a lot of the segmentation, and having to make a decision about which network to land a VM on, we were essentially doing manual provisioning there, relying on a cloud administrator or one of our hosting administrators to provision VMs in that space. So this is one of the things we are trying to address: having a consistent interface as we go along our journey. That's where OpenStack comes into play.

So what I'll show you is the different flavors of what a single control plane means to us. One is from the perspective of having a consistent interface across our environments, where previously some were behind a custom portal and some behind manual provisioning, so there were differences in capabilities across those environments. That's what OpenStack is giving us; we've been evolving our OpenStack-based cloud environment over the last two and a half to three years. Here's where you see we are using the open APIs to expose the underlying infrastructure in a way that not only users can consume from a GUI perspective, but developers can actually use those APIs to run higher-level automation constructs on top of, to deploy groups of VMs or do network self-service and storage self-service. Here again we're differentiating a non-enclave and an enclave environment, but one of the things we're doing is collapsing what you saw as the non-enclave environment into the same compute pool, and we're able to offer the same level of capability across it. So what we are moving toward is just an internal-facing and an external-facing environment.

Again, a few things to highlight here as you move up from the physical network. On the enclave side we are changing our security architecture, which I'll touch on in the next couple of slides. In terms of key transitions in this space, we have been using open source technologies: you see KVM as the hypervisor and open source storage in this environment; essentially we use Ceph in that space. There is an image repository exposed by Glance. All of these capabilities are allowing us to do in a more automated manner things we were doing manually before, and OpenStack is the control plane that sits on top of all of it.

Where we are headed in 2014 is that we want to extend this OpenStack control plane not only across these new environments we've been working on and advancing over the last couple of years, but across all of our virtualization environment, all of the 17,000 VMs we have, by the end of the year. What that means is we are transitioning from an OpenStack-based environment purely using KVM as the hypervisor to a hybrid hypervisor environment: essentially, use the same control plane but be able to provision across multiple hypervisors. This goes along with our strategy where, for all of our infrastructure components in general, we look to dual-source the servers, the network, the hypervisors in this case, and the storage elements. It fits in that type of model, where you can have a consistent API but you're actually able to change infrastructure solutions in the back end while the customer's experience hasn't changed. That's really the key.
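One community mechanism that supports this kind of mixed-hypervisor pool is Nova's ImagePropertiesFilter: tag images with the hypervisor they require, and the scheduler places instances accordingly while the boot call itself stays identical. The sketch below, with hypothetical image, flavor, network, and cloud names, illustrates the general approach rather than Intel's specific implementation.

```python
# Sketch: steering workloads in a mixed-hypervisor Nova cloud. Assumes
# ImagePropertiesFilter is enabled in the scheduler's filter list in
# nova.conf; property names and values vary by release and virt driver.
import openstack

conn = openstack.connect(cloud="intel-private")

# Tag one image for the KVM pool and one for the other hypervisor's pool.
kvm_image = conn.image.find_image("rhel7-kvm")
conn.image.update_image(kvm_image, hypervisor_type="qemu")

other_image = conn.image.find_image("rhel7-other-hv")
conn.image.update_image(other_image, hypervisor_type="vmware")  # example value

# The boot call is identical either way; only the image choice steers
# which hypervisor pool the VM lands on. Same API, swappable back end.
server = conn.compute.create_server(
    name="demo-vm",
    image_id=kvm_image.id,
    flavor_id=conn.compute.find_flavor("m1.medium").id,
    networks=[{"uuid": conn.network.find_network("tenant-net").id}],
)
```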
So again, we are using the same API, as we talked about, and as I indicated, we are moving in a direction where we are able to support multiple hypervisors. We are able to support multiple storage solutions, both scale-up and scale-out storage. Another thing you see is the use of trusted compute pools, which is allowing us to collapse those internal enclave and non-enclave elements onto the same common resource pool. Where we were segmenting the compute resource pools previously, we were not getting the best use of the resources, whereas by combining them and using trusted compute pools, you are able to land your secure workloads on trusted systems, and that's what allows us to use a common resource pool there.

Actually, one more thing I want to highlight: we are also using Heat and Murano as higher-level APIs that allow us to provision collections of systems. We can create a policy saying, if you want to deploy a multi-tier application with web server, application, and database tiers, and in addition it needs a certain type of security policy implemented, you can define all of that in a Heat template and deploy it that way. So that's another way we are leveraging the open APIs.

The other thing we have done as we have evolved is change the security model, in terms of how we had implemented it before our journey with OpenStack and how we are going forward. What we mean by that is: previously, our perimeter, our access control, was primarily implemented on hardware-based, monolithic firewalls. We are moving to a more layered-perimeter, or distributed, security model, where the controls on the hardware firewalls are implemented more toward the edge of the data center. Then you have what we are calling a tenant and zone perimeter: essentially, you define security zones and then you have tenants within them. At each tenant's edge we apply controls on the hypervisor, and then intra-zone we use security groups to create segmentation within the tenant. What that allows us to do, especially when you look at internet-facing environments: we used to have separate, physically segmented pools of compute for, say, a web server layer, a database layer, and an app layer. All of those we are able to collapse and actually run on top of the same resource pool.

This gives you a view, a conceptual design, of what that looks like. What we are separating out is a shared infrastructure and hosting services layer, which provides services like, say, LDAP capabilities, identity, patch management, and things of that nature, DNS being another example of a common service that we provide in a shared manner in a hosting environment. And then you have the individual tenants. By virtue of creating an SDN layer on top of our environment, we are able to create richer network models where, through self-service, each tenant can get their own private network space and create their application within it, which allows us to segment them from the other applications. Previously we had to do this manually, and getting a customer's application live after they engaged could take several weeks by the time we provisioned the network, the VMs, and the security associated with them.
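A minimal sketch of that tenant self-service flow, using the standard Neutron APIs through openstacksdk: a private network and subnet, a per-tenant router uplinked to a shared external network, and a security group standing in for the tenant-edge and intra-zone controls. All names and the CIDR are hypothetical.

```python
# Sketch of tenant self-service networking: private network, per-tenant
# router, and a security group. Names and addressing are placeholders.
import openstack

conn = openstack.connect(cloud="intel-private")

# Private L2 network and subnet, owned by the tenant.
net = conn.network.create_network(name="acme-app-net")
subnet = conn.network.create_subnet(
    network_id=net.id, name="acme-app-subnet",
    ip_version=4, cidr="10.20.30.0/24",
)

# Per-tenant router with its gateway on the shared external network.
external = conn.network.find_network("ext-net")  # provider network (assumed)
router = conn.network.create_router(
    name="acme-router",
    external_gateway_info={"network_id": external.id},
)
conn.network.add_interface_to_router(router, subnet_id=subnet.id)

# Security group: only HTTPS in from anywhere; everything else dropped at
# the tenant edge. Intra-tenant zoning is more groups like this one.
sg = conn.network.create_security_group(name="acme-web")
conn.network.create_security_group_rule(
    security_group_id=sg.id, direction="ingress", ethertype="IPv4",
    protocol="tcp", port_range_min=443, port_range_max=443,
    remote_ip_prefix="0.0.0.0/0",
)
```

The same topology, plus the VMs on it, can also be captured declaratively in a Heat template, which is the Heat and Murano point above: define the multi-tier layout and its security policy once, then deploy it repeatedly.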
All of that is now cut down to where they can implement this in less than an hour, or less than a day if they have all their pieces together. So basically everything is done in a self-service manner from this perspective.

Another way in which we are looking at a single control plane is by using an orchestration layer, by using OpenStack, to provision not only to our private cloud environments but also to a public cloud environment. We have implemented hybrid clouds against some of the public cloud providers out there, but one of the key changes in strategy that we are going after at this point is to interact more with cloud providers that expose an OpenStack API. The benefit is that we don't have to write an additional abstraction layer, or leverage another solution for creating that abstraction, between a public and a private cloud. That basically allows us to optimize and leverage the automation we already have, but be able to provision to both public and private. Our first stage is to implement OpenStack across all of our private cloud environments, and then as we progress toward the latter half of this year and early next year, we're going to expand that to public clouds as well. What we are doing right now is proofs of concept with a couple of vendors to experiment and refine our hybrid cloud strategy. And with that, I'll hand over to Greg to walk through our automation framework.
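The appeal of OpenStack-compatible public clouds can be shown in a few lines: if the provider exposes the same APIs, the provisioning code doesn't change, only the cloud entry it points at. A hedged sketch, with both cloud names as hypothetical clouds.yaml entries and assumed image, flavor, and network names:

```python
# Sketch of "single control plane across private and public": the same
# provisioning function runs against either cloud.
import openstack

def boot(cloud_name: str, server_name: str) -> None:
    """Boot one small VM on whichever OpenStack cloud is named."""
    conn = openstack.connect(cloud=cloud_name)
    image = conn.image.find_image("rhel7")         # assumed image name
    flavor = conn.compute.find_flavor("m1.small")  # assumed flavor name
    net = conn.network.find_network("tenant-net")  # assumed network name
    conn.compute.create_server(
        name=server_name, image_id=image.id,
        flavor_id=flavor.id, networks=[{"uuid": net.id}],
    )

# Same code, two destinations; no provider-specific abstraction layer needed.
boot("intel-private", "burst-test-internal")
boot("openstack-public-provider", "burst-test-external")
```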
So for us, in general terms, cloud itself is an inflection point for how we do hosting, and OpenStack is the perfect moment for us to really double down on driving some cultural, workforce, and business transformations across the board. The tendency is to attach to fancy buzzwords: CI/CD, agile, training on tools. These all tend to cause people some target fixation. Really, for us to be successful in a broader cloud sense and in an OpenStack sense, we need to acknowledge and act upon some additional dimensions. Intel IT is an established IT shop; we have pretty mature processes for the way we've historically done things, and running a hybrid cloud strategy mandates that we transform or become irrelevant.

Team structure and composition: there's just the way we've historically put teams together in IT. You have your hypervisor expert, your storage expert, a network expert, your Windows and your Linux guys, and you throw them together and kind of expect to have a high-performing team. That's worked in the past, but in the cloud future, running cloud at scale, we really need to drive toward pluggable resources. We call them T-shaped resources: they're able to go very deep on some technology, but they're also very broad and quite pluggable, so we can use them around the infrastructure as needed.

Our software engineering processes: Intel IT has historically been a waterfall shop, and driving toward the agile end of the spectrum is hard. I hear all the time, oh, agile doesn't work, it's not going to work for us. But it has to, and it does work when done right; it's just painful. It's painful to get there, and you have to have some faith that you're going to get there and reap the benefits.

Workforce transformation: we've got a pretty mature workforce in terms of skills. They're able to do their jobs very well, but in this cloud model, where we're changing the way we build teams and changing the how of delivering cloud infrastructure and cloud applications, it forces us to transform the skills of our workforce. So bring them ahead: get away from process- and tool-centric skills and drive toward the software engineering discipline, the large-scale system administration end of the spectrum.

Support models: level one, level two, level three. These are established and quite mature, but they also introduce handoffs and some accumulated time as we try to work through problems. So we're driving to DevOps models where they make sense. They don't universally make sense, and we're still getting our heads around where we will use them, but it will be a spectrum of support models, from post-implementation hardware support, kind of the bog standard, all the way up to the highly agile, highly innovative end of the spectrum at the top, the DevOps end.

And for us to be able to do this, we need to measure ourselves a little differently. As an ITIL shop, we have KPIs, and those are how we measure ourselves; but in a DevOps world, as we're driving toward the software engineering end of the spectrum, we need to start bringing in metrics that are relevant to software: how many releases did we do over such-and-such a time? Also things that are more mundane, like how many servers per admin are we supporting? These are the additional metrics beyond the KPIs of the standard ITIL playbook. So the metrics scorecard is important; being able to measure yourself is crucial, and being able to fold in these new metrics and have them become normal is very important. And then, red is good. When you have a dashboard and there's some red on it, people kind of lose their minds and get really concerned. But for us, doing DevOps and delivering cloud infrastructure, red is good, because we know where the problems are and we know where to focus our energies.

Then, in our release and quality assurance realm: these are no longer separate teams of resources bringing a bucket-brigade methodology to QA and what gets released. In a test-driven development shop, everybody's doing QA, and they're doing QA first; it's just part of the software release process, through automation, through CI/CD. So, actually automating deployment: we've got a hugely complex environment, and we've been guilty in the past of building automation, deploying it by hand, and then expecting the environment to remain static, not drift, not break, not cause problems. So how do we continually improve the system, deploy it to a specification, and be able to pound out clouds as frequently as we have to, whether it's in a lab environment, a dev environment, or the production environment itself?

Continuous integration and delivery really is delivery of infrastructure. We are at the maturity level where we're starting to shift to continuous delivery. We're not sophisticated enough yet to think that continuous deployment is going to be a thing for us in the infrastructure space, but as we mature over time we'll get there, and the bedrock is laid through continuous delivery; the automation still pushes the software updates out to the environment the exact same way, it's just that a human drives it. And then, longer term for us, infrastructure CI is really how we envision metal as a service: metal as a service plus-plus.
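Sketching what that end state could look like: once Nova is backed by Ironic, getting metal is the same create_server call as getting a VM, just against a bare-metal flavor and a whole-disk image. The names below are hypothetical, and this assumes an Ironic-enabled deployment, which the talk places in the following year.

```python
# Sketch of metal as a service through the same control plane. Assumes an
# Ironic-backed Nova; flavor/image/network names are placeholders.
import openstack

conn = openstack.connect(cloud="intel-private")

# Identical to a VM boot; the bare-metal flavor routes the request to
# Ironic-managed nodes instead of a hypervisor.
server = conn.compute.create_server(
    name="tapeout-node-01",
    image_id=conn.image.find_image("baremetal-rhel7").id,  # whole-disk image
    flavor_id=conn.compute.find_flavor("bm.xlarge").id,    # bare-metal flavor
    networks=[{"uuid": conn.network.find_network("design-net").id}],
)
conn.compute.wait_for_server(server)  # minutes, not a hardware ticket queue
```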
So the lower-left-hand graphic depicts where we are today, what we're turning on this week in one environment: deploying our own cloud infrastructure through automation and CI/CD. Nothing gets to the production environment unless it flowed down this automated path to production. But we've got additional use cases targeted for next year, when things like Ironic get a little bit further down the path, where we will have targeted use cases to deliver end-user applications on metal to customers. And then ultimately we will find ourselves back in our software development and validation lab environments, where customers can consume the same framework to deliver products to market.

Really, for us, metal as a service is a game changer. We've got a huge silicon design environment; a bunch of us working in the cloud space today came out of that environment and came over to the enterprise space to work on this cloud stuff. We really see that we'll link up eventually, where our industrial workloads, taping out silicon projects or validating software projects, are just running on a platform, running on infrastructure. And that's really no different from cloud applications running on a platform, running on infrastructure. Metal as a service becomes a game changer once it's as easy to get metal as it is to get virtual. With MaaS we could take the pets-versus-cattle meme and drive it down to the foundation. It changes everything that we do.

On to our 2014 focus areas: rolling upgrades with no tenant downtime for resources or services, and connection into all existing infrastructure, so a single control plane is our biggest focus for the year. Disaster recovery between sites for VM tenants; restarting a VM when a host fails; hybrid cloud enabled through Horizon. And then using OpenStack to do additional work: on backup and recovery, we still have kind of the traditional media/master/client architecture for BaR, and we need to drive that forward. Bare metal provisioning, as I mentioned, and some of the higher-order load-balancing and firewall-as-a-service type automation. And then we've got some proprietary code around database as a service and load balancer as a service that will be displaced by OpenStack going forward, with things like Trove.

So in summary, our direction is a federated, interoperable open cloud. The single control plane provides a compelling glide path to get us there: it doesn't displace or rip-and-replace all of our past investments, and it really up-levels the capability we deliver to our customers, so it's highly compelling in that regard. And then, in order for us to run cloud at scale, we're still going to be focused on IT culture and our skill set, and on continuing to tweak our business processes, including governance.
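One of the focus areas above, restarting a VM when its host fails, maps onto Nova's evacuate call, which rebuilds an instance on a surviving host once the original host is down. A minimal sketch using python-novaclient, with placeholder credentials and server name; in practice this would be wired into monitoring automation rather than run by hand, and the exact signature varies with the Nova API microversion.

```python
# Sketch for the "restart a VM when a host fails" focus area. Placeholder
# endpoint/credentials; assumes the failed host is already marked down.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client as nova_client

auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",  # placeholder endpoint
    username="admin", password="secret",
    project_name="admin",
    user_domain_name="Default", project_domain_name="Default",
)
nova = nova_client.Client("2", session=session.Session(auth=auth))

server = nova.servers.find(name="app-web-01")  # instance on the failed host
# Let the scheduler pick a target host; on_shared_storage=True because the
# VM's disk lives on shared storage, so the rebuild keeps its contents.
server.evacuate(host=None, on_shared_storage=True)
```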
And then down to the technology discussions. What we are doing is we have implemented an SDN as an overlay network on top of our hypervisors, and we are leveraging two different models for networks. Where we have those non-enclave environments, we are still using shared networks with VLANs extended; and where we have more secured environments, the model we're going after is a per-tenant router with private networks behind it. So that's how we are getting the segmentation from a network perspective.

The next thing we are working toward: this addresses the virtual environment, but for the physical environment we want to introduce a similar level of abstraction that allows us to simplify our underlying network fabric. Whether that's along with Ironic, whether we introduce Open vSwitch on the physical switches, or whether we do OpenFlow, that's an area of research we are investigating, so that eventually we are able to abstract even the physical servers from the network. Okay, thanks everyone. Thank you folks.