We should be up and going. Can you hear me in the back? Okay, great. Thank you. So welcome. My name is Vish Netkarni. I'm part of the Intel data center group, and I'm delighted to present this sponsored-track session on Intel and our OpenStack initiative. Before I proceed, I was asked to make sure everybody in the room gets a passport stamp for attending this session. There's a raffle happening, and I think the prizes are Intel Compute Sticks with Ubuntu Linux on them. So when you complete the session, get a stamp in your passport, and if you don't know what that is, talk to my friend Krish over here. You need, I think, three stamps for one raffle entry. And for those of you who haven't checked out our booth downstairs in the marketplace, it's booth H3, so please stop by and check it out. Okay, I want to set the stage for about five minutes and then hand it off to my esteemed guests; I'll introduce them in a minute. Our job here today is to talk about Intel Cloud for All and the efforts we've undertaken with our partners. I'm delighted to be joined on stage today by Nick Barcet, director of product management at Red Hat, whom I'll introduce formally in a moment, and Jim Sangster, senior director of Solutions Alliances at Mirantis. Both gentlemen will discuss at length their efforts in Intel Cloud for All and some of the collaboration work we've done together. So as you know, cloud architectures are enabling growth and innovation, but broad adoption of the technology by enterprises and cloud service providers is still lacking. Cloud computing has become a tremendous driver of growth, and yet we see that enterprises in particular are struggling to adopt cloud technology. In fact, we see that the industry is not moving fast enough.
We have a situation where traditional enterprises, who want the same agility and efficiency as the large public cloud service providers, are not able to get it. So what we've done at Intel is announce the Intel Cloud for All initiative, and we've made a series of announcements this year: starting in June, when we announced the on-ramp program partnerships with Cisco and Dell; then in July, when we announced a partnership with Mirantis; and all through the year we've announced strategic partnerships with various companies to unleash these clouds for both enterprises and Tier 2 CSPs. Intel Cloud for All rests on three basic pillars: invest, optimize, and align. First, as I mentioned, we're investing in creating enterprise-ready, easy-to-deploy SDI solutions. SDI, for those of you who haven't heard the term, is software-defined infrastructure. It's the foundation for how we expect to deliver enterprises the goodness of the cloud model and enable them to deliver services through their data centers. As I mentioned, we have formed partnerships with our leading vendors and partners to deliver solutions to the marketplace. Second, we're optimizing for high efficiency across clouds. We are working with our partners, and Nick and Jim will talk about this, to bring in the features that enterprise customers care about. Think about high availability, scalability, resilience, reliability, upgrades; there's a whole set of features that enterprise customers care about, and that's part of the optimization of the code. And lastly, we are aligning our efforts to accelerate cloud deployments. Intel is one of the leading contributors in the OpenStack Foundation. We are part of multiple groups, and we provide our contributions and leadership through those forums.
My first speaker today is Nick Barcet, director of product management at Red Hat. I'm delighted to welcome him on stage. Intel and Red Hat have had a long history of partnership around OpenStack, going back especially to the Linux days, where you have been the leaders in open source. So I want to invite Nick to talk a little bit about what Intel and Red Hat have done in the OpenStack ecosystem for the enterprise. Thanks a lot, Vish. So I guess I need to switch to the other deck, correct? Funnily enough, this partnership between Intel and Red Hat is not something that started yesterday. A little more than ten years ago, I was working for Intel, and I was actually working on a program with Red Hat, one of many. And as soon as we get the slides, we'll have the detail of them, or maybe not. So that we gain a little bit of time, I'll talk about them. If you remember, Red Hat was founded in 1993. Ever since, there has been a very close joint interest, first in making people move to Linux, then in making people adopt virtualization, and more lately in making people adopt cloud. As Vish mentioned, we've been working on the Intel on-ramp program, which first demonstrated the OAT and TXT functions for added security in cloud deployments. And we've been operating a test flight platform for the past two years, demonstrating those. More recently, we announced a version of this same program, the on-ramp for the enterprise, to make it easier and more efficient for enterprises to use, deploy, and maintain OpenStack. And you're going to see later in this presentation that a little bit more has been and is being announced today. So that was the slide I was waiting for, but now I've said everything I wanted about this slide. For the past 20 years, we've been working together to make these technologies safe and secure for the enterprise to consume. And as we've done it for Linux, we are doing it again for the open hybrid cloud.
So, three key things in our enterprise-dedicated program. First, make it really easy to deploy. Second, provide a way for legacy workloads to have a home on OpenStack. Even though OpenStack is not very friendly to stateful applications, we've found a way to provide high availability for instances, or guests, in case a compute node fails. How do we do that? Simply by combining the technologies available in Red Hat Enterprise Linux with the ones available in OpenStack. Nova plus Pacemaker Remote allows us to automatically evacuate and restart VMs in case of a node failure. And third, something that is still not completed but on which we are still working today: rolling upgrades. When you have a very large cloud, and we really hope that our customers are going to have larger and larger clouds, you don't want to upgrade everything at once; it causes too much disruption. But when you do such an upgrade, you cannot, at this point, have different versions of different components running at the same time. So we've been working with Intel on implementing versioned objects, which, when completed, will allow us to tolerate this version mismatch, with two versions running at the same time in the cloud. We've also been working on making the ecosystem a little bit more vibrant, making sure that we are working with partners to deliver the value that we are creating together upstream. And we, of course, have a great go-to-market plan. But this doesn't stop at OpenStack. There are multiple communities in which we are jointly investing, upstream and downstream, in order to deliver the value of the open hybrid cloud. For example, Intel was one of the first sponsors of the DPDK project, which I think is now a foundation, if I'm not wrong, of which Red Hat is one of the members. Same thing for OpenDaylight, same thing for Open vSwitch, same thing for KVM; that's a much older history there.
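The versioned-object idea behind those rolling upgrades (implemented in OpenStack by the oslo.versionedobjects library) can be pictured with a small, self-contained sketch. The class, fields, and versions here are hypothetical illustrations, not Nova's actual objects:

```python
# Hypothetical sketch of versioned objects for rolling upgrades: a node on
# newer code downgrades its payload before sending it to a peer that still
# runs the previous version, so two versions coexist in one cloud.

class InstanceObject:
    VERSION = "1.1"  # version 1.1 added the 'flavor' field

    def __init__(self, uuid, host, flavor=None):
        self.uuid = uuid
        self.host = host
        self.flavor = flavor

    def to_primitive(self, target_version):
        """Serialize for a peer that may run an older object version."""
        data = {"uuid": self.uuid, "host": self.host, "flavor": self.flavor}
        if target_version == "1.0":
            # The peer predates 'flavor'; drop it so it can still deserialize.
            del data["flavor"]
        return {"version": target_version, "data": data}


obj = InstanceObject("c0ffee", "compute-01", flavor="m1.small")
for_old_node = obj.to_primitive("1.0")  # downgraded payload, no 'flavor' key
for_new_node = obj.to_primitive("1.1")  # full payload
```

The design choice is that compatibility logic lives with the object definition itself, so every service that passes the object over RPC gets the downgrade behavior for free.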
All this is a combined effort to ensure that we are delivering, in an open-source way, all the components necessary to deliver real value. And this, when you look at the components we are talking about here, applies both to enterprises and to telecom operators. We are also ensuring that we deliver the things we create upstream to real customers by including them in real solutions. The first solution in the on-ramp for the enterprise program is the one we are building with Cisco. One of the things that can be a challenge when you deploy OpenStack for the first time is the configuration of the hardware: how am I going to configure the hardware so that my OpenStack deployment happens correctly? By having reference architectures defined with hardware vendors such as Cisco, we are able to cut down on this complexity. By adding the value that is specific to Cisco, and the value that we've jointly developed with Intel, we are delivering a turnkey solution to customers that integrates with N1KV or ACI, that is deployed in a matter of a day, that integrates the UCSM functionality, and much, much more. With Dell: same problem, same type of solution, but here adding the specifics of the Dell components. We've been working with Dell on a program that we call JetStream for the past two and a half years, if I'm correct. We are now at version 4.01, just about to release 4.1. Dell has been building reference architectures with us and was one of the first partners to join the on-ramp for the enterprise program when they saw that we had solved the HA problem I was talking about. In fact, if you were at the last summit, Dell was the first partner to demonstrate this VM high-availability functionality, on their booth in Vancouver. We've been co-engineering this solution together, we've been building our support capability, and we've also been building the deployment experience by cross-training our teams.
We now have a very strong validated architecture that is also available in a matter of a day when somebody requests it. As I mentioned before, the on-ramp program started with the test flight program two years ago. In the past two years, more than 1,000 attendees joined the 40-plus workshops that we organized. We had hundreds of test flights happen on the hosted solution that we operate, demonstrating, among other things, TXT. We have more than 50 deals, meaning customers that have decided to deploy after doing the test, with some of the largest companies in the world. We've signed up a very large number of value-added resellers throughout the US. These are really concrete results of the actions we are taking together. Today I'm very proud to announce, and this is today's big news: Lenovo is joining the on-ramp program. From today, Lenovo, Red Hat, and Intel are going to collaborate on building a reference architecture for OpenStack. We are extending an existing relationship to provide a Lenovo-specific reference architecture for easy-to-deploy, secure, reliable OpenStack. This is going to be available first on two types of Lenovo servers, the 3650 and the 3550. It is of course going to use all the tooling that we've been working on upstream, known as TripleO, for the deployment. It's going to implement the management that is specific to the Lenovo systems, and it's going to be delivered worldwide pretty soon. Another announcement made today that is also quite exciting for me is the program called the Intel Network Builders Fast Track, which Imad Sousou announced in his keynote this morning. As you certainly know, we've been working very hard on making NFV, network function virtualization, happen at various telcos. One of the key elements there is exposing functionality that exists in hardware up the stack, all the way into VMs and, pretty soon, even into containers.
That takes the form of a partnership around DPDK, as I mentioned earlier, but also the integration of this into Open vSwitch, so that you can still benefit from the functionality of Open vSwitch and not just do a pass-through when you need fast packet transfer. This program is not going to stop there. It goes even beyond that, allowing the most demanding use cases we are seeing in the NFV space to be delivered to our joint customers. As you can see, we've been quite busy working with Intel, and I'd like to thank everyone at Intel who is present in this room for their support in our great adventure. And on this, I guess... Great, thank you very much. Don't go far, Nick, because we'll do questions afterwards. Obviously, do you want to do that one slide as a transition, or should I just jump in? Great, okay. As we're switching between the slide sets here, a little bit of background on what we see as the overall spectrum around ease of use, deployment, and solutions in this space. The way the market adopts OpenStack has been changing over time, and the more it moves mainstream over the course of the end of 2015, 2016, and beyond, the more important it becomes to make it easier to use, easier to deploy, and more packaged for customers. The far left of this spectrum is where the bleeding edge of the early adopters love to be. Many of you here have long been community members and love to be on the bleeding edge: do it yourself, go straight off upstream, community support or support it yourself. That is fundamentally important to taking OpenStack and moving it forward, and we never want to change that. As we move more into the middle, we've seen a number of different companies, Red Hat, Mirantis, and others, offering distributions that package and harden all the goodness of what OpenStack is about, offer that up to customers, and in turn provide support.
In the case of Mirantis, we also provide Fuel to make it easier to install; I'll get into that in a few slides. That is a way to take a technology, particularly one that's moving as aggressively as OpenStack does, and make it much, much easier to consume as a customer, and then, most importantly in an enterprise environment, support it over the long haul. And then on the far right, we have appliances: turnkey, completely packaged hardware, software, and support all together, certified by the partners I'll talk about, which becomes fundamentally important. It's very important to have choices across this entire spectrum. As we see customers moving beyond the early adopters and more and more into the mainstream, we expect to see more business grow on the right, but that doesn't mean we see the left ever go away. We're still going to see those aggressive people on the bleeding edge who want to just keep pushing OpenStack forward, and that's terrific. But it's a trade-off; there's an inverse proportion here. On the left, you're doing most of the work, but you also have to have quite a large engineering organization to pull that off. On the right, you're consuming lots of that from one or multiple vendors that bring together a solution, and you don't necessarily need that large team, and that matters because customers in the mainstream don't necessarily have a dedicated OpenStack engineering organization. So if we think about OpenStack, where it is today, and how we can simplify the experience, the deployment, and the ease of use at the infrastructure level, when it comes to Mirantis OpenStack, or what we call MOS, that's why we've developed Fuel. Fuel is part of our distribution. It's an open-source technology in its own right, and it is the number one dedicated, purpose-built OpenStack installer.
Now, there are other tools out there that people definitely use, Ansible and so forth, but this is a purpose-built tool just for this. It has a GUI and a command-line interface, and you can use either. This is how we can configure, deploy, and manage an OpenStack environment very, very easily, and that's why we invest very heavily in it. You can go straight out of the box, with what we call Fuel plugins, into different infrastructure technologies: storage devices from storage companies, different server or software-defined networking technologies, and so forth, are all automated through these plugins in Fuel. And then once you get OpenStack up and running, there you go, and you have the normal interfaces and so forth. So that's what takes care of all the infrastructure below OpenStack. The layer above that, and really the whole point of having infrastructure as a service running: what are you doing that for? You're doing it to run applications. At the next level, ease of use at the application level, that's what Murano is all about. Murano is a part of OpenStack at this point, and it's something that was brought to the community by Mirantis. It's also part of our distribution. And as of the announcement back in Vancouver, there is the OpenStack Community App Catalog, which provides access to all these different technologies: you go straight to that app community and can automatically deploy database management systems, container engines, or dev-test tools like Git, Gerrit, Jenkins, and so forth. There's a large set of different Murano applications to choose from. You can also create your own, and these can be composite, multi-component applications.
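For a sense of what one of these catalog entries looks like, here is a minimal, hypothetical Murano application manifest. The field names follow the Murano package format, but the application name and values are made up for illustration:

```yaml
# manifest.yaml for a hypothetical Jenkins package in the app catalog
Format: 1.0
Type: Application
FullName: io.example.apps.Jenkins
Name: Jenkins CI
Description: Deploys a Jenkins master onto a new instance.
Author: Example Org
Tags: [ci, devtools]
Classes:
  io.example.apps.Jenkins: Jenkins.yaml
```

The manifest is bundled with the MuranoPL class files it references (here, `Jenkins.yaml`) into a zip package that can be published to the catalog.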
So through Murano, you can not only use applications that are available to you from the community, you can also create your own applications and make them available to your end users in that application catalog. That's ease of use at the higher level, at a PaaS or application layer, and you can give your end users as much or as little capability as you want to utilize that technology. So those are the two ways we use this technology to provide simplicity at either the infrastructure or the application level. Now let's take that and extend it into a complete turnkey solution. If we talk about turnkey OpenStack ease of use, that means bringing it to the rack level. As I mentioned, that covers the hardware: the servers, the storage, the networking, any of the software-defined networking and software-defined storage, all together with the OpenStack technology. What we do at Mirantis for that is a program we call Unlocked Appliances. We did a technology preview in Vancouver, we announced the program in July, and we had our first solution at that point. So we have a portfolio of Unlocked choices and are building it out over time. This is more than reference architectures. Mirantis does have a number of reference architectures, and yes, these appliances are built on the foundation of reference architectures, but they take it steps further. Those steps include a fully automated software build in the factory through certified partners; then, once those partners deliver it on site to a customer, it's integrated into the customer's data center and network, power into the wall, and then it's certified on site for the customer to reduce their deployment risk. That certification is a fully automated test harness.
So when we're done with that certification process, we know we have a fully operational, up-and-running OpenStack cluster built to the spec that was designed in the factory, not only performing, but performing at the scale it was intended to. That's the idea behind the program as we've been building it out. Now I'd like to give a little bit of a forward-looking technology preview of an upcoming appliance; it'll also serve as an example of what one of these looks like. It's very, very typical: we've got top-of-rack networking, and we've got both compute-plus-foundation and storage nodes, all integrated into a single rack. In this particular example, we're using Arista switches for the top of rack, on both the management and the data plane. For the compute and foundation nodes, we're using the Supermicro SuperServer 2028TP, and on the storage side, we're using the Supermicro SuperStorage Server 6028R. In this case, it's an NVMe-based storage subsystem optimized for Ceph; Ceph is being used for the software-defined storage. By using Intel SSDs connected directly through NVMe, we get the highest performance possible in the interconnect between the storage subsystem and the rest of the node, particularly the performance we can pull out of the write journals in the Ceph environment. So we're always trying to take advantage of some of the unique and advanced features from Intel as we bring these out; this is one example, with Supermicro at the forefront of NVMe offerings that can then be brought into your data centers. Next, I want to come back to some of the things that Vish started with, the Intel Cloud for All initiative overall. It was mentioned that Mirantis became a part of that a few months ago, so I thought I'd share some of the things we're working on.
First, some context: a lot of what's behind Cloud for All is really making OpenStack the most suitable cloud platform for the enterprise. That may be a big statement, but it's what we're collectively trying to do together. I've given some examples here of what Mirantis has been doing in our previous and most recent releases, 6.x and 7.0 of Mirantis OpenStack, and the enterprise features we've been adding, previous to the Cloud for All initiative. Mirantis OpenStack supports 200 nodes out of the box. We can go to many thousands of nodes outside of an out-of-the-box engagement, but just opening the box, with what you can download and use, 200 nodes are supported. We've integrated KVM side by side with VMware, as well as NSX-V, into a hybrid environment that is very common for enterprise customers. Today we support vSphere with a Fuel plugin, and very soon we will have an NSX-V Fuel plugin as well. As I mentioned earlier, that makes it very, very easy for you to deploy as you roll that out in your environment. We've also recently introduced hierarchical multi-tenancy through Keystone. So it's not just multi-tenancy, but multi-tenancy within the multi-tenancy; you can cascade all the way down to support a very large organization. We've also enhanced our in-place upgrades, so you can go from one version to the next more elegantly, and we've implemented rollbacks. Beyond that, we're busy at work with Intel on how we take this technology and push the enterprise agenda forward, not just selfishly amongst ourselves, but in the community. This is all work going on upstream together, led by Intel and Mirantis, and the joint enterprise features you see on the bottom are what's defining that in future releases.
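The hierarchical multi-tenancy Jim describes can be pictured as a tree of projects. Below is a minimal, self-contained sketch, not Keystone's actual implementation, with made-up project names, showing how an inherited grant on a parent project covers its whole subtree:

```python
# Hypothetical illustration of hierarchical multi-tenancy: projects form a
# tree (child -> parent), and an inherited role or quota granted on a parent
# project cascades down to every project nested beneath it.

projects = {
    "acme":        None,          # top-level tenant
    "acme-eng":    "acme",        # child of acme
    "acme-eng-qa": "acme-eng",    # grandchild
    "acme-sales":  "acme",
}

def subtree(root):
    """Names of `root` and every project nested beneath it."""
    result = [root]
    for name, parent in projects.items():
        if parent == root:
            result.extend(subtree(name))
    return result

def covered_by_inherited_grant(grant_project, target_project):
    """True if an inherited grant on `grant_project` applies to `target_project`."""
    return target_project in subtree(grant_project)

eng_scope = subtree("acme-eng")  # the projects an acme-eng grant reaches
```

In a real deployment the same tree shape is created through the Keystone v3 API, for example with something like `openstack project create --parent acme acme-eng`.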
So, enhanced role-based access controls; but the other two items at the bottom are a little more near and dear to my heart, and I want to talk about them a bit more. Many people feel that VMware is just this untouchable thing in the enterprise and OpenStack is never going to get there. We don't believe that to be the case. We're taking strides to reach performance and feature parity between what you get when deploying OpenStack and what customers enjoy with VMware. Specifically: live migration performance and reliability, what customers have been expecting and enjoying with VMware when they use vMotion and DRS and all those types of capabilities. These are the types of enhancements that together we're putting in upstream and working very hard on, so that an OpenStack environment behaves the same, if not better, by the time we're done. Also in an HA environment: better handling of failed nodes, and improvements in predicting when and how we can move things around to accommodate those failures. Parity with, say, the VMware HA feature is what we're trying to achieve there. Those are some examples; we have a full roadmap that we're working on together, and again, we're working collectively with the whole community to make this very, very suitable for enterprise-class customers, with an overall end delivery of enterprise-grade OpenStack for everybody. I think that's well suited to the name Cloud for All. At that point, Vish, why don't you come on up and close it down, and then we can do some questions at the end. Thanks, Jim. So, can you believe it? We are actually almost on time, even with our technical difficulties. I'd like to request Nick to come up on stage, and we'll take a few questions from the audience and let these gentlemen answer them. Questions? Well, you guys are convinced. Wow. That is amazing. Okay. Going once. Okay.
Before you go to happy hour, a quick plug for those of you who joined late: please participate in the Intel passport program. You need to get a stamp for attending this session; my friend Krish is going to hand out some of these passports. There is a cool raffle for you to enter. I think the raffle is drawn multiple times a day, and there are 12 cool prizes: Intel Compute Sticks with Ubuntu Linux. So do stop by our booth, booth H3 in the marketplace. I want to thank you for your time and for attending the session. Thank you so much, to you and to our speakers.