All right. Looks like we are live. How is everyone doing? You can hear me in the back? Yeah? Good? All right. Perfect. So my name is Arvind Soni. I'm lead product manager for VMware's efforts with OpenStack, including VMware Integrated OpenStack, which is our distribution of OpenStack. With me we have Akira from Yahoo Japan. You want to introduce yourself? Yeah. My name is Akira Kamiyo. I'm an OpenStack engineer on the Yahoo Japan infrastructure team. Okay. Sounds good. So here's roughly the agenda. We're going to keep it completely non-technical. We'll talk a little bit about decisions around OpenStack and what kinds of factors come in. We'll mostly keep it at a business level, but feel free to ask any technical questions. There are more technical sessions on Thursday. So here's roughly the agenda. We'll talk about the three important decisions that probably every one of you who's thinking about OpenStack will have to make. I'll give you my personal opinion on some of those decisions and how to think about some aspects within them. Then we'll describe VMware's approach to OpenStack and the success that we have seen so far with that. Then we'll go over Yahoo Japan and their perspective on OpenStack and VMware. So let's get started right into it. OpenStack, from its genesis, if you look back, was inspired very much by Amazon, and it still continues to be. If you look at the Amazon Web Services and the older, mature OpenStack services, the pairs line up: Nova and EC2, Cinder and EBS, Neutron and VPC, Swift and S3. Very compatible. So the first question that you have to decide is: why are you building an OpenStack cloud? Because one thing I can guarantee you is that private cloud is not going to be easy at all. It doesn't matter which way you do it, doesn't matter which products you use, doesn't matter which route you go. It's just not going to be easy.
So if private cloud is not going to be easy, you have to have a very, very compelling reason why you are doing private cloud. Right? That makes sense. So the first thing for you to decide is, seriously, why build OpenStack, or any kind of private cloud for that matter, because OpenStack is just a piece of the private cloud. Here is the high-level trade-off. When you build a private cloud, you own it. You are not doing a pay-as-you-go model. You can think of it as renting a house versus buying a house. At some point the economics play out, and you should buy a house if you're going to live in a place for five, seven years. If you're going to run the same workload day in, day out, maybe just build the private cloud. But that's up to you to decide. Right? That's the high-level trade-off. That's the first decision to make: why build an OpenStack cloud? Sounds obvious, but a lot of people jump in head first and only afterwards think about why they were doing it anyway. Second thing is, OpenStack itself needs a definition. So let's first define OpenStack. OpenStack is not everything. OpenStack is a framework, a bunch of Python code, that delivers various APIs and services, and these things have been built in a vendor-neutral fashion. So VMware can plug in their drivers. If you have a product, you can plug in your drivers. It's a pluggable, vendor-neutral framework that gives you APIs and services. Right? So far, so good. It does not come with infrastructure. Whether it's KVM, vSphere, whatever you pick, you have to pick an infrastructure, and you have to pick all three of them, compute, network, and storage, together. You have to think of them together. Don't just think about the hypervisor or the server or the network or the storage. You have to think about them together. This is cloud, not virtualization. Virtualization was 2008. This is 2016. It's about cloud, right? So you have to decide what infrastructure you're going to use. That's your second important decision, right?
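To make the rent-versus-buy trade-off concrete, here is a rough back-of-the-envelope sketch. All the dollar figures are hypothetical placeholders; plug in your own quotes.

```python
# Hypothetical break-even sketch for the "rent vs. buy" trade-off.
# All numbers are made up for illustration.

def breakeven_months(private_capex, private_opex_per_month, public_cost_per_month):
    """Return the month at which cumulative public-cloud spend overtakes
    the cost of owning a private cloud, or None if it never does."""
    if public_cost_per_month <= private_opex_per_month:
        return None  # renting stays cheaper forever
    return private_capex / (public_cost_per_month - private_opex_per_month)

# Example: $1.2M up front plus $30k/month to run your own cloud,
# versus $80k/month on a public cloud for the same steady workload.
months = breakeven_months(1_200_000, 30_000, 80_000)
print(f"Break-even after {months:.0f} months")  # → 24 months
```

If your workloads run day in, day out past that break-even point, owning starts to win; if they are bursty or short-lived, renting usually does.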
The third important decision is, how are you going to go about building this OpenStack cloud? And there are at least three or four different approaches. Whether you're going to do it yourself, pull the code from upstream, build a cloud, run a whole bunch of scripts, hire a whole bunch of engineers. Or you want to go with some consultants, give them a whole bunch of PSO money and get it up and running. Or you want to go and buy a whole bunch of products and build it with those product companies, right? I'm not giving you any guidance on which one is cheaper. I'm not saying this is right, that is wrong. There are success stories for every approach, and there are failure stories for every approach as well. But you have to decide which approach you're going to take, because it has repercussions down the line, right? Your success will be heavily dependent on what approach you take. So these are some of the thoughts from my experience working in this OpenStack space for the past three years. And these are some of the examples of why people say they want to build an OpenStack cloud, right? Some are obvious. I want to increase my developer productivity. Yes, it takes several months to give them a VM. I want them to have a self-service model so that they can call the Nova API and get a VM just like they do against EC2. Great, makes sense. I want to save money. Where do you want to save money? Where are you going to save money? OpenStack sits on top of infrastructure. Whatever infrastructure you are buying or planning to replace for cost reasons, you should go ahead and do it. OpenStack has got nothing to do with it. It is sitting on top of your infrastructure, right? So let's just be clear. Where are you going to save money? And you've got to really double-click and think about it. I'm going to save money on people. I'm going to save money on licenses. I'm going to save money here. Let's be clear. Where are you going to save money? Ask yourself the question, right?
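To illustrate what that self-service Nova call looks like in practice, here is a minimal sketch of the request body that Nova's create-server API (POST /servers) takes, with the equivalent openstacksdk call in comments. The image, flavor, and network IDs and the cloud name "private" are made-up placeholders.

```python
def server_request(name, image_id, flavor_id, network_id):
    """Build the JSON body Nova's POST /servers call expects."""
    return {
        "server": {
            "name": name,
            "imageRef": image_id,
            "flavorRef": flavor_id,
            "networks": [{"uuid": network_id}],
        }
    }

body = server_request("dev-vm-01", "img-1234", "fl-5678", "net-9abc")

# With a real cloud configured in clouds.yaml, the equivalent call via the
# openstacksdk library would be roughly:
#   import openstack
#   conn = openstack.connect(cloud="private")
#   server = conn.compute.create_server(
#       name="dev-vm-01", image_id="img-1234",
#       flavor_id="fl-5678", networks=[{"uuid": "net-9abc"}])
print(body["server"]["name"])  # → dev-vm-01
```

The point of the self-service model is that this one call, not a months-long ticket queue, is what stands between a developer and a running VM.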
The third one is really the one that pisses me off the most, to be honest: vendor lock-in. Ask yourself the question: which OpenStack deployment in the past five years is free of vendor lock-in? There are several hundred OpenStack deployments. Which one of them is free of vendor lock-in, and why is it free? First of all, is there even such a thing as freedom from vendor lock-in? You go to AWS, you are stuck over there. You build your own OpenStack cloud, you are stuck with the people who built the cloud. The day they leave, you are left in the lurch, right? It does not work out. There is pretty much some degree of lock-in, if not total lock-in, in every approach. That's just my personal opinion, but you have got to go think about where this lock-in, or no vendor lock-in, is coming from, right? What am I going to do that will make me free of these vendors who are trying to sell me OpenStack in the first place, including VMware? But once you have decided, let's say you decided it for some good reasons. For example, I want to do it for developer productivity. Clear enough reason. Still, why would your developers come and use your OpenStack cloud when they can easily go to AWS and get a much higher SLA than your private cloud will probably deliver in the first year? You will run into issues. As I said, cloud is difficult. So you have got to think again, right? Even if you have good reasons, think again: why are you building an OpenStack cloud? In fact, why are you building any private cloud? You have got to think it through again and again and again, right? Be very clear why you are building it. And especially watch out for these fluffy reasons, no vendor lock-in, or I'm going to save costs. Let's go do the math on how we're going to save costs. All I want is for us to go into doing OpenStack with a clear sense of why we are doing it.
Because the most success that I have seen, and I'm not just talking about people who are doing it with VMware, is where they have a clear sense of why they are doing OpenStack and there is a clear path. Like PayPal is doing OpenStack because they want to run the PayPal e-commerce platform on top of it. Great, go ahead, do it. Makes sense for them, right? So that's the first very important decision, and here is some food for thought for you to consider, right? One easy way to figure out whether you should build OpenStack is, again, to go look: do you have people going to AWS? If yes, then you can potentially get them to use OpenStack. OpenStack has some services that can mimic AWS-like behavior, right? But if you don't have people going to AWS, or if you don't have a clear idea, then let's go back to the whiteboard and think about it. Okay, let's say you decided that you're going to build OpenStack. This is my favorite slide. This is called overflowing choices, just death by choices in OpenStack. There are 3,000 infrastructure combinations based on the support matrix from Nova, Neutron, and Cinder. Anybody disagree with that? There are just that many choices. Only very few of these combinations, with exact versions of these products, actually work with each other, right? I know because I work for a company that makes a hypervisor, so I know very few products in this stack work with that hypervisor, right? Similarly, there are very few networking products that work with the other hypervisors in a reliable, proven manner. So here is something to think about. Which one of these products, or which combination of products, are you going to use? Again, let me reiterate: it's not just about the hypervisor. Cloud is more than the hypervisor, right? It is network, storage, and compute all working together, along with your operations and your expertise. If you can't run a data center with the set of products you are choosing for OpenStack, you can't run an OpenStack cloud on top of it.
OpenStack sits on top, and it will give you more services to worry about. It will give you Nova, Cinder, MariaDB, message queues, load balancers. You will have more on your hands, right? So you'd better use products that you know how to operate. That will reduce the risk and give you time to focus on OpenStack. That will increase your chances of success with OpenStack. And in our case, it is the VMware customers who know how to run a VMware data center. We target them and give them OpenStack solutions so that they can be successful. Makes sense? Anyway, the bottom line is: pick the infrastructure combination, with the exact versions of those products, that you know works in production. OpenStack is not magic. It cannot fix your underlying data center problems, right? It really just sits on top and calls the underlying products from the Nova driver, the Neutron driver, the Cinder driver, things like that. Okay, you decided you're going to do OpenStack, you picked the best products that you're going to work with. Okay, how are you going to build the OpenStack services, and what architecture will you use? What approach will you use to build that OpenStack? That's very, very crucial as well. There are at least two different approaches, two categories of approach. One is: I'm going to download it, or I'm going to get a bunch of consultants, and I'm going to completely customize this and make my own unique OpenStack cloud. Great examples of this would be PayPal, Comcast, all these folks. They have built very, very tailored OpenStack for their own use cases, which is perfect. Yes, it's a technology, you should use it that way if you want to. But the problem with that is, if you're going to build a very unique cloud, you will go into that rat hole of maintaining that unique cloud. You are responsible for maintaining that cloud after that.
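For instance, the Nova driver wiring mentioned above is, mechanically, just configuration. A rough nova.conf sketch pointing Nova's compute driver at a vCenter might look like this; the hostname, credentials, cluster name, and datastore pattern are all hypothetical placeholders:

```ini
# nova.conf on the compute node -- illustrative values only
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
host_ip = vcenter.example.com
host_username = administrator@vsphere.local
host_password = secret
cluster_name = Cluster-01
# Only expose datastores whose names match this pattern to Nova
datastore_regex = openstack-.*
```

Swap the driver and the matching section and the same Nova API fronts KVM, vSphere, or something else, which is exactly why the infrastructure choice underneath matters so much.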
You will start digressing, you will start incurring the headache of maintaining that patch. If it is not part of upstream, you will have to maintain it each and every time you upgrade and patch, right? That's the trade-off. What's the benefit? You got the flexibility, you made the cloud tailored to meet your needs, great, but it comes with the cost of the headache of maintaining it over time, right? The other approach, which is more of what we at VMware believe in, is a more standardized approach. Your OpenStack cloud and my OpenStack cloud will have the same architecture. They will have 99.9% the same configuration. If you run into an issue, I can simulate it in a lab and fix it for you. Great benefit: VMware will be able to support you in that. What is the big downside? I cannot customize it for you that much. You come and say, I want to use that different product, I cannot do it. I want to use these 10 different configurations, I cannot do it, right? So you see the trade-off, right? You can build a custom, tailored cloud for yourself, make a unique snowflake, or you can go with a standard, cookie-cutter approach, right? Like what I'm going to explain with VMware Integrated OpenStack. So that's the kind of trade-off. But this is an important trade-off, because on one side you will have to babysit each and every change that you make. On the other side, you will be limited in the choices that you can and cannot make, right? And our general stance is: don't build unique snowflakes. It just incurs a lot of headache over time. And you think, okay, what's so wrong about me changing five different parameters? Right, you go into nova.conf, lines 75 to 80, and change all five parameters, no big problem. Well, the problem is you changed it, and now when you call vendor support, they don't know everything you have changed. So they have to figure out what the repercussions of the configurations you changed are, right? It becomes very, very difficult.
I have lived through this hell all of my 2014, so take it for what it's worth. Don't build unique snowflakes. And if you don't trust me, then here is a quote from big companies who spent a lot of money customizing OpenStack for several years and then arrived at the understanding that it's really hard to maintain the changes. Every change you make to make OpenStack unique to your cloud, assume it will come with long-term babysitting headaches. Every change is like a child that never goes off to college. You have to pay for it with every patch and upgrade. Think about that every time, right? So we'll use one more quote from Jim. Let me summarize what we did. You decided why you want to do OpenStack. You picked the best products that you know how to operate, and you picked the approach that works best for you. We showed you some of the trade-offs, right? So once you have decided, here is VMware's approach to OpenStack and what we have seen in the past three, four years that we have engaged with OpenStack. So another comment from Jim is that there are two main things that need to happen for OpenStack to become mainstream in a large number of enterprises. The first thing is you have to simplify operations. It sounds so brain-dead. Yes, you have to simplify operations. Otherwise, how will people run this in their data center if you can't simplify operations? But guess what? Operations, most of the time, is an afterthought. People are so bogged down in building OpenStack that they don't think: how are you going to patch it, monitor it, troubleshoot it? How are you going to upgrade it? Nobody thought about it, right? The second one, kind of a revelation coming from the Red Hat side, is that you have to have high availability features inside your cloud. Why? What happened to all the cattle versus pets? Cattle versus pets, whatever that is, right? The cats and dogs and cows and buffaloes. Well, actually the world is not that binary.
If you look at your applications, it's not so clear that everything is a pet or everything is cattle. Some things are databases, some things are Apache web servers, some things are message queues, some things are stateful, some things are stateless. Your developer does not care. They want SLAs. When you go to Amazon and say I want to run a workload, Amazon does not ask you, are you going to run cattle or are you going to run a pet? No, they give you the infrastructure with an SLA, and that's how it works in the cloud world. You don't ask them the question, you don't say go build high availability into your app, otherwise you cannot use this cloud, right? That's not how it works. So that's the part about the workload type. Now let's say even if all the applications in the world were cattle, right? They were built with the best high availability in the application layer itself. Tomorrow you have to go and replace racks of servers because they're aging and you have to put in new servers, right? So if you have to do that, you have to put these servers in some sort of maintenance mode, move all the workloads, and then pull them out and bring in the new hardware. How are you going to do this? That's why I said, pick the infrastructure that you know how to operate, right? Pick the infrastructure, the combination of infrastructure, that you know how to operate in the data center; otherwise it's going to be a nightmare. So here's VMware's approach, following Jim's very good advice, although we were already quite ahead of his advice. He gave it in 2015; we have been working on this since 2012. So what's our approach? What's our philosophy behind VMware Integrated OpenStack? Number one, simplify OpenStack operations. If you and I can't operate it, forget about it, let's not do it, right? And here's a good example. Yesterday we were doing a training of some 30-odd people new to OpenStack who wanted to learn VIO.
Over lunch, in 30 minutes, these people deployed complete production-ready OpenStack. Of course it was in a lab setting, but it is a complete production-ready environment. 30 people deployed it in 30 minutes, and guess what? They were not even paying attention; they were eating their lunch. I'm not kidding, seriously. They were focused on the bento box, not on their VMware box. No CLI was being run, no 48 or 50 steps of Puppet and these things. Those are all error-prone. If you're going to incur that amount of complexity in building a cloud, it will take six months. Again, the developer is going to go to AWS; he's not going to wait for you to build this thing for six months, right? It has to become brain-dead simple for the private cloud to succeed. And we are trying to get there. I'm not claiming we are there. The second aspect is around those high availability features. And this is not just saying that vSphere HA is supported, but all the things that VMware has built over time so that you can move your workloads, you can evacuate your hosts, everything. From the datastore abstraction that we have built to the DVS abstraction, everything is applicable and works in an OpenStack environment. So for our VMware customers who know how to run a VMware data center, it just works. OpenStack just sits on top of it, right? So that's our philosophy, and we want all of this to be production-ready and supportable by VMware. And that's why we give you a fixed architecture. We don't let you digress that much. I mean, we allow you to configure some things, but for the most part we don't want you going into a snowflake rat-hole kind of situation, right? So we launched our distribution. This is a summary description of it. That big dark blue box is VIO. It contains standard OpenStack, DefCore-compliant OpenStack, right? The same standard APIs, same services, no secret sauce in there.
It comes with an installer, which is an Ansible-based installer. It just goes ahead and clones the VMs and installs all the OpenStack services. There is no license cost to it. If you want VMware support, it's a nominal $200 per CPU per year. And as I said, this is a very cookie-cutter approach. If I go and deploy this for 10 different customers, I know each and every one of them is running the same standard architecture. I can support them very easily, right? And this has not happened overnight. We have been at it for three, four years, as I said, and in 2015 we launched version one. It solved a lot of operational problems. Seriously, you can deploy complete production OpenStack with VIO in 30 minutes. Not only that, it comes with patching. It comes with how you will add capacity, how you will remove capacity. It comes with fully automated upgrade in 2.0. One of our customers just did the upgrade and we didn't even know about it. No PSO involved, nothing. Went from Icehouse to Kilo in about one hour. No questions asked, right? And if you want to learn how that happens, there is a Thursday session, and you can ask me questions, time permitting. I will show you. I mean, this is the key. And it's all DefCore-compliant APIs. It's the same OpenStack APIs and services that you get anywhere. And here's the best part. If you write your applications against these APIs, you can take them to other OpenStack clouds. You are not locked in. This is where the no vendor lock-in comes in. It's at the API level, at the level where you're talking to the OpenStack APIs. You write all your applications, you take them to another OpenStack cloud, it's perfectly fine. Your Heat templates that work against VIO work against any other OpenStack cloud, right? That's the DefCore compliance. That's why API compliance is crucial. That's kind of an advantage against AWS, right?
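As a concrete example of that portability, here is a minimal Heat template sketch that uses only standard resource types; the image and flavor defaults are hypothetical placeholders:

```yaml
# hello-server.yaml -- a template like this runs on any
# DefCore-compliant OpenStack cloud that exposes Heat
heat_template_version: 2014-10-16

parameters:
  image:
    type: string
    default: ubuntu-14.04
  flavor:
    type: string
    default: m1.small

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }

outputs:
  server_ip:
    value: { get_attr: [server, first_address] }
```

Launching it looks the same everywhere, roughly `openstack stack create -t hello-server.yaml my-stack`; the template itself carries no VIO-specific content.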
If you write against AWS, you can't go anywhere else other than AWS. Here are some of the success stories for VIO. With this approach, we have been highly successful. I'm not saying that we have not seen issues in production, or that I and my engineering team have not lost sleep over it. It has been successful in the sense that we have created a model which can be repeated again and again, right? All the learnings that I get from every customer, I can give to you. So when you take VIO now, it's much more enriched from all the learnings that we have. It's the same standard approach. I didn't create 10 different OpenStack deployments. I created one deployment 10 times, and I take the learnings from it, all the bug fixes, and I pass them on to the 11th customer, right? You see how fast we can learn with this approach? It really helps us grow much faster and deliver results. And look at the impact of it. Ten weeks to go into production from scratch, with no knowledge of OpenStack. Four guys operating the entire 5,000-plus VMs that they have now, running a full production e-commerce website, leveraging core features like vSphere HA and doing vMotion and DRS live underneath OpenStack, right? Very powerful cloud. The other customer, as I mentioned, customer number three, they migrated from Icehouse to Kilo without anybody knowing anything, including us. We got to know about it when they ran into an issue three, four days after the upgrade, right? Customer number two, again, they had to replace a whole bunch of storage and servers. So they used vSphere features to evacuate the hosts and deliver it. In a nutshell, it is a really fast and reliable approach. It's really one of the fastest approaches to OpenStack. It's very reliable, and the biggest thing of all is that it's operationally complete. Operations is not an afterthought for VIO. It is the primary value prop. It is the primary focus that we deliver.
So with that, I'm going to hand over to Akira, and he's going to give you some background on Yahoo Japan and how they're using OpenStack, right? Okay. I'm here today to talk about OpenStack and VMware. This is today's overview. First, I will introduce the OpenStack history at Yahoo Japan. Then, the advantages of OpenStack, followed by the reasons we chose vSphere and our experience building OpenStack using the vSphere plugin. After that, expectations for VIO, then future planning with VMware. Lastly, I will conclude with a summary. I'd like to first talk about OpenStack history at Yahoo Japan. In August 2013, we released a Grizzly cluster. The components running in the cluster were Keystone, Glance, Cinder, Nova, and Horizon. Then we released Havana and Icehouse clusters. We added the components Heat, Ceilometer, and Swift, and we enabled boot from volume, Load Balancer as a Service, auto scaling, VM resize, and VM migration. In 2015, we released a Juno cluster, and in October we released a VMware cluster in addition to the KVM clusters. In December, we will release a Liberty cluster. Today, over 15,000 instances are running in Yahoo Japan, with six times the density compared to the physical environment. Now we operate over 20 OpenStack clusters. Our resources are 150,000 vCPUs, 200 terabytes of memory, and 20 petabytes of storage in the OpenStack environment. Next, let me talk about the advantages of OpenStack. First, OpenStack is open source software. Also, we can use many, many useful OSS tools with OpenStack. Second, OpenStack has standard APIs. OpenStack can easily cooperate with automation subsystems and CI/CD solutions. Third, it is easy to operate infrastructure through standard APIs. It is important to have the same API in any operating environment, such as KVM, VMware, bare metal, or containers. Our mission is to abstract the data center using OpenStack. Next, let me talk about why we chose vSphere. There are three points here. First, the VMware ecosystem is feature-rich.
For example, vSphere HA, vSphere FT, vMotion, DRS, and so on. Second, reduction of security risk through isolation of virtual machines at the virtualization layer. Third, VMware has been used successfully worldwide. Next, let me talk about our experience building OpenStack using the vSphere plug-in. First, I saw that VMware is feature-rich. We aimed for a simple structure, because this was the first OpenStack-on-VMware environment in Japan, but VMware has a lot of configuration options. For example, VDS, NSX, NIOC, DRS, and so on. Second, I think that vCenter is powerful. vCenter is VMware's core. All instructions from OpenStack are sent to vCenter. Last Friday, an ESXi disk broke. In the case of KVM, the operating system would have stopped, but ESXi kept running in memory. I was impressed by that. As a result, engineers can use VMware resources on demand through the same APIs as KVM. This is our VMware OpenStack cluster structure. Users operate VMware through OpenStack, and we manage service networks using VDS through Neutron. Next, let me talk about expectations for VIO. First, I hope for more abstraction in VIO. It is difficult to use VIO for those who don't understand VMware. Second, I want a vCenter auto-backup function. vCenter is a single point of failure; we cannot operate if vCenter is broken. Third, I want more choice in VIO deployment. VIO can currently only be deployed from the VMware plug-in. We want a Python package. We'll work on it, we'll work on it, yeah. Okay. Let me touch on future planning with VMware. We will manage various networks using NSX. It is possible to manage your network from OpenStack by using NSX, for example, micro-segmentation and Firewall as a Service between services. Next, let me summarize the points of our presentation. First, OpenStack manages environments such as KVM, VMware, bare metal, or containers through the same API. It is an important point for us that engineers can use VMware resources on demand through the same API as KVM.
Second, OpenStack with VMware has plenty of configuration choices. VMware helped us, and I appreciate it very much. Thank you. All right. Sorry. We will get that all implemented for you. Don't worry about that. All right. Any other? Ah, sorry. Third, we have expectations for VIO's future. I think that VIO is a very convenient deployment tool. VMware is a powerful virtualization solution, so I look forward to VIO's future. That's all of my presentation. Okay, good. I think we can move to the next slide. I got one that works for me, I guess. Any questions? That's all we had. So if I were to summarize, look, as Akira said, right, we're not without problems. I'm not saying that VMware solves the world hunger problem of OpenStack. No. That's why I started with the high-level concepts of why you're going to build OpenStack, what infrastructure you're going to choose, and what approach you're going to use with OpenStack. All of these determine your success with OpenStack. And we presented why VMware is a compelling choice there, right? You have the expertise to run a VMware data center. You can use VIO to deploy OpenStack pretty fast, manage it, operate it, and things like that. Yes, there are some areas where we need to focus. We need to add support for multiple vCenters underneath, so that vCenter is not a single point of failure. We need to reduce the set of configurations, and things like that. Sure, every product and piece of software has more things to do down the line. But there are more sessions; please go check them out, especially the technical deep dive on Thursday at 1:50 p.m. I will give you complete technical insight into how things work. Why are we able to do an upgrade in an hour without disrupting anything? Why are we able to deploy in 30 minutes without you having to forklift anything, and you can have your lunch while it's deploying? And any questions that you have? For me or for Akira. I think it's at 120 or 140 ESX hosts, something like that. ESX hosts.
So I think these are all pretty beefy 12-core, 24-core servers. I don't know the exact VM count because they keep creating and deleting. I think by now they're probably at 5,000-plus VMs in a static environment. And this is all just one deployment. One vCenter. I'm surprised that it is handling that, but that's a different story. How fast do you want it? If we gave you Liberty tomorrow, would you roll it out into your production? Of course not. So the more real question is, what pace makes sense for customers to change a crucial control plane? Does it make sense for a crucial control plane to change every six months? Probably not. And that is what we are trying to balance. I have a short answer for you. We will do it every three to five months. The difficult part is that nobody's able to consume it. Hopefully with our upgrade mechanism they will be able to consume it faster, but even then, nobody wants to disrupt production that fast. So six months; we will get to every three to five months. I don't know whether the customers can consume it. That's the sad part of it. I don't even think it's necessary to disrupt a control plane every six months. It's just pretty challenging. Yeah, I totally agree with that. So that's why we'll have something like three to five months. Most of our customers say 18 to 20 months is when they will change something like this. Any other questions? Magnum is not included right now. So right now we have all the core ones up to Kilo. Magnum is still a little bit immature. We're going to keep closely monitoring it and add support when it becomes more stable, most likely in 2016. We'll look forward to it. So all the core ones up to Kilo: Heat, Horizon, Nova, and Glance, everything that you would expect is there. Anything in particular that you want to see other than Magnum? VIC, those are some of the areas that we are exploring for sure. Any other questions in the back? You guys caught anything, or was it all lost? Good, okay.
All right, sounds good. So if you guys are going to decide on OpenStack, give VIO a try. The only thing I can promise is that it's going to be a much smoother ride, as smooth as it can get with OpenStack. That's all I can promise. All good? Thanks for coming, guys.