Aptira is an OpenStack provider in Australia. We also do APAC. Most of the solutions that we provide are hybrid cloud, private IaaS-type stuff. We've got customers in pretty much every sector of the economy in Australia: government, retail, you name it, we do it. We also do a lot of tech consulting for large firms, so telcos and the like that have really big scale and load requirements. And I guess the last thing about Australia, sorry, about Aptira, that's important is that we founded the OpenStack user group there, which has been really important from a community perspective, and we're doing the same thing in India.

So from my perspective, I'm the director of cloud operations. That means that I don't just write the Puppet module; I have to decide which technologies are worth writing a Puppet module for, and how they fit into the business strategy. I've been working for Aptira since August 2012, and you can see some of my previous credentials up on the slides there. I guess the main one is that before I worked at Aptira, I used to work at the NeCTAR Research Cloud, where we did a very large OpenStack deployment, currently something like 30,000 cores and 3,000 users, I think.

So when I started a year ago, this was pretty much the business model for making money from OpenStack in Australia: we're going to build a cloud, and somehow we'll make money from it. And something similar on the external use case as well: we're just going to build a cloud and the hype will take care of the rest. But as we started to talk to customers, we discovered that this wasn't really a viable business model at all, and there were a bunch of realities getting in the way of those perceptions.

So, internally, we had been using vSphere for a lot longer than I've been working at Aptira, and the local business had been growing quite successfully for many years before I started, 30% per annum on average. A lot of the features in vSphere were our bread and butter, right? High availability, power management in the rack, stuff like that, features that we were selling to our customers and that they expected, feature for feature. At the same time, we were finding that OpenStack couldn't actually provide the things we needed. I guess that's changed a lot now, but at the time it wasn't like that. As well, most of our customers were what we call "pets" customers, right? They had a stateful virtual machine and it needed to stay alive at all costs. They weren't running a stateless application spread across multiple machines, multiple networks and multiple availability zones. The last thing, I guess, that's important is that we'd built a whole operational infrastructure around this, and scripts like ghettoVCB were extremely important to what we were doing; we couldn't just palm all of that off into nothing.

And we saw the same thing when we went to our customers. Australia happens to be the most virtualized country per capita in the entire world, and vSphere makes up the majority of that virtualization market. So we would be going to customers two, three times a week asking them if they were interested in OpenStack, and the answer was always the same: does it do vSphere High Availability? Does it do DRS? Does it do the features that we're already used to with vSphere?
All of the customers or potential customers we were talking to had existing vSphere deployments that they'd completely operationalized around, and there was a massive amount of inertia behind that. It meant that when we suggested building a KVM OpenStack or similar solution, they were just not interested. So, I made a little pie chart from my inbox; you can see it down the bottom there, I hope you can read it. Basically, it shows that more than half of the questions we got over the last year to do with OpenStack were actually "does it have VMware feature X, Y or Z?" Only a very small portion of the questions were "how much does it cost?" or about OpenStack itself.

So, we pretty quickly realized that vanilla OpenStack, which I'd successfully used in my previous role, was not actually that useful for small and medium businesses and most of the enterprises in Australia. It just didn't meet their requirements. The customers we were finding success with on that front were the large public cloud providers that wanted to set up shop in Australia, service providers, and companies that didn't have any virtualization in-house and were looking at a green-field solution. Unfortunately, what we discovered was that if you want to use OpenStack and you have an existing solution, you end up with two silos. You have your vSphere, and during the integration process between vSphere and OpenStack, you end up with these two silos, and that means you have to pay double on everything: double on data center costs, double on infrastructure, double on everything. And if you don't have a developer on-site, then if OpenStack doesn't do what you want it to do, it's going to be a problem, right?

So, we pretty quickly realized we had to change our business strategy. Like it says, I'm an open-source geek; like I said, I come from working in research and stuff like that, and most of the OpenStack community is the same. In August 2012, at the Folsom Summit, I sat down and watched the VMware CTO promise the community that they would make VMware a first-class citizen for the OpenStack community. And immediately a light bulb went off in my head: hey, this is actually a great idea. It fits perfectly with the Australian use case. And I realized at that time that it's not really VMware versus OpenStack, right? OpenStack is a framework that can, or should be able to, sit on top of vSphere. It might be vCloud versus OpenStack, but it's definitely not vSphere versus OpenStack.

So, we started to do some work. There was an existing driver in the code base already; it had been produced by a VMware developer by the name of Sean Chen. Unfortunately, it didn't really seem to work at the time, and the Nova network model I was used to deploying in my previous role didn't really seem to apply. So we made a strategic decision: let's work with VMware and get this thing running. And that's what we did. We started working with the OpenStack at VMware team; if Sean and Dan are here, many thanks to them. And we also started working with the Nicira team, a VMware subsidiary now, doing in-kind work on the same thing. The major focus was to show customers and potential customers that they could expose their existing, inertia-causing vSphere deployments, which they'd operationalized around, as an open Infrastructure-as-a-Service cloud, right?
That's a big difference from what they had, where they might have a very highly utilized virtualization solution, but at the end of the day it still took two weeks to get the IT guys to provision a VM, or a month to get a new network created, or any of these things that were just standing in the way. So I guess the core points are: we proved in the last year that it can be done. It's a massive resource efficiency for companies in Australia to be able to expose their existing vSphere and ESXi solutions as an OpenStack cloud. It gives them a technology leapfrog effect, where they get to use the latest technology without having to actually put in a new data center footprint. And that's how I think it should be as an OpenStack geek, right? ESXi and vSphere should just be another hypervisor citizen in the ecosystem. And so that's what we were doing to bring private cloud to Australia.

So we've come a long way since Dan Wendlandt, who is the lead developer on the OpenStack at VMware project, live-patched our code at the last summit; we had a bug that wasn't fixable by anyone but Dan. The code actually works really well now. There is actual documentation; I've got a link up there if you want to have a look. And in the last month or so, we've been taking this as a demonstration to the OpenStack user group to show that it can be done. We were lucky enough to have a Nova core developer show up at the Sydney meetup, and he commented to the entire crowd that he was very impressed with the contribution of the VMware OpenStack team. So if you're curious to see the state of the project, I recommend checking out that Launchpad link we've got there. If you want to see the documentation, it's actually quite good at the moment, and the link is up there as well. We're using Grizzly right now, and we're really excited to start moving to Havana. That slide's horrible, sorry.

So this is the overview of what we've got today. At the very top, and I'm sorry it's illegible, you've got the OpenStack compute scheduler and the OpenStack image service, and those rectangles there are Nova compute nodes, right? In this configuration, you're running them as virtual machines, and as of the latest software you can actually manage multiple vCenter clusters using a single Nova compute node, or you can have multiple Nova compute nodes to manage multiple clusters. The idea is that Nova compute sort of becomes a proxy for the vCenter API.

I've thrown up the nova.conf directly from the documentation link that you saw earlier. This is pretty much all you need to actually set up a vCenter OpenStack deployment at the moment. You specify a host IP for your vCenter server, a username and a password; it's got to be an administrator user. You specify the cluster that you want to schedule to, and the datastore regex lets you specify, in regular-expression format, which datastores you want to provision your virtual machines to. The last thing there is just the VMware SDK WSDL that handles the web services link between Nova compute and vSphere.
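For reference, this is roughly the shape of that nova.conf fragment, based on the Grizzly-era documentation; a minimal sketch, with the addresses, credentials and regex as placeholder values rather than our production settings.

```bash
# Minimal vCenter driver settings for Grizzly-era Nova (all values are placeholders).
cat >> /etc/nova/nova.conf <<'EOF'
compute_driver=vmwareapi.VMwareVCDriver
vmwareapi_host_ip=192.168.0.10           # vCenter server
vmwareapi_host_username=administrator    # must be an administrator user
vmwareapi_host_password=changeme
vmwareapi_cluster_name=Cluster01         # the vCenter cluster to schedule to
vmwareapi_datastore_regex=openstack-.*   # which datastores VMs may be provisioned to
vmwareapi_wsdl_loc=https://192.168.0.10/sdk/vimService.wsdl  # VMware SDK WSDL location
EOF
```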
Are there any questions so far? Am I going too fast, too slow? Everyone's happy, great.

So in terms of making images, we're happy to say that you can use pretty much any image that you're used to using in a public or private cloud today. The one we recommend is the Ubuntu cloud image, and I've thrown up the procedure we use to grab the image, convert it to a VMDK and then make it usable by the OpenStack vSphere deployment. If you're a VMware admin, you're probably familiar with the vmkfstools command; that's just to import the file. The magic happens in the glance image-create command. The two main things you can see are properties that you don't normally use for non-vSphere deployments; they're the bits that specify how the VM should be built in vSphere, right? You can specify the disk adapter, the guest type, which I'm sure you've seen the variants of if you're a vSphere admin, and what the actual back-end image format is. Usually we're using thick-provisioned images.

Right. Yeah, so actually in this case, it doesn't seem to matter, right? If you're using a single VMDK to upload the file, you can specify the container format to be bare or OVF, and the code works the same regardless. You've tried using the container format and specifying OVF, but it seems that doesn't work? Did you run the vmkfstools command before that, step three? You have? Yeah, okay, maybe we can talk afterwards and I can have a look at it. But I mean, that's the procedure we're using for all of our virtual machine images, and they're all working at the moment. Happy to demonstrate at the end if you like.

So I just threw up a couple of glance image-show commands to highlight the different properties. For Windows virtual machines, we're using LSI Logic SAS as the disk controller, and the VMware OS type is the windows7Server64Guest type, which is the generic operating system type for Windows Server 2008 and 2012. And this is our Ubuntu image; you can see that these ones are uploaded with the OVF container type, and they've got a different disk adapter and OS type.
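To make that procedure concrete, here's roughly its shape end to end; a sketch assuming the Precise Ubuntu cloud image and the Grizzly glance client, with file names and the image name as placeholders.

```bash
# Grab the Ubuntu cloud image and convert it to a VMDK.
wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
qemu-img convert -f qcow2 -O vmdk \
    precise-server-cloudimg-amd64-disk1.img precise.vmdk

# On an ESXi host: clone the disk into a thick, lsilogic-addressed VMDK
# so vSphere can boot from it.
vmkfstools -i precise.vmdk -d zeroedthick -a lsilogic precise-thick.vmdk

# Upload to Glance with the vSphere-specific properties discussed above.
glance image-create --name ubuntu-12.04-amd64 \
    --disk-format vmdk --container-format bare --is-public True \
    --property vmware_disktype="preallocated" \
    --property vmware_adaptertype="lsiLogic" \
    --property vmware_ostype="ubuntu64Guest" < precise-thick.vmdk
```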
So in terms of what works right now, this is what we're using in production. For all the OpenStack services except for Nova and Neutron, we're happily using the Ubuntu Cloud Archive without any problems. That covers Keystone, Glance, Horizon, all that kind of stuff; no problems there. The Nova portion of what we're using is provided by the OpenStack VMware API team directly, and you can check out that Git repo as you please. We are not using the Ubuntu Cloud Archive for it because there seem to be some issues there, so if you want to try this out, I recommend you use that Git repo. We are very keen to go back to using the Ubuntu Cloud Archive, and I believe we're actually meeting with Mark Shuttleworth to complain about that, amongst other things. For Nicira, which is the back end we're using for Neutron, we're using Nicira-provided binaries. And I guess the important thing to note is that, compared to a year ago, all the important features that you would expect from an OpenStack cloud work.

So what doesn't work? My main problem as an operator of this solution is that if I lodge a critical bug today, there seems to be a gap between the bug getting reported and fixed and that fix making it into the Ubuntu Cloud Archive. That makes it really difficult for me to use that model, and most of the time I end up having to do some back-porting manually. A very minor feature which also doesn't work is the Nova console log. If you've used the Horizon dashboard, you'll have seen the ability to get your Linux machine's dmesg output; that doesn't work. I raised that issue with Dan Wendlandt yesterday, and hopefully as of the next release there'll be some fixes there.

So I guess this is a big one, but I didn't really specify it as well as I should have. Image interactions between Nova compute, Glance and the VMware datastore are a bit roundabout. Right now you upload an image to Glance, and then when you want to boot that image for the first time, it's downloaded from Glance onto the Nova compute node, and then from Nova compute it's uploaded again to the VMware datastore. The process can be a little bit slow, I've noticed, and it's not good. This is another issue that I raised with Dan Wendlandt yesterday; they recognize it as an issue they've seen as well, and they're working to fix it.

Snapshot semantics: if you snapshot a VM in vanilla OpenStack, it's a snapshot of the primary disk. If you snapshot a VM using VMware, you get every disk. So if you're expecting consistent behavior for things like snapshots across a multi-hypervisor environment, that's something to be aware of. It's probably not an issue for most of you, but it's something that doesn't work.

At the moment, virtual machines unfortunately don't launch in a tenant-associated folder. All the VMs just load up into your vSphere tree, and there's no really useful way to determine who those VMs belong to; you just have a very long list of UUIDs. And like I mentioned before, Nova at the moment is a one-way proxy for vSphere, which basically means that if you have existing machines running in vSphere and you want to manage them using OpenStack, you have to go through an onboarding process to make that happen. Good or bad, depending on how you look at it. The VMware guys have notified me that they're working towards making OpenStack features visible inside of vSphere to make that onboarding process easier, but we're not there yet.

As of Havana, most of these issues have at least some code fixes in, or coming down the pipe. If you want to see the current state of the bugs, I put the link up there; you can see most of them say Fix Committed next to them, but there are a few open ones out there. I've gone too far. So this is a slide titled What's Coming; you can see the title's missing. Basically, at the moment what we're utilizing is the Nova-vSphere and Neutron-NSX stuff. What we'd really like to see happen is the Cinder vSAN and the Glance vCenter mappings; I believe that is a target for the Icehouse OpenStack release, it's definitely been flagged as interesting. I don't actually know what the vCAC Application Director is; my understanding is it's something similar to OpenStack Heat, and they're working on that as well. So are there any questions about this? This is the most interesting slide for me.

Right, so the question was: what is the onboarding process I described in the third-last bullet point there? Basically, you're exporting a running VM as an OVF, then using this procedure to upload it into Glance and boot it again. That's the way it is for now. And like I said, the VMware guys are working towards making the thing less one-way, so that VMs that already exist inside vSphere can be onboarded much more easily.
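If it helps, that onboarding dance looks roughly like this; a hedged sketch using VMware's ovftool, where the vCenter inventory path, VM name, file names and guest type are entirely hypothetical.

```bash
# Export an existing VM from vCenter as an OVF package (ovftool is VMware's CLI).
ovftool 'vi://administrator@vcenter.example.com/DC1/vm/legacy-app' \
    /tmp/export/legacy-app.ovf

# OVF exports ship a streamOptimized VMDK; push that disk into Glance,
# after which the VM can be booted (onboarded) through OpenStack.
glance image-create --name legacy-app \
    --disk-format vmdk --container-format bare \
    --property vmware_disktype="streamOptimized" \
    --property vmware_adaptertype="lsiLogic" \
    --property vmware_ostype="otherGuest64" \
    < /tmp/export/legacy-app-disk1.vmdk
```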
So like I said, we've come a long way. The internal use case today is that we are deployed in production. All of our ops guys who have been using vSphere for the last four or five years are happy. They don't lose sleep at night worrying about vSphere HA or whatever; it all works, and they're happy. We are using this service right now to offer what I would call unique solutions to our customers.

So we actually offer boutique, I guess you would say, solutions to customers around big data and digital media. The customer calls us up and says, we need something yesterday, or we need something very intricate or tricky. Then we get the best of both worlds, right? The best of the vSphere world and the best of the OpenStack world: API calls and a utility model on the front end, and on the back end all the HA and stuff that we're used to. And it's really easy for us to do things now like scaling out with Heat, compared to attempting to do that on plain vSphere, which is just hell. So these days, when a customer asks us for a website, we're not offering them a regular all-in-one LAMP stack; we're offering them an auto-healing, auto-scaling web service, right? They just upload their PHP code and the PaaS does the rest. And that's allowed us to do things like explore multi-hypervisor and multi-region configurations, which is something we're in the middle of right now. I guess from the perspective of VMware, it makes a lot of sense to support OpenStack. And from our perspective, it allows us to tell our customers that they don't depend on vSphere like they used to, right? If they want to start with a vSphere OpenStack installation and then pull the workload out onto KVM or something like that, we can definitely use this kind of configuration to achieve those use cases.

So on the external front, we are no longer going to customers and simply saying, do you want OpenStack? The question is now: do you want private cloud? Do you want OpenStack instead of vCloud, right? And we can offer OpenStack to you very low-impact: we can come in, spin up just a handful of virtual machines, and you have OpenStack, done. Very different from where we were a year ago, when we were going to customers and saying, if you want OpenStack, we have to put in eight to ten physical machines, three of which are going to be a control plane and the other seven of which are going to be hypervisor nodes. And our customers are really responding to this, right? Because they want the features that OpenStack gives them. They want to be able to offer their customers a utility model. They want their virtual machines to boot now, not in two weeks, right? Same with networks. And I guess the last point is that using OpenStack gives customers a platform to change their existing application infrastructure. It's no longer one stateful VM; they can actually break it out into multiple stateless VMs in an active-active kind of application architecture.

No, but that's a really good question. We haven't actually attempted to target vCloud directly; we're trying to move customers who are on vSphere up the stack, right? So no, we haven't had any customers come from vCloud to OpenStack yet. Do you have a...? All right, so the question was: have you had any customers come off vCloud onto OpenStack? And the answer is no.

And I think that's actually it, guys. I apparently talk quicker than I anticipated. So if you have any questions, feel free to shout them out. You've got one here? So you bring up the cloud environment on the VMware infrastructure? That's right. OK, so I assume this is a very trimmed-down OpenStack? No, it's completely full-featured. I would say it's more full-featured than the cloud that I built at my previous employer, because it has features like network virtualization, which is something we were not doing previously. So that's compute nodes and everything? Yeah, absolutely.
I'm happy to demonstrate it; if anyone wants to see what it looks like, I'm happy to demonstrate it. The only reason I didn't is that you can actually go down to the VMware booth and see it.

Great presentation, Sina. Just on storage, I saw Scott Lowe's demo earlier as well. Cinder is mentioned as a storage platform, but are there other options with, say, Ceph or others? Right, so at the moment, like the majority of our OpenStack infrastructure, we've just spun up a virtual machine to handle the various Cinder components, and that can scale out horizontally like Cinder normally would. The back end is up to you, right? This is a roadmap of what you might see in a year's time for using Cinder with a vSAN back end, but today you can spin up Cinder and back it with anything you like: Gluster, Ceph, NFS, NetApp.

But the specific interconnect from vSphere into the storage is only supporting Cinder at this time, as I understand it? I'm not sure I understand the question. So at this layer, there's no interconnect between vSphere and the storage layer, right? You just spin up a virtual machine running the Cinder volume, Cinder scheduler and Cinder API services, and you specify the driver, which is whatever you like as the back end. Well, I'm going off Scott Lowe's presentation earlier, where he showed us from within the web client; basically, from what I could understand, the storage is actually fully managed by VMware, and there's a plug-in through to Cinder that's almost just sending the messages straight through into VMware from Cinder. So that would probably be a new feature that they haven't released publicly yet, but my guess is that's just the VMFS driver for Cinder. They'd be doing the same thing, right? They just spin up a VM, and the driver, instead of being Ceph or Gluster, is VMFS, and it's just got some configuration options like the password and the host name associated with it. That's all it is.
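To illustrate that last point, the back end really is just a driver setting; a hypothetical cinder.conf fragment for, say, an NFS back end might look like this (the shares file path and driver choice are assumptions, not our production config).

```bash
# Point cinder-volume at an NFS back end (Grizzly-era driver path).
cat >> /etc/cinder/cinder.conf <<'EOF'
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfs_shares   # one "host:/export" entry per line
EOF
```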
Questions? Everybody is enlightened, great. I would say something like 30% of our overall business is now coming from this, which is very different from where we were a year ago, when it was 0%, right? So, yeah, I mean, it's opened up the OpenStack, or rather the private cloud, market for us in Australia, where before it was a matter of convincing people to go to a green-field installation. We no longer have to do that. Across industries, right? But mostly still at the service provider level, that's what I'm saying. It really is across industries, though: anyone who has a vSphere installation and compute requirements that aren't being met by the more traditional vSphere model is definitely finding interest in it. A lot of our customers, like digital media and advertising agencies, want the server yesterday, no contract, just give us a server now and we'll pay you in a week. They really love stuff like this.

And in this environment, who are the users actually managing the OpenStack infrastructure? Is it the VI admin, or the different teams that are out there? Sure. So we actually provide our Puppet modules to handle the management, and they take care of almost everything, right? The only issue you might have is if there's an outage with an OpenStack service, you might have to log into one of those virtual machines and restart the services. But again, you can use Puppet to manage that: if you're willing to wait for a 30-minute cycle, Puppet will restart the service for you within 30 minutes, guaranteed.

Thank you. No more questions? Does anyone want to see a demonstration? Okay. Oh no, that's the wrong cluster, sorry guys. I hope the internet works. Great. So this is the demonstration that I gave a week or two ago in Melbourne at the OpenStack user group there. Basically, what I'm going to attempt to show you is how quickly you can stand up a regular distributed LAMP-style architecture with a rich network topology behind it, and then, if you like, we can happily go into the vSphere back end and see what it looks like.

So the first thing I'm going to do is create a couple of networks. Usually I create a management network first. In this case, we're making the management network specifically to use the OpenStack metadata service; the metadata service requires an unrouted network to function, so that's why I'm making this with the Disable Gateway checkbox ticked. You can obviously use this network for more than one purpose: Puppet, Chef, whatever can sit on the same network. I'm just using Google public DNS on the back end. Just a quick show of hands: how many people have actually seen this OpenStack dashboard before, what it looks like?

So this is going to be the application network. As you can see, I'm just making the /24 subnet. This one will have a gateway specified, and the same DNS, because it's cloudy and cool to use Google public DNS. If we go to the network topology interface, you can see there are three networks, whereas I just created two: that blue one at the front is actually an administrator-provided pool of public IP addresses, and the networks I created are internal subnets. The next step is to create a router, the application router. I'm going to attach this to the application network, and at the same time I'm going to set its gateway to be that public network I just showed you.

The next step is access and security. If you're not aware of what OpenStack security groups are, they're basically layer 3/4 firewall rules that you can use to secure your VMs without actually logging into them. So I'm just going to create a few security groups here. This one is going to be SSH, and I'll specify a rule like so, so that I can SSH into any computer that has this security group. Another one for web, and this one will have 80 and 443. You can see that I'm actually building the entire architecture of the application before I've even created a single virtual machine. This one's going to be database, and the difference between this one and all the others is that I'm just going to allow 3306, and it's only going to be accessible from the application network. The last access and security thing I need to do is import my public key. I'm just grabbing my public key from my home directory and importing it into OpenStack. What this means is that VMs I boot with this key pair associated will have the key injected, and that will allow me to SSH into them without requiring a password.
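For anyone who prefers the command line to the dashboard, here's approximately the same build-out using the Grizzly-era quantum and nova clients, including the boot and floating-IP steps I walk through next; the CIDRs, names, IDs and addresses are all placeholders, so treat it as a sketch rather than our exact commands.

```bash
# Management network: no gateway, so the unrouted metadata service works.
quantum net-create management
quantum subnet-create --name mgmt-subnet --no-gateway \
    management 10.1.0.0/24 --dns_nameservers list=true 8.8.8.8

# Application network, this time with a gateway on the /24.
quantum net-create application
quantum subnet-create --name app-subnet \
    application 10.2.0.0/24 --dns_nameservers list=true 8.8.8.8

# Router: attach the application subnet, set the gateway to the public pool.
quantum router-create app-router
quantum router-interface-add app-router app-subnet
quantum router-gateway-set app-router <public-net-id>

# Security groups: SSH from anywhere, web on 80/443, MySQL only from the app net.
nova secgroup-create ssh "SSH access"
nova secgroup-add-rule ssh tcp 22 22 0.0.0.0/0
nova secgroup-create web "web traffic"
nova secgroup-add-rule web tcp 80 80 0.0.0.0/0
nova secgroup-add-rule web tcp 443 443 0.0.0.0/0
nova secgroup-create database "MySQL"
nova secgroup-add-rule database tcp 3306 3306 10.2.0.0/24

# Import a public key for password-less SSH.
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey

# Boot the jump box on both networks, with a script to bring up eth1.
nova boot --flavor m1.small --image ubuntu-12.04-amd64 \
    --key-name mykey --security-groups ssh \
    --nic net-id=<mgmt-net-id> --nic net-id=<app-net-id> \
    --user-data bring-up-eth1.sh jumpbox

# Allocate a floating IP from the public pool and attach it to the jump box.
nova floating-ip-create public
nova add-floating-ip jumpbox 203.0.113.194
```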
So what we have here is the launch instance dialogue. You can pick your availability zone; we only have one right now. The first VM I usually like to spin up is a jump box. This will allow me to access the other VMs in the network without having to assign public IP addresses to all of them. So that's the Ubuntu VMDK. I'm going to give it the SSH security group, and I'm going to log in with my key pair. From a networking perspective, NIC zero is going to be the management network, and the application network will be second. And I'm just going to specify a small script in here to bring up the second network interface, because these Ubuntu cloud images don't actually have a second interface up by default.

Now, while this is spinning up, I'm just going to go back to the network topology and show you what it looks like now. You can see I've got this jump box instance attached to two networks. The application network is also attached to the application router, which is passing traffic out over the public network. And I can hover over any of these individual components to get a little bit more information or interact with them. It's booted now, if my VNC proxy hasn't died, which it might have. That's okay. Now, to access this machine, because it's running on an internal network, I'm going to assign a floating IP to it from that public pool you saw before. I'm just asking for an IP from that pool, and I'm going to assign it on the public network. And there you go. So now I should be able to, so that's .194, it says, I should be able to SSH in as ubuntu. Yes. You see a little delay here; that's actually Open vSwitch setting up all the tunnels. Obviously, the first thing you want to do is run an update. So that's my jump box machine.

And then I can go around spinning up the rest of the machines that I'll need. So I might need, say, four web servers. These will have a different security group and a similar networking configuration. And I can do something like this, right? apt-get install apache2-mpm-worker as a post-creation script, so that I don't even have to go and find the packages; it'll just do it for me. And you can see it's just going to go and network them and start spinning them up.

Sorry? Yeah, so the question was, what does that look like in vSphere? I'm just going to do a remote desktop into my internal network so I can show you, once again hoping the internet doesn't die. If we go back to that dashboard quickly, we can see the UUID of this one starts with DFFFAD0C, so that's the machine I'm going to look for. So that's my OpenStack infrastructure there, from here up to here: I've got a couple of Nova API nodes, Cinder, Glance, Keystone, et cetera, et cetera. And these are all my OpenStack VMs down here, and that's the one we were just looking at. If you look in the activity log, you can actually see all the back-end stuff that's happening to spin up those virtual machines. And if we go down to one of these virtual machines and hit edit settings, you can see that it's booted up with the parameters I specified: it has four gigs of RAM, two CPUs with two cores each, it's using the LSI Logic disk controller, and the hard disk is 40 gigs, as I specified.

So that's, I guess, the demonstration of how it all works. Any questions now? Everyone's happy? You all seem very happy. Yeah, nothing broken, thank you. Thanks, guys.