OK, brilliant. Fantastic. Good morning, everyone. I'm Chris Murray, a technical specialist at Rakuten, specifically in Europe. So this is a VMware-sponsored session, but this is my customer story. This is how we did VMware Integrated OpenStack. And it's more than that; it's more generally how we use OpenStack, the features we use, and what we've done differently to make it work on our vSphere environment. So it is sponsored, but it's my honest point of view. I'm not going to be here doing a marketing pitch.

OK, so Rakuten. Rakuten are a Tokyo-based e-commerce and internet services company, very big in Japan, less well known in Europe at the moment. That's something we're trying to change. The Rakuten ecosystem is really our core business. We're known for our e-commerce, but obviously there are a lot of other services that we offer around that. It's really about providing a set of services for our customers, everything from the e-commerce side to e-banking, media streaming, e-books, et cetera. To give you an idea of the scale of the company, we have 110 million registered users in Japan now and a gross transaction volume of around 7.1 trillion Japanese yen, so about 62 billion euros. We've expanded quite aggressively globally over the last few years, and there are a few names there that hopefully you'll recognize. My department was acquired as part of the Play.com acquisition in the UK, and we've now taken that infrastructure and have been providing a private cloud for European use. We do have data centers globally, obviously, some acquired through the acquisitions, but also some more strategic. And the data center we're running in Luxembourg is a great location for a strategic data center for the European market.

As an ops team, it's generally our role to provide the tools, support, and services so that our development teams can concentrate on delivering the business logic that drives our customer satisfaction. So after the acquisition, we took our vSphere platform and looked at how we could make it multi-tenant. The choice at that point was vCloud Director. We deployed vCloud Director and provided a multi-tenant data center that was relatively easy for our other companies to consume. That's about 840 VMs at the moment, so it's not huge, across about 36 different business units within Europe.

Now, that platform was OK. It did the job at the time, but the world has moved on a lot in the last few years. We're much more cloud-aware now, we're aware of what the developers want, and we needed to replace the platform. vCloud Director is also end of life for enterprise customers, although still available for service providers, so that was one big push. We also wanted to move to the next generation of SDN software. We were doing virtualized routers and virtualized networks with VXLAN in that environment, but we wanted to take it to the next level. We wanted to move to things like security groups rather than traditional enterprise firewall rules. Secure multi-tenancy was obviously still a key requirement. We're still acquiring companies, and if we can onboard them onto our data center, that helps with the onboarding process for new companies and also with interoperability between those projects. And we still have pets. There's no way of avoiding it at the moment. We all want to move towards these new cattle-style, cloud-aware applications, but we still have lots of pets.
So whatever we did had to be able to support both of those: the legacy or existing workloads, as well as the next generation of apps as we move forward. Some of the feedback we got about our vCloud Director environment was that it worked fine. You could use the GUI, you could deploy your instances. But it wasn't really enabling our developers to develop clever, intelligent workloads on there. They couldn't autoscale. They couldn't use the APIs to define a blueprint for an entire application and deploy that. So we needed something that was more open, that people could use with tools like Terraform and Chef and all these wonderful tools beyond just the VMware ecosystem. And crucially, we had to do it quickly. This wasn't something that we needed in two or three years' time. This was a problem we needed to solve now, really yesterday. So it was crucial that whatever we deployed had to be in, running, and production-ready as quickly as possible.

So I spoke about pets and cattle. I'm sure it's been mentioned quite a few times this week. We've got quite a lot of this middle workload: my automated pets. We've scripted them out. We're using Chef, we're using Ansible, we're using all sorts around the company, Puppet, a lot. I think we've got every possible tool deployed. But we're not fully cloud-ready for a lot of our applications yet.

So OpenStack. We needed a new cloud platform, and OpenStack, obviously, is very widely known. It's probably the leader now in private cloud technology; there isn't really much else to compete with it. On top of that, we've made a company commitment globally. Neil Sato and Kintari Sasaki stood up at the Tokyo summit and declared that, as a company, we were starting on our OpenStack path. And that's great. We have a few teams globally working on OpenStack in various different flavors with different requirements. We were the same: we couldn't directly use one of the platforms that they were already working on, mainly down to how aggressively quickly we needed to deploy this.

So we went about understanding OpenStack. Obviously, there are the Summit videos. We used Linux Academy as well, as a way of getting some structured learning around understanding what OpenStack was, telling my Nova from my Neutron, and then lots and lots of reading. And I've joked here about the link rabbit hole. As soon as you start reading about OpenStack, you find a great document, and it's got a link going somewhere else, and then somewhere else, and somewhere else. I had days when I'd have literally 100 tabs open on my machine for all these things I had to read. It's great that all that information is out there. We also attended some meetups. The London OpenStack meetup was a great event to meet some new people and find out what other people are doing. And now we're OpenStack Summit attendees.

We looked at the OpenStack components and tried to identify which ones were crucial to us. Obviously, there's some silly number of OpenStack projects now, but really it came down to the real core components for us. We didn't need all the bells and whistles that some of the projects are offering right now. So Nova, Neutron, Cinder, Glance, Keystone, obviously, and Horizon: they were the real ones for us. And on top of that, we identified Ceilometer and Heat as some of the ones that we wanted to use as well.
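To make those core services a little more concrete, here is a minimal sketch of consuming them with the OpenStack Python SDK (openstacksdk). The cloud name is purely illustrative and would map to an entry in a clouds.yaml file pointing at the Keystone endpoint.

```python
# Minimal sketch with openstacksdk; "rakuten-eu" is an illustrative
# clouds.yaml entry holding the Keystone endpoint and credentials.
import openstack

conn = openstack.connect(cloud="rakuten-eu")

# Nova: compute instances in the current project
for server in conn.compute.servers():
    print(server.name, server.status)

# Neutron: tenant networks
for network in conn.network.networks():
    print(network.name)

# Glance: available base images
for image in conn.image.images():
    print(image.name, image.disk_format)
```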
We have mixed developer workloads: some that are keen to use the APIs and go straight in at that level, some that want to use CLI tools or Terraform. But we also wanted to provide some level of automation tooling within the platform, if possible, and obviously Heat provides that orchestration. Longer term, we'd love to look at things like Trove to support Database as a Service, and Designate for DNS. There's also some work we need to do around billing, making our billing a lot more automated than it currently is. And obviously container support; who doesn't want container support at the moment?

So we identified the OpenStack components, and then we looked at what skills we had in-house, again bearing in mind the aggressive timescale we had to get this deployed. And really, the obvious choice for us was to use our existing vCenter platform, our existing ESXi. These are tools that we've been using in production for seven-plus years. We're very confident in using them. We know when they go wrong, if they go wrong, we know how to troubleshoot them. That was really key for us. And there are a couple of extra tools on there: vRealize Operations for monitoring, and vRealize Log Insight, which is a syslog collector (it does a lot more, but essentially a syslog collector).

Having identified those, we knew that was the way we wanted to go. Now, VMware provides, and this is all open source, this isn't anything special we had to buy from VMware or anywhere else, the drivers for Nova, Neutron, Cinder, and Glance so that you can take your OpenStack deployment, any OpenStack deployment, and use ESXi and vCenter underneath. So that's all there; anybody can use it. And that made our life easier, obviously. Then we looked at the VMware Integrated OpenStack offering. Really, what this gave us was a way of wrapping all that up into one easy-to-deploy platform. It's still using the same components that we could have deployed manually, but they've done all the hard work. They've written all the Ansible playbooks to deploy this and to configure all the services per best practice: the security, encrypting all the credentials, making it HA, as well as providing the ability to do backups and restores of the platform. So this was a no-brainer, really, with the speed to market that we needed.

So where does this sit within our environment? We knew we wanted the public cloud style of consumption for our developers, so it was crucial that we made it accessible and that it had public IPs. But equally, we're an enterprise. We care about security. It had to be secure. So we presented our Horizon and our API endpoints behind our firewall, but on a non-NATed IP address. So it appears public, but we actually block all external access at the moment. The only time we'll open this up is when we've got, say, acquisitions of new companies that aren't yet fully integrated onto our backbone networks, and then obviously we can expose or trust their IP addresses to allow them access to the OpenStack platform. Equally, we've got an un-NATed public subnet that we present into our tenants to allow them to have floating IPs, and they see the actual public floating IPs that they have. But crucially, as an enterprise, because it's behind that firewall, we can still apply the security restrictions and policies that we like. So if we don't want to allow any SSH into the environment on those floating IPs, we can control that on our firewall.
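From the tenant side, that model looks like standard OpenStack. A rough sketch, with the server and security group names purely illustrative: a floating IP is attached and a port opened via a security group, while anything we block on the perimeter firewall stays blocked regardless.

```python
# Sketch: a tenant attaches a floating IP and opens HTTPS via a
# security group. Names ("web-01", "web") are illustrative.
import openstack

conn = openstack.connect(cloud="rakuten-eu")

# Allocate a floating IP from the public network and attach it.
server = conn.get_server("web-01")
conn.add_auto_ip(server)

# Allow HTTPS in through the security group; SSH from outside can
# still be blocked upstream on the perimeter firewall.
sg = conn.network.find_security_group("web")
conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",
    protocol="tcp",
    port_range_min=443,
    port_range_max=443,
    remote_ip_prefix="0.0.0.0/0",
)
```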
From a project point of view, it's great to be able to have these isolated projects for tenants. But as an enterprise, they need to be able to talk to other things. So we've provided a mechanism where we can deploy another small virtualized firewall using NSX and connect that to an existing VXLAN that we provision in the tenant, and from there we can route the traffic out to our core network. As the cloud provider, we've got control over what traffic is allowed in and out of that tenant. So again, we retain control of our production network, our core networks, but we allow the tenants an amount of autonomy within their project without generating too many risks.

OK, so let me talk through some of the services and some of the decisions that we've made. Keystone; you can't really avoid Keystone. We decided not to go with Active Directory, for a few reasons. One, our Active Directory is quite large. I think we're at about 20,000 users at the moment. And because of the way we're doing quite a few migrations of data around, the scoping of AD for Keystone made it not that performant. It caused us some issues, especially around using Horizon. The other thing was that we wanted to be able to quickly provision service accounts. By keeping authentication standalone, we could provision service accounts for projects, for Terraform users, for example, without having to go through the policies for creating new Active Directory accounts in our environment. We also set every user up with their own user project. This gives them an area where they can play around with OpenStack that's separate from the projects they work on. It also means that, because it's the default project for all of our users, if they want to use Terraform, their default is to use their user project, and they can't break anything or break an existing environment. If they know they need to work in a specific project, they can override that and set the project within their CLI, their Terraform, or whatever tool. And we're just using standard roles within Keystone: just member and admin.

From a Nova point of view, we identified quite early on that we needed a way for tenants to define applications across multiple availability zones. If they're deploying something like RabbitMQ, they need to be able to say: these three nodes have to remain separate. There are tools within VMware to do that historically, with affinity and anti-affinity rules, and they are exposed into OpenStack; you can use those. But we wanted to make sure that whatever we did on this cloud platform was interoperable. We weren't doing anything special because it was VMware. So we decided to go with three availability zones for a start. We're using a single region at the moment. There may be scope in the future to expand that, maybe adding a second region in a second data center for DR workloads, but at the moment we've concentrated on the primary region. All our instances use persistent storage by default. Because we're using SAN for all of our storage, it made sense to just have everything persistent. We still have a lot of these pets, and we didn't want any mistakes of people accidentally losing their storage.
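As a rough sketch of how a tenant consumes those availability zones when deploying something like that three-node RabbitMQ cluster (image, flavor, network, and zone names are all illustrative):

```python
# Sketch: one RabbitMQ node per availability zone, so the cluster
# survives the loss of any single zone. Names are illustrative.
import openstack

conn = openstack.connect(cloud="rakuten-eu")

image = conn.image.find_image("ubuntu-16.04-base")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("app-tier")

for i, az in enumerate(("az1", "az2", "az3"), start=1):
    conn.compute.create_server(
        name="rabbitmq-{}".format(i),
        image_id=image.id,
        flavor_id=flavor.id,
        availability_zone=az,
        networks=[{"uuid": network.id}],
    )
```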
As a future option, we've talked about potentially changing the way our clusters are designed: still having three availability zones, but behind the scenes actually having more clusters in vCenter with a sort of tiered approach. So we might have our high-end hardware, HP or whatever, at a higher price point but more suitable for our pet workloads, and then offer a lower tier of hardware, less resilient and a lot cheaper, so effectively we can pass that cost saving on to the tenant as well. And that we can do: we can flag the flavors as either a cloud flavor or an enterprise flavor, as we discussed, and those tags can flow through and effectively help decide which cluster in vCenter an instance lands on. That is a future thing.

Moving on. So it's possible to run VMware Integrated OpenStack with just flat networks, not using an SDN. But for us, it was crucial to have the SDN features. We already had them in our vCloud platform. We allowed our tenants to create their own multi-tier applications, their own networks, their own routers. But we wanted to move to the next level, and NSX has given us that. Obviously, it does VXLAN behind the scenes, which is what we're used to, but there are a lot of optimizations on the VXLAN side, from having multiple VTEP endpoints per host through to the simplicity of deployment, with no requirements on your physical environment. So we're not doing the IGMP reports or anything like that anymore; it's pure unicast in our environment. We also wanted to take advantage of distributed routing, so we can optimize the east-west traffic within the data center. And Load Balancing as a Service was also something we wanted to offer our tenants. Some tenants or projects are using Load Balancing as a Service; some are bringing their own load balancers, with HAProxy or a Brocade or whatever, and running them as software instances. Then, in terms of the connectivity back to the rest of the data center, we provide this sort of peering network. It's created in OpenStack as a VXLAN, and it's then that we go behind the scenes and add our NSX Edge to allow the routing into our back-end network. Now, I've got a rather complicated slide there to show how that works. I'm not going to talk through it now, but if anybody wants to know more details of how we're doing that, how we're using BGP and ECMP to distribute those routes into our core, please just speak to me afterwards and we can go through it.

Cinder. I mentioned earlier that we're using SANs to back all our storage at the moment. It's all Fibre Channel connected, and we're actually using Pure Storage, so it's all flash in this environment. Now, we wanted to, again, provide our developers with a way of saying these machines shouldn't sit on the same storage. At the moment, we only have one storage provider presented into this platform, but we've set the scene in terms of having a separation by having storage zones. The storage zones are effectively volume types that you choose when you deploy your Cinder volume. That maps through from a volume type to a storage policy within the vCenter environment, which in turn maps to tags on specific datastores. So we allow the tenant to choose that, and that makes sure that two of their machines, if they're in a cluster, don't end up on the same storage. Going forward, we want to extend that. We want to potentially offer different tiers of storage, as well as making sure those are more fully separated onto different storage platforms.
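To give a flavour of how a tenant picks a storage zone, a short sketch; the volume type names are illustrative.

```python
# Sketch: choosing storage zones via Cinder volume types, so two
# cluster members can't land on the same backing datastore.
import openstack

conn = openstack.connect(cloud="rakuten-eu")

conn.block_storage.create_volume(
    name="db-node-a", size=50, volume_type="storage-zone-1")
conn.block_storage.create_volume(
    name="db-node-b", size=50, volume_type="storage-zone-2")
# Each volume type maps to a vCenter storage policy, which in turn
# maps to tags on specific datastores.
```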
The storage tags within vCenter are not one-to-one; you can have one-to-many. So we can have, for example, SSD as one tag and storage zone one as another tag, and then set up the storage policies to map multiple tags and really give quite granular control to our tenants. One other nice thing about the way the presentation of storage into Cinder works within a vCenter environment is that it's abstracted away at a lower level. So whether you've got Fibre Channel connected or NFS storage, whether you've got Pure Storage or HP or vSAN or whatever storage you want to use, it's very easy to add those in, because they're all presented as datastores, and that's all presented to VIO in a consistent manner. So there are no changes in Cinder when adding a new storage provider.

So, Glance; I don't have much time. We're using base images that we're building with Packer. At the moment, these are OVA-based, but we're looking to switch to a VMDK-based standard image just because it gives us a few more features. It is possible to import QCOW2 and raw images as well, and that's as simple as, in Horizon, adding a URL and saying: import this image for me. In reality, we've had some mixed results with that. Certainly, we haven't had the consistency that we'd like. And because this is going to be a production platform, we need our images to be 100% rock solid. They need to be fully optimized, they need to have VMware Tools, they need the VMXNET3 network driver and all the best practices for our images. So that's why we've chosen to go down the Packer route and build those images. Going forward, we're looking at potentially offering some standard images, but then also some prepackaged applications, a MySQL or a RabbitMQ as a standard image. We might not publish those to everybody, but use the image sharing feature so that we can specifically target individual projects with those images.

Interoperability. Obviously, that's been a big topic this week, and it's been a big topic for us. It is crucial that whatever we do here can be picked up and put on another OpenStack platform, so a developer can use the same Terraform recipes; the same everything, end to end, has to be identical. We have teams in Japan, in the US, and other teams in Europe also looking at OpenStack, and we don't want to have anything that is specific to us. So VMware's work to get DefCore compliant was obviously crucial in this. And we've seen in yesterday's keynote that that interoperability is really becoming something that's there now, something we can rely on, which is fantastic. Now, we didn't want to just fully trust VMware on this. We actually engaged Stack Evolution, as independent OpenStack experts, and had them have a look and make sure that we were doing things in the right way. We wanted somebody who didn't know anything about VMware to come in and say: yes, you're using OpenStack like an OpenStack engineer would use it. And we were. But there were a few inconsistencies. I've already mentioned image formats: QCOW2 and raw are supported, but realistically we're going to build our own images just so we can get far more reliable images. Like I said, the drivers and VMware Tools are key. The other big inconsistency for us is around the console log, which isn't currently there. It used to be available in vSphere years ago, and it was, I believe, taken out for security reasons. So we're now badgering the guys at VMware to try and get that feature back in and exposed through to VIO.
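For reference, this is the standard Nova call we'd like to see working; a sketch, with the server name illustrative, and on our platform today it doesn't return anything useful because the feature isn't exposed.

```python
# Sketch: the standard Nova console-log call. On VIO today this
# isn't much use, since the vSphere driver doesn't currently
# surface the console log.
import openstack

conn = openstack.connect(cloud="rakuten-eu")

server = conn.compute.find_server("web-01")
output = conn.compute.get_server_console_output(server, length=50)
print(output.get("output", ""))
```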
I can't really talk about VIO and the VMware platforms without at least touching on day-two operations. We did have to get it in at speed, but that speed to get it in and into production also meant that our day-two operations had to be right. We had to be ready to manage this once it was in use. Again, we've used our existing skills, and the existing tools that we have, vRealize Operations and Log Insight, to manage that environment. And it's fully aware of OpenStack and VIO. Again, this is independent of VIO; you can use these tools with your own OpenStack-on-VMware platform. There's also a management box that you deploy initially when you deploy VIO, and as part of that there's a set of CLI tools that you can use to manage backing up the platform, restoring the platform, some maintenance tasks around Cinder, and upgrades and patching. Now, I keep asking, but I've yet to find another provider that can do upgrades as seamlessly as VMware can with their blue-green approach. It's really quite amazing what they've done.

So, in terms of futures, what I'd like to see from OpenStack: obviously, there is a policy file in place for roles, and you can go in there and create your own custom roles with different permissions. But I'd really like to see that come on a bit more in the future, really get some granular controls and some different roles that we can create. Maybe we want a read-only customer, or maybe we want a user that can only manage the networks and another that can only manage the instances. It'd be really great to get that in the future. And really, we just want OpenStack to continue maturing some of the great projects that exist now but maybe aren't quite ready for us. Then from VMware: I've mentioned the console log, and I really want that console log. Generally, I just want them to keep doing what they're doing, bringing their enterprise-grade approach and extending it, maybe to a container offering and Designate. And just a little thing: I'd really like storage cluster support; I'm missing that from vCenter.

And what's next for us? We're going to be looking to upgrade to VIO 2.5. VIO 3 is out now, but we're going to hold off on that for the moment. There are a few really nice features that we're excited about in 2.5, so we're keen to get on that. The problem is, my tenants are so pleased with their proof-of-concept environment that they're using it, and they keep putting more in there, and I need to move them onto our production platform. Until I've done that, I haven't got an environment that I can play around with at my leisure. We're also going to be looking at billing improvements. In our vCloud environment, we used to bill on a sort of quota basis, based on what we allocated to tenants. That's out of date now. People want to be billed on what they use, on a per-hour basis, so we're looking to move to that model. Right now, we're using the stats from Nova so that we can get the memory hours and the CPU hours and do some billing on that. But really, we need to look at the next stage, especially if we start looking at different tiers of storage and compute with different costs; we need a platform that can do that. There is a VMware product that does that, which we'll be looking at, and there are also other products within the community that we'll be looking into.
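As a sketch of the kind of data that gives us, this queries Nova's simple-tenant-usage API through the SDK's raw HTTP interface; the date range is illustrative.

```python
# Sketch: per-project vCPU-hours and memory-hours from Nova's
# os-simple-tenant-usage API for a billing period.
import openstack

conn = openstack.connect(cloud="rakuten-eu")

resp = conn.compute.get(
    "/os-simple-tenant-usage",
    params={"start": "2016-10-01T00:00:00",
            "end": "2016-11-01T00:00:00"},
)
for usage in resp.json()["tenant_usages"]:
    print(usage["tenant_id"],
          round(usage["total_vcpus_usage"], 1),      # vCPU-hours
          round(usage["total_memory_mb_usage"], 1))  # MB-hours
```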
And then there's the automation of some common tasks. The network connectivity between our projects and our core back end is currently a manual process: we manually deploy separate NSX Edges and do the configuration of BGP and ECMP, et cetera. We want to automate that, and there's an Ansible playbook for NSX that we're going to be looking at to automate that process and make the onboarding of new tenants a lot more efficient. We're also looking at Rundeck for the automation of some of the administrative tasks. So whether that's a new user coming on and requesting access to the platform, we want to use Rundeck in conjunction with JIRA: the user raises a ticket, it goes through the JIRA approval process, and that triggers a workflow in Rundeck to create the user for them, so we don't actually have to touch the platform or do anything for them.

And that's it. I don't know how I'm doing on time. Pretty close. So if there are any questions, I'll be pleased to take them. I'm around today if anybody's got anything, and my contact details are there. So if you're thinking of VIO, or if you're looking at maybe not using VIO but you want to use the vSphere platform and OpenStack, please do get in contact. I'd be pleased to have conversations with anybody and everybody and share more of what we've done. Thank you very much. Are there any questions at this point? Yes?

OK, so the question is around whether we're using Terraform or Heat. Generally, it depends on the development teams. Because we've got so many different development teams working on different projects within Europe and globally, some of them have chosen to go down the route of using Terraform, some of them are already using Heat, and some are using the API directly. From an ops point of view, we want to be able to support whatever they want to use. Whatever toolset they want to use, we should be able to support it. That's a tough task. In terms of the way we direct people during onboarding, it's a real mix. It's a case of talking to them, finding out what their skill sets are and what they're trying to achieve. One of the things with Terraform is state management: if you've got a large team, you need to share that state. Now, HashiCorp provide Atlas to give you that shared state. But if you don't want to run Atlas and just want to run Terraform on its own, then Heat gives you that all in one and allows you to hold that sort of state for your application. But we still have a lot of work to do on Heat to try and work out the best way of using it. We've got a team at the moment trying to use the callback feature, the wait condition in Heat, so that they can be notified when an instance is fully up and running, and really build some quite intelligent stuff on top of Heat. So yeah, it really depends on the development teams and what they want to use. Thank you. Any other questions? Yes?

I see you mentioned that you're using vRealize Operations for your operations. So my question would be: how do you monitor the OpenStack layer with it?

OK. So vRealize Operations natively has management packs for the vSphere environment and the NSX environment. On top of that, it integrates with Hyperic, and there's a Hyperic agent that gets deployed within the OpenStack management plane. This is another thing that is managed by the VIO CLI.
So you provide the agent and the configuration, and it goes out, deploys it across the entire OpenStack estate, and then configures all the monitoring for your Ceilometer services, your Nova and Neutron services, and pulls all that information through Hyperic and up into vRealize Operations. So in vRealize Operations you get full visibility, from all of your services on the management plane right the way down to your datastores, your actual instances, your hosts, the whole lot. It's all collated. So it's a really well-integrated system.

Have you been experimenting with endpoint operations? That's the new way, beyond Hyperic.

So the new endpoint monitoring removes the need for Hyperic going forward, and that's something we're working with the guys at VMware to understand. I think in VIO 3 they're going to be using those agents with the OpenStack management pack in vRealize Operations. It's just because we're on an older version that we're still using the Hyperic agents. OK. Thank you.

OK, just one more. Did you work out some methods to do a data-plane backup? I mean the OpenStack configuration database, which you can back up with the VIO CLI, versus the vCenter-specific state, because they can get out of sync if you're restoring one and not the other.

Yeah. So at the moment, the backup of the VIO platform is dependent on restoring into the same vCenter environment, so it's obviously crucial that you've also got your backup of your vCenter environment. We use a couple of tools to do image-level backups of our VMs, and our vCenters are all virtual; all of our NSX control plane is virtual as well. So we can back up all of that, which means that if we had a full disaster, we could take bare metal, restore the vCenter environment, rebuild NSX and restore that, and then restore the VIO platform on top. So we've got a full end-to-end plan for that.

Which means you do snapshots of the vCenter appliance as well as the VMs for OpenStack, and back them up in a binary way?

So, snapshot-level backups of the main management components like the vCenter. But then the VIO CLI provides a backup, which it sends to NFS, and it takes a backup of all the key configs. We don't actually need to back up all the VIO management nodes themselves, because we can rebuild all of those. It's very simple for us to say: OK, this node's gone, just rebuild it with the VIO CLI. Because the management node is fully aware of what the configuration should be on each node, it's able to rebuild it. So it's just backing up those main config files, effectively the configuration that goes behind the Ansible playbooks, and the database. They're the two bits from VIO that we need to back up, and then it's NSX and vCenter backups beyond that. OK, thanks. No problem. OK. OK, thank you very much.