Hello, I'm going to speak a little bit about running a public cloud service on OpenStack.

Basically, we're doing a public OpenStack cloud, an infrastructure-as-a-service. We have multiple public cloud sites, eight of them in six different countries. We also do something called compliant cloud, which is for businesses with special cloud requirements, for example financial services or healthcare. We also run a number of private clouds that are set up pretty much the same way as the public clouds, but done a little bit differently because they are customer specific. Most of what we build is built on standard OpenStack services; we try to stay vendor independent and avoid specialization as far as possible.

A brief history: we started back in 2009 with a non-OpenStack-based KVM solution, and we found it to be lacking in multiple ways, both in features, since it didn't have all the features we wanted, and in development rate, which wasn't really very good. So we started looking at alternatives in 2012. Our first test installation was based on Havana and was done in 2013, and we launched the OpenStack-based service publicly in 2014, based on Icehouse. So we've been doing this for some time. We do regular updates, and we do release upgrades from time to time; our current situation is Mitaka, Newton or Ocata, depending on where in the upgrade cycle the specific site or private cloud is.

There are also multiple data centers, basically ten data centers with OpenStack installations. We have redundant 10-gigabit connections between sites, and we transport VLANs through an MPLS overlay on top of the core IP network.
We started out with one Keystone and multiple regions based on that Keystone. We're now working on separating and distributing Keystones, both to get better performance and better redundancy, and because there may be different compliance requirements depending on which country an installation is in. We have written some code of our own to be able to move virtual machine snapshots between regions. Unfortunately this code is pretty specialized, so I haven't been able to commit it upstream, but that's something we might do later on.

The hardware we use for this setup is mostly Dell when it comes to servers, some Dell Force10 network equipment for the internal networking in OpenStack, and Cisco equipment for core networking. We mostly use M1000e blade chassis and try to cram as much as possible into them, but we also have some non-blade servers for things like storage. All the hardware is deployed through MAAS nowadays; I'm going to get a little bit more into that later.

We have gone through a number of different platforms. We started out with CentOS, doing manual installations. That was pretty bad to maintain and also made new installations very hard, so we pretty quickly moved on to CentOS with our own deployment scripts, which still required a lot of manual work. Then we tried to standardize on Ansible. At that time OpenStack-Ansible didn't have really good support for CentOS; nowadays we have seen that it's much better, but back then we created our own Ansible playbooks, based pretty much on the scripts we had used for deployment previously. So we didn't do it in a very generic way, more in a way that worked for us, basically.

After that we looked at a number of different technologies to deploy OpenStack. We looked at Red Hat Director-based deployments. We looked at Red Hat with our own playbooks.
We looked at Ubuntu with Autopilot, and Ubuntu with Juju, but we ended up going with Canonical MAAS to deploy the hardware, Ubuntu as the operating system, and standard OpenStack-Ansible installations, as that gave us the flexibility we wanted in the best way we found.

When it comes to vendor support, we have found it quite hard to find support that is really useful for our use case. It has also been pretty time-consuming to get support for even the simplest question, for example asking when a specific bug that we know we're impacted by is going to be fixed. You have to provide all kinds of information, log files, every bit of information you can think of, instead of just getting a simple answer: okay, it's going to be fixed in this release. We have also sometimes found that vendors have requirements for providing support that have been difficult for us to adhere to, for example that things have to be set up in a very specific way, or that you can't do custom modifications, and so on. Usually we find fixes long before the vendor support does; by the time they have requested multiple pieces of information from us, somewhere in that analysis phase, we have already found a fix ourselves.

When it comes to storage, we have historically used Solaris ZFS and NFS-based storage a lot for our older cloud service, and some of our first OpenStack installations used that same storage, as we were already knowledgeable about it. Performance-wise this has been really good, but it has had some issues with scalability and redundancy that we really wanted to address. We tested a number of technologies during this time; we had some brief encounters with EqualLogic.
We also had some encounters with Gluster, and finally we ended up just doing plain Ceph, and we have never looked back.

When it came to networking, we found that to be one of the more complex challenges: lots of choices and lots of issues, especially in the beginning when it was quite new technology to us, both issues when doing upgrades and performance and stability issues. In this case as well we decided to go with the standard components. We had the choice of going with vendor solutions, but we tried to stay as vendor independent as possible, so we just went with Neutron and Open vSwitch, and that has actually worked pretty well. There have been huge improvements since Havana, which was where we started, so nowadays we find it to work really well and stably, and lots of work has been done on, for example, upgrades and avoiding data plane interruptions while doing upgrades.

When it comes to billing, we started out using Ceilometer. We had lots of issues there, both performance and reliability, and some customer unhappiness due to that, and we've heard that lots of other providers have had similar problems and experiences. Basically we ended up writing our own hacks to do telemetry and get out the data we wanted. But it looks more promising today, and we're looking into using the more standard ways of doing telemetry, as there have been quite some developments there since we did our own work.

When it comes to upgrades, we're currently doing release upgrades during service windows that allow us to have some downtime, but we're trying to get to the point of doing them without downtime. We have found, though, that most issues we encounter between those upgrades already have fixes committed in versions later than the one we run at the moment, so we cherry-pick smaller fixes in between release upgrades and get those in.
And we're moving towards a more continuous upgrade strategy to be able to meet these requirements better.

Just some quick points, as we're actually running out of time here. When it comes to test environments, they're needed for many purposes, and you're going to need lots of them, so it's important to be able to set up new ones easily and quickly, both to develop and test new features and test upgrades, and to work on troubleshooting issues and testing fixes for them.

When it comes to the customer interface, we expose the OpenStack API to customers. We also needed an easy-to-use control panel that supported some customer features we needed that are not in, for example, Horizon, and we needed to support some non-OpenStack services that we provide as well, so we ended up creating our own control panel for that. When it comes to the APIs and coding against them, we found that they sometimes lacked documentation, and the documentation was sometimes not up to date or simply wrong. But this has improved over time, and it's also the beauty of open source: you can always go back to the source and check how things actually work.

And when it comes to the last one, supporting this thing: we have a large variation in customer knowledge, so the first-line support basically takes the call or the ticket from the beginning, starts looking at the issue, and can help with billing issues, usage issues, and so on. Then we have the second line, which does more thorough troubleshooting and can also bring in the DevOps team for harder problems. The DevOps team, of course, handles basically everything that requires modifications of the OpenStack environment, for example code changes or configuration changes, and the harder troubleshooting.

And that's about it. Thank you very much.