OK, so welcome to one of the last talks for today. My name is Norbert Weiner, and I'm working at Open-Xchange as a senior platform architect. We run the software we develop ourselves on OpenStack.

First, a few words about what Open-Xchange actually is. Open-Xchange is a software company, and we started out providing software to hosting providers and telecommunication providers. That software is a webmail and collaboration platform. Some years ago, we identified the need to operate that software ourselves. So in 2013 we started looking for a provider that could give us infrastructure as a service to build on. We finally selected a company named Exion, which was running OpenStack based on Havana at the time, and at the beginning of 2013 we started building up our environment there. The numbers were quite low back then, only about 100,000 users, and we had a plan for running the service at that size.

But it kept growing. In 2015 we expanded our user base by 300,000 users, which meant we also needed to grow resource-wise. We also started looking for a provider in the US, because we had identified customers there who needed the service hosted in the US. That was also when we had our first real OpenStack lesson: we experienced the need to migrate to a new OpenStack cluster, because the Havana cluster we were running on could not be upgraded by the provider. So we had to shift our workload to another cluster.

By 2016 we had grown tremendously, adding another 2 million users on our European platform, and we started building out the US environment once we had found a new provider. We are cooperating there with Rackspace, which provides us an OpenStack platform as a private cloud in a dual-site configuration. We ran some significant tests and then, in 2017, started onboarding users in that environment as well, about 4 million users in the US. We also hit the next requirement of moving over to a new OpenStack cluster in the US, which is not finished yet, but that is what we are doing.

Now, in 2018, we have grown to 10 million users overall across our European and US environments, and in the US we are migrating into a new OpenStack environment for a specific reason: some customizations were done for us in that environment, especially in the integration with the Swift object storage we are using. Since we are migrating away from Swift, because it does not fulfil the performance needs we have, we decided not to port all those changes to a new OpenStack release, but rather to migrate to a new OpenStack control plane and get rid of the customizations. So far, across our US and European environments together, we have about 10,000 cores and 37 terabytes of RAM provisioned and in use, serving those 10 million users.

So what are the lessons we actually learned along the way, coming from a quite small system with only a few users and growing to 10 million users?
In regards to the network, for example, we learned quite early that a 10-gigabit connection at the hypervisor level does not mean you can actually use those 10 gigabits, because we ran into packet loss. With a small environment everything was fine, but once we had grown to a certain size, with a lot of packets coming in from users plus internal connections — we especially saw problems with DNS resolution — packets were being lost. We tracked it down to packets getting lost between the hypervisor and the VM: the hypervisor received them, but they were dropped somewhere on the way, so they never reached the VM.

Another problem we hit in our environments is the conntrack limits, simply because of the way our software works, and for a while we were missing proper monitoring for that. It led, for example, to intermittent connection problems: our internal monitoring could not reach the VMs even though they were still running and doing work. We were quite puzzled about what was happening, and it turned out to be the conntrack limit on the hypervisors, because we make use of security groups, which are part of the security infrastructure inside our environments. (The first sketch after this section shows the kind of check we were missing.)

We also learned something by accident. For our load balancer needs, for example, we provision ports with an allowed address pair of 0.0.0.0/0, just so those VMs can send IP packets with any source address to the backend services. You need to be very careful when such a port belongs to a security group and that security group is then used as a remote security group in another security group, because the VMs using that other security group can end up open to almost everyone — the 0.0.0.0/0 is what gets used as the filter. (There is a small sketch of this port setup after this section as well.)

Finally, on the network side, we are as of now still seeing problems when hypervisors are restarted: the security group rules may not be fully repopulated, so the network only partially works. Usually the VM's DHCP traffic gets through, so the IP address is configured properly, but other connection types are not allowed yet because the rules are still being set up. We are still trying to find out how to fix that.

On the hypervisors themselves we also ran into some interesting things, for example selecting the proper CPU frequency governor. Initially, in all environments, the CPUs were running with the default ondemand governor. Under a certain load, the VMs — and with them our whole system — started to behave quite oddly: connections seemed to take longer than usual, some connections seemed to drop, and other things like that. Once we set the CPU governor to performance in all our environments, all these problems were gone and the VMs behaved as we expected.

The second thing, for our kind of workload, is disk IO. We provide email as a service, which means our backend services hold cached data of the users' email while the email itself is stored on an object storage. That creates quite a high IO demand, and if you do not provision the local ephemeral disks properly, it has a severe performance impact.
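As a rough illustration of the hypervisor-level checks mentioned above, here is a minimal sketch of a health check that watches the conntrack table and the CPU governor. It assumes a Linux hypervisor with netfilter connection tracking in use and the cpufreq sysfs interface available; the 80% warning threshold is an illustrative choice, not a value from the talk.

```python
#!/usr/bin/env python3
"""Minimal hypervisor health check -- a sketch, not our production monitoring."""
from pathlib import Path

CONNTRACK_COUNT = Path("/proc/sys/net/netfilter/nf_conntrack_count")
CONNTRACK_MAX = Path("/proc/sys/net/netfilter/nf_conntrack_max")
CPU_ROOT = Path("/sys/devices/system/cpu")


def check_conntrack(threshold: float = 0.8) -> None:
    """Warn when the conntrack table approaches its limit; once it is full,
    new flows are dropped, which shows up as intermittent connection
    problems on VMs behind security groups."""
    count = int(CONNTRACK_COUNT.read_text())
    limit = int(CONNTRACK_MAX.read_text())
    usage = count / limit
    status = "WARN" if usage >= threshold else "ok"
    print(f"[{status}] conntrack entries: {count}/{limit} ({usage:.0%})")


def check_governors() -> None:
    """Report every CPU core whose frequency governor is not 'performance'."""
    for gov_file in sorted(CPU_ROOT.glob("cpu[0-9]*/cpufreq/scaling_governor")):
        governor = gov_file.read_text().strip()
        if governor != "performance":
            cpu = gov_file.parent.parent.name
            print(f"[WARN] {cpu}: governor is '{governor}'")


if __name__ == "__main__":
    check_conntrack()
    check_governors()
```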
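And here is a sketch of the load balancer port setup described above, using the openstacksdk client. The cloud name, network name, and security group name are placeholders, not names from our environments. The important design point is the caveat from the talk: a port carrying the 0.0.0.0/0 allowed address pair should live in a dedicated security group that is never referenced as a remote security group elsewhere, otherwise that 0.0.0.0/0 effectively opens the referencing group to everyone.

```python
"""Sketch: provisioning a load balancer port with an allowed address pair."""
import openstack

conn = openstack.connect(cloud="example-cloud")  # placeholder clouds.yaml entry

network = conn.network.find_network("lb-backend-net")          # placeholder network
lb_group = conn.network.find_security_group("lb-ports-only")   # dedicated group, never used as a remote group

port = conn.network.create_port(
    network_id=network.id,
    security_group_ids=[lb_group.id],
    # Lets the load balancer VM send traffic with any source IP to the
    # backend services, which is what our load balancer setup needs.
    allowed_address_pairs=[{"ip_address": "0.0.0.0/0"}],
)
print(f"created port {port.id}")
```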
So far, I think, those are lessons that apply to most virtualization environments. On the OpenStack side specifically, what we learned during our five years is, for example, about Keystone. Initially, in both environments, we used the Keystone service to authenticate towards the object storage, and with our use case we were easily able to break Keystone, for various reasons. In the US environment, where we use Swift, it was quite complicated to remove the Keystone binding from Swift, so Rackspace did some optimizations there so that the Keystone service can handle our load, and we did some optimizations on our side as well. In the EU, our provider was able to remove the Keystone authentication entirely; since we are using an S3-compatible object storage there, the authentication is now S3-based and we are not using Keystone anymore. (A minimal sketch of that direct authentication follows at the end.)

What we also learned — as you have seen on one of the initial slides — is that OpenStack upgrades so far have been very difficult. All the promises we get from our providers say that things will be better in the future. For us it is quite a challenge to move the workload over, because we provide the email and webmail service to other providers, which expect the system to always be up and running, and taking things down is a complicated task. So we have to find ways to move the workload over step by step and finally get rid of the old control plane.

One of the last things about OpenStack is that it allows you to build things in different ways. We work with two different companies, one providing the EU environment and one providing the US environment, and the two environments are built a bit differently. So we need to be careful when deploying things, because something that works in one environment might not work the same way in the other.

That is what we have learned so far. Overall, I would say it is good to have partners on board who are proficient in OpenStack and know what they are actually doing, so that we can focus on using OpenStack as an environment to build our software on top of. That was all. Thank you.
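For reference, here is the minimal sketch of the direct object storage authentication mentioned above. It assumes an S3-compatible endpoint; the endpoint URL, bucket, key layout, and credentials are placeholders. The point is simply the pattern of authenticating against the storage itself (access key and secret key) instead of routing every request through Keystone, which was the component we kept breaking under load.

```python
"""Sketch: talking to an S3-compatible object store directly, without Keystone."""
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.net",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",                   # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# Store one cached mail blob; the bucket and key layout are purely illustrative.
s3.put_object(Bucket="mail-cache", Key="user42/inbox/0001.eml", Body=b"...")
```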