[unintelligible introduction] But joining me on the flight deck today we have Manuel Liene from QSC, one of our customers, and Tora from SUSE, one of our systems engineers. And Manuel is going to be giving you a brief journey through QSC's trip onto OpenStack and Ceph. So without further ado, I will hand you over to our co-pilots and leave you to it. Thank you. Yeah, thank you Matthew, and welcome to Berlin. I'm new to OpenStack, and at QSC we are new to OpenStack as of this year, and I want to tell you about our journey: why we need an infrastructure-as-a-service platform, how we decided which platform to use, because there are many platforms on the market, why we chose OpenStack in the end, what our OpenStack design looks like in our data center, and what we are doing with the OpenStack installation in our data center. So, this is what happens when you start your application and there is no data from the server. I call it the app data gap, and it happens to many of our customers because they are not able to deploy as many servers as they need. The reason is that any application today, as you know from your smartphone, doesn't keep data in the application; all data is loaded over the network from some data center, from some server, from some storage system, and so on. And you can deploy a new application every day, but it's hard to deploy new servers every day with a traditional setup, with installation and cabling and so on. So, that's why we decided we need a new platform in our data centers. We have data centers all over Germany, and normally our customers use co-location services in those data centers.
They deploy their own infrastructure, their own hardware, in their racks and so on, and then they have the problem: what happens if I need new servers, if I need new servers quickly? So, we looked at the requirements of our customers, and first of all, they want a platform with zero downtime. That's the wish of all customers, I think. Everyone wants to run their services on an IT platform with no downtime. They want a platform that is scalable; they don't want to order new hardware boxes and wait many weeks or months to install them. They want to just click and have new resources, network or compute or something else. And a new requirement is that they need it geo-distributed. In the last presentation, from Red Hat, we heard about edge computing, and today I heard many presentations about edge computing. 5G, that's the reason why customers need applications deployed in many data centers around the country or around the world. Now, we are only a local, a German provider, so we can only deploy servers in Germany, but deploying the software or the infrastructure in different data centers is a hard requirement from customers at this time. Sure, they want to save costs. They want to get away from hardware investment, from capital investment entirely; they want to save on management costs and everything else. And the customers want a central management interface. When they have their own installations in different data centers, most users have different software or different platforms, with a management interface for each data center or each platform. And they are sick of that. So, they need everything in one platform, for all workloads in all data centers. They have servers, and they only want to pay for what they are using. Everybody knows that's the reason why the so-called hyperscalers like AWS or Microsoft Azure became so big: they were the first in the market with a pay-as-you-use model.
But they have many problems meeting the requirements of German or other local industries, which have some specialized requirements. Okay, the next thing is, it's not only customer requirements that go into our decisions. We also have our own requirements as a provider. So, first of all, we need a platform that is completely automated, because as you saw before, we need zero downtime for the customers. We want to save costs, and that's only possible if we have a platform that we don't need to touch by hand every day to do the daily work. We want to run scripts against it. We want to integrate the platform into our processes, into our billing systems and our support systems. So, we need complete automation. We need a service portal for our customers. We are a hosting provider, and it's important that we have a platform with multi-tenancy, because we want only one platform for all of our customers. Otherwise we are no better than the customers themselves, with many, many identical installations in one data center. And the most important thing for us as a service provider is the network capability of the infrastructure-as-a-service platform. The reason is that we have many customers with MPLS, with VPN networks, customers with their own data centers, and we want a platform where we can connect every data center in the world. It doesn't matter if it's our own data center in Hamburg or in Frankfurt, or a customer data center, or only an edge data center. We want to connect all of them together and be able to have network connectivity, or maybe also workload shifting. If the other side is also an OpenStack platform, for example, we can shift workloads to other clouds. The next point: we need hybrid capability in the platform, because this is a special thing in Germany. We have very strict laws, and for data protection it can be necessary for a customer to have dedicated hardware, separated hardware, for his own data.
So we need a platform where we are able to separate hardware for each customer, with their own hypervisors, but with the same service portal, the same management interface, for the customer. The next thing is, we have many co-location customers with legacy hardware. You can imagine, when you bought new hardware for a million dollars or so just two months ago, you don't want to scrap it and throw it away. You want to keep using it. So we need a way to connect that hardware to a new, modern infrastructure, and that's the hybrid thing: we can take dedicated hardware and connect it to the new platform. The same goes for specialized hardware. I heard many presentations about special GPUs and the like; I'm not sure what the next thing is next week, maybe other special processors, and we want to be able to connect them in a hybrid setup, in a dedicated rack, to the cloud. So we need different regions. I mentioned before, we have data centers around Germany and we want one global platform over all of them. We are a business provider. We have a big management team in our company, but for the main platforms we need to be certified in every way, and we need to have support for our platforms. So that was a very big point for us: to have an infrastructure platform where we can get support at any time we need it. So we looked at the market, and there are many players with infrastructure-as-a-service platforms. There is CloudStack, if anybody knows it; we also have a CloudStack installation in our data center. It's quite good, but the community is really dead. It's a very small community, not very active, and I'm not sure if there is any big vendor who offers commercial support for this platform. Then for sure we looked at VMware. We also have a very big VMware installation for special workloads; we do big SAP hosting, and we did that with VMware. It's running well.
It's no problem, but we have cost problems with VMware in a big infrastructure environment because of the license fees. And we have problems with the APIs. Sure, VMware has a very rich API, but it's not open. You have to use what you get from VMware, and if they don't want you to do something this way or that way, they don't allow it and you have no chance. Then there is OpenNebula. I'm not sure what happened with it; I think it was too small and had too few features. There are also other commercial vendors like Nutanix. I think it's more of a hyperconverged infrastructure, but we looked at it. It was very interesting, but it was more expensive than VMware. OnApp is also a very cool solution, but the network stack was not as rich as we needed. So what are we using? We are using OpenStack. That's the reason why I'm here. OpenStack gives us all the requirements we need, all the requirements I mentioned from our customers and from ourselves. I am very happy with this, because I personally very much like the open source community, and I think this is why we need to use it: the open source community gives us the flexibility to use the product as we need it, as our customers need it. But the big problem was, as I mentioned, we have a big management team but very little experience with OpenStack. I think most of you know it: OpenStack is not just plug in and boot up. It's a very complex installation, a very complex platform. You can do everything you want with it, but not in one day, and not with the vanilla sources. So that's the reason why we looked at the market. There are many companies and vendors on the market who offer commercial support or complete setups, their own software products: Red Hat, who is also here, Canonical, Mirantis and so on, who I think runs the biggest German public cloud with the Open Telekom Cloud.
But we decided on SUSE, because for the platform we wanted, with all the projects we need, like the web interface and the complete technical base layout, SUSE was the only one on that list who could support all the projects we needed. And the next thing is, SUSE is also located in Nuremberg, like our head office, like our main data center, and we had very good connections to SUSE. And that's the reason why I can now welcome Tora. So, hello from my side. I will do just a short break, as usual on a plane: the commercial. But rest assured, I have no gifts or anything that I want to sell. I want to explain a little bit about what SUSE is and what we have done with QSC. When I'm not wearing these clothes, I'm working as a sales or systems engineer, and I worked together with the QSC guys to get the OpenStack part up and running. So let's have a look at this OpenStack cloud product. What are its benefits? We already heard the requirements and the decisions about why you selected us; here are some more points we see as our benefits. So yes, OpenStack is complex. What we built around it is a provisioning and management tool set. We try to integrate whatever you need for running OpenStack. If you saw our messaging, we are often called the "open open source company". This is not a copy-and-paste error. We just want to say: use what you need to achieve your requirements. If it's not from SUSE, that's okay; if there is an open standard that connects to it, we are fine with it. So we want to be as interoperable as possible. And yeah, one of the nice things with QSC: we live in more or less the same town, so you have a direct connection. SUSE has been based in Nuremberg for 25 years and has a lot of experience in supporting open source software in the enterprise market. And this is something you can use, especially in the German area, because the people mostly speak the same language on the front line. And are in the same time zone.
So it's a direct connection. The mission we see behind the OpenStack part: if you go with vanilla, it's much the same as if you want to deploy Linux. Nobody in the market at the moment will compile and create their own kernel; you use a distribution. Why is that, and why are we talking about this with OpenStack? Everyone who has tried it knows, and it's confirmed by a lot of research: it's complex. Complex to configure, complex to deploy and to upgrade. So the operation itself is complex. And as people, we are quite used to complex technologies, like a car. We are all able to drive a car, and most of us are able to order a car. But you don't go to a car manufacturer and say, okay, I want these seats, and I don't need a steering wheel, I have one already, please use mine. You don't select every single part for your own car; you look at the configurator and say, okay, this is what the car vendor has already prepared for us. I select the color and say, this is the engine. And after six, nine or however many months, depending on the engine you have chosen, it will be delivered and you get a nice car, ready to use. So it's packaged, it's optimized, and you have a service partner: if there is any problem with the car, you can just go to him and he will fix it. And this is the same thing we want to do with OpenStack. We package it, we distribute it. We make sure that the quality, reliability and performance fit your requirements, and that there is an update and upgrade path. This is also very important: as you know, OpenStack moves very fast with new versions, and you need an idea of how to move in production from one release to the next. And this is what we call enterprise support. Let's have a look in a little more detail at the cloud architecture. What you see in the orange part, those are OpenStack releases. Oh, I see there is a copy-and-paste error; it's not Newton at the moment.
We are talking about Cloud 8, which is the Pike release. But these are the normal vanilla OpenStack packages; there is nothing from us added or removed. This is just open source OpenStack. We added everything which is needed to run it on SUSE Linux Enterprise Server, for sure; this is the base operating system. And you can use every physical infrastructure which is supported for SUSE Linux Enterprise Server, so you have a continuous support stack from the physical infrastructure up to the APIs. These boxes here are shifted a little bit; that was intended, because what we create are just OpenStack APIs. And here it is open to you whatever you want to use as a billing engine or a management engine: if it follows the OpenStack APIs, and they are open, that's why it's called OpenStack, you can use it. So you have the freedom to choose whatever you want as a management system. This is very important. Besides this, we have everything a running cloud needs as basic services, like Pacemaker to make the control services highly available; the compute nodes you can also make highly available with Pacemaker. And we have a lifecycle management tool, so we take care of how to install all these packages on the different nodes, more or less like using a cloud itself. If you need new hardware, you just get the server into the rack; it has to be connected to power and to the network. It will power up, it will be recognized by the lifecycle manager, and you say, okay, give it this role, it should be a new compute node, a storage node or a network node, and it will be installed automatically. You don't have to touch any console if you don't want to. So this is the basic idea of our OpenStack distribution. Now we are getting a bit out of the commercial part. What are the design criteria for the architecture we built for QSC? First of all, we heard zero downtime.
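Because everything goes through the open OpenStack APIs, any billing or management engine can consume the same data. As a minimal sketch of that idea (the flavor names, the rate card and the cloud name "qsc" are all invented for illustration, not QSC's real configuration), a metering script might aggregate per-project usage like this:

```python
from collections import defaultdict

# Hypothetical rate card, in euro cents per hour; a real billing engine
# would load its rates from its own configuration.
RATES_CENTS_PER_HOUR = {"m1.small": 2, "m1.large": 8}

def bill(usage_records):
    """Aggregate (project, flavor, hours) records into per-project totals in cents."""
    totals = defaultdict(int)
    for project, flavor, hours in usage_records:
        totals[project] += RATES_CENTS_PER_HOUR[flavor] * hours
    return dict(totals)

# Against a live cloud, the usage records could be pulled through the
# open APIs, e.g. with openstacksdk (cloud name is an assumption):
#   conn = openstack.connect(cloud="qsc")
#   for server in conn.compute.servers(all_projects=True): ...

if __name__ == "__main__":
    records = [
        ("tenant-a", "m1.small", 100),
        ("tenant-a", "m1.large", 10),
        ("tenant-b", "m1.small", 50),
    ]
    print(bill(records))  # {'tenant-a': 280, 'tenant-b': 100}
```

The point is only that the API surface is standard: the same records could just as well feed a support system or a capacity planner, which is the freedom of choice described above.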
Zero downtime in OpenStack is a little bit different. Our focus is to have a highly available control plane, so we should always be able to communicate with OpenStack and start new instances. This is one design goal. On the storage side, we want to get rid of any legacy shared storage device; we want to use just local disks. But we want to have storage classes, so we have SSDs and spinners. And this led to the decision to use Ceph as the main and only data backend. It is used for the users, as the Cinder backend, but also for the infrastructure itself. If you know a little bit about Pacemaker, you need a STONITH device, and so we also use Ceph as the STONITH device. Then we wanted a dedicated network setup, so we separated all the networks: we have a dedicated management network, a dedicated storage network and a dedicated OpenStack network. On the other side, we created some central system management services which can be used for the OpenStack part and for the Ceph part. We also looked at monitoring. When we started discussing it with QSC, Monasca was already available, but quite heavy. So we decided, okay, as a first step let's try to integrate this OpenStack into the already existing monitoring. During the journey we saw it was not really the best decision, so this is something that will be redesigned; at the moment this is a bit of white space. And we saw that we should stay open to all the new ideas about networking. We already heard there are a lot of regions which have to be connected, and this also has to be bound to the OpenStack installation. So, a really high-level design of what we created here: we have an HA OpenStack control cluster, in reality made up of three nodes in the beginning. We have separate KVM nodes. We have an installation management server, an admin node, which controls everything. Then we have a dedicated Ceph cluster, SUSE Enterprise Storage; SES is software-defined storage.
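The storage-class idea can be pictured as each Cinder volume type pointing at a different Ceph pool, one backed by SSDs and one by spinners. This is a hedged sketch only: the type, backend and pool names below are assumptions for illustration, not the actual QSC setup.

```python
# Illustrative mapping from Cinder volume types to Ceph pools.
# All names here are assumptions for this sketch.
VOLUME_TYPES = {
    "fast":     {"volume_backend_name": "ceph-ssd", "pool": "volumes-ssd"},
    "capacity": {"volume_backend_name": "ceph-hdd", "pool": "volumes-hdd"},
}

def pool_for(volume_type):
    """Return the Ceph pool a volume of the given type would land in."""
    try:
        return VOLUME_TYPES[volume_type]["pool"]
    except KeyError:
        raise ValueError(f"unknown volume type: {volume_type}") from None

if __name__ == "__main__":
    print(pool_for("fast"))      # volumes-ssd
    print(pool_for("capacity"))  # volumes-hdd
```

In a real deployment the same routing lives in configuration rather than code: each backend section in cinder.conf sets `rbd_pool` and `volume_backend_name`, and the volume type carries a matching `volume_backend_name` property, so users simply pick "fast" or "capacity" when creating a volume.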
There are a lot of S's you have to say there. And with this we created all these networks separately. So this is really just a brief overview of the architecture; if you want a deep dive, I believe we can discuss it in a smaller group after the session. So this is how we at SUSE helped QSC to deliver and fulfil the requirements; we created this as a base, and now it's growing. And maybe I can hand back over to hear a little bit about the future and the use cases. Yes, so thank you from my side. Thank you, Tora. Yes, Tora already said that we had some problems with our old monitoring systems. We were using Checkmk and Nagios; they are very good systems and they do a very good job for our co-location customers and so on, but they were not flexible enough for the new OpenStack platform. So we set up a new monitoring environment with the components you see here. I'm not a technical guy, so I can't explain them to you, but as Tora mentioned, we also have some technical guys here at the summit, and I'm sure they can explain it to you. The last thing: we talked about the software, the infrastructure. But the software needs hardware to run on. And yes, it's open software and you can run it on any x86 hardware, I think; that's a benefit of OpenStack. But we decided to run it on enterprise hardware. The reason is we want a setup with complete certification with SUSE, because, cool feature, HP is also a partner of SUSE. So with this hardware, maybe I put some more coins in the machines; I'm not sure what happens. Okay, I have four minutes to go. So that's the reason why we decided on the HP hardware. The next point: I promised to tell you what we are doing on this platform, what we are using it for. On the left side we have the co-location business, with our co-location customers in the data centers, and on the right side here is the new virtual data center with OpenStack, on the SUSE OpenStack platform.
And we put them together in our base product, so you can use co-location and the virtual data center together, with network connectivity. I won't read out all the management and service products; we offer a wide range of managed services, like operating system management or firewalls or domain services, on these platforms for our customers. The next thing we are doing right now is taking a closer look at the networking part of OpenStack, because we want a completely flexible software-defined network, or software-defined wide area network. We have cloud connects in our network to the hyperscalers, for example, and we want to be able to connect any virtual data center in our OpenStack environment with one click to Azure or AWS or any customer data center and so on, completely software-defined. For next year, I hope we are here; no, the next year will not be in Berlin, and I'm not sure where the next summit is. But at the next summit we want to talk about this, because the logical next step is containers on OpenStack. Some of you may be wondering about this logo: this is hardware, this is a firewall vendor. We are planning to offer firewall services with Fortinet in the OpenStack environment. I think it's a very cool thing: you can have firewall features from hardware firewalls in a virtual data center. Here are some partners of QSC; you also see SUSE here, and HP, and there are many other vendors. My last slide shows some reference customers. Not all of these customers are using the OpenStack infrastructure, but these customers have their own infrastructure in our data centers and want to use OpenStack for testing or production use. So, I'm out of time. Thank you, and I wish you a great summit for the next days.