Okay, so they ran out of music, so I think I can start to talk. It's amazing to see this many people here. I was a little bit afraid because of the size of the room, but it's great to have you here. My name is Andreas Puschel. I'm with BMW, I'm responsible for Linux architecture and solutions and also for infrastructure as a service, and I'm the project lead for OpenStack at BMW. I would like to take you with me on a journey through our way from the traditional, classic IT to OpenStack, and I would like to share some of the puddles of mud we saw; some of them we jumped into, some of them we avoided. I hope you'll have some ideas afterwards that you can take back home and use. So, we've got an enterprise. I think everybody knows what BMW is doing. That's the good thing about BMW: I don't have to explain it. But what about the IT at BMW? Everybody knows the cars, but we also have an IT department, and it's not that small. It's about 3,800 employees in IT, and we are distributed all over the world across 50 locations in 26 countries. But we have a central IT from an organizational perspective. It means people are located somewhere, but they are all working together; they are all reporting to the same organization. We support the employees, the people, the plants. We've got 30 production sites in 14 different countries. We've got the sales and finance departments, and we also have research and development in five different countries. Last but not least, we've got more than 116,000 people working for BMW, and we are supporting all those people and all those assets. We've got one huge data center in Munich, which is our enterprise data center, and we've got three regional ones: in the States, in South Africa, and in Oxford. Some years ago we called them data centers; now they are just disaster-safe server rooms. They didn't change in size, it was just a naming convention.
At every plant we've got two data centers to support the local production there, and we also have external hosting in Iceland for our HPC servers, for the number crunchers. About 12,500 OS instances are operated by us, so not counting the clients, just the server instances, Windows and Linux. And we have been virtualizing for years: Windows since 2004, Linux since 2007, I think. We are virtualizing Windows on VMware and Linux on Xen, the open-source Xen which comes with SLES, because we love to use best of breed. At that size you don't necessarily need to have just one vendor, because you're operating it differently anyway. Linux guys will be Linux guys and Windows guys are Windows guys; you won't mix them. We've got some storage around: NAS, Fibre Channel based SAN, backup, archive. Lots of SAP instances, lots of databases, and we still have the mainframe, like probably a lot of enterprises. You can't get rid of it; it's just there. We evolved. We had some talks at the OpenStack Summit, and I always say IT is like a kitchen. In the past you had a wood-fired stove where you made great pork with a nice crust, but it was hard to fire up; it took some time. So over time we evolved. We introduced the gas-fired stove, the electrical stove. We got some new tooling, and it was more convenient, it was faster. We always improved, but we're still making pork with a crust. There's nothing bad about that. But sometimes you would like to have, for example, a plate arranged completely and just heated up. This is where the microwave comes into play. So this is the second curve. You've got the evolutionary part, the stuff you already have, and you've got the new part, which is not just faster or more convenient; it's something completely new, and you're doing things differently there. And for some workloads you will need this revolutionary part. So to us there are always these two parts, and they will run in parallel, because even with a microwave, sometimes I would like to have pork with a crust, and I don't put that in the microwave.
So how did we start with that microwave part of IT? We started a project four years ago, and first we tried to put everything we knew into automation. We tried to automate processes, we tried to automate the full integration of all the stuff, a self-service portal. That sounded great. But to be honest, it was just the gas stove or the electrical stove: a fancy thing with a lot of bells and whistles, but still the same stove. So we changed our minds; it didn't work out. We had tried to create the full environment there. We created databases, VMs, web servers, load balancers, documented all of that in the CMDB and put it into this and that back-end process system. But it was really hard to maintain and to operate. It wasn't possible at all. So we made a cut and said: now it's time for something different. Let's make a pure infrastructure-as-a-service offering. This is where OpenStack came into play. Just pure infrastructure as a service. We don't build any PaaS services on top of it, we don't try to integrate everything into this OpenStack. No: OpenStack just provides VMs, storage, network, period. With the first project we had also created something like APIs. It was fun, as it usually is when you do something new, but it was BMW-specific. We had to create it, we had to maintain it, and none of the systems out there knew these BMW APIs. That's bad, because you would like to talk to them. So we would have been in the position of having to create all the other systems around it as well. This is where OpenStack gives us a great opportunity. It's a standard API. It's an industry standard. Everybody knows it. All the automation systems out there talk OpenStack APIs. And we learned: don't touch this stuff. Keep it as is. We don't want to have a BMW OpenStack, even if we could build one, maybe. It doesn't make sense. Use it as is, and if you need a change, don't change it on your own; get someone to change it upstream. We are not an IT company.
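Since this is a talk, there is no code in the original; but as a hedged illustration of what "industry standard API" means in practice, this is the request body any OpenStack cloud's Compute API (Nova) accepts for creating a server. The IDs below are placeholders for illustration, not anything from BMW's actual environment.

```python
def server_create_body(name: str, image_id: str, flavor_id: str,
                       network_id: str) -> dict:
    """Build the JSON body for the standard Nova "create server" call
    (POST /v2.1/servers). Every OpenStack cloud, and every automation
    tool that speaks the OpenStack API, understands this same shape;
    nothing in it is vendor-specific."""
    return {
        "server": {
            "name": name,
            "imageRef": image_id,    # Glance image UUID
            "flavorRef": flavor_id,  # flavor UUID or name
            "networks": [{"uuid": network_id}],  # Neutron network UUID
        }
    }

# Placeholder IDs, purely for illustration:
body = server_create_body("ci-runner-01", "img-uuid", "flv-uuid", "net-uuid")
```

That one request shape being identical everywhere is exactly why home-grown BMW APIs were a dead end: every surrounding system would have needed custom glue.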
So we don't want to do it on our own. This is where we talk to our distributor and tell them: this functionality may be great, not just for us but also for other people, so let's change it upstream. And we don't want to use OpenStack for everything. I see a lot of companies out there which think OpenStack is great, that you have to be cloud to be good. The ten cloud myths from Gartner? No. Just use OpenStack where it's appropriate, and just offer OpenStack to those people who can deal with it. From a people perspective, they need some skill; and from a workload perspective, they need to know how to operate that stuff and which workloads to put on OpenStack. You cannot move your old-style database over to OpenStack and hope it gets better. No. Just move those workloads where it fits. Always remember: stove, microwave, pork, plate. What are we using as a base product? We could use upstream. In general, yes, it would be nice, but we are a quite small team working on OpenStack. We don't have the resources to test upstream packages, to integrate them, to decide which ones to use and which ones to update; and if we come to an update, how would we do that? So we've chosen a distributor. At the moment we are using SUSE, SUSE Cloud 5, which is Juno-based, and they take care of the testing, QA, and support. Yes, we don't get all the possibilities of OpenStack, but do we really need them? If yes, we tell the distributor, because usually it's possible to do; they just have to decide which functionality they can support. They also use the upstream packages in general, and if they build something new, they give it back upstream. So to us, it's the best way to leverage OpenStack. And something which is very hard for us, and where we also expect some issues, is upgrades. At the moment we are on Juno. Kilo is around, Liberty is around. How could we do that? We don't want to upgrade every six months. We are an enterprise. It's just hard to learn that.
Because internally you need some approvers; you have to be sure that everything works together. So on the one hand, we would like to do releases very often, because we would like to have the newest features, the new possibilities, the new improvements. On the other hand, could it be once a year, maybe? This is where we have to find the right balance between upgrading and keeping the old state. And if we don't have a real need to upgrade, we probably won't do it. But we cannot wait too long, because otherwise it will be really hard to do the next step. When we started with OpenStack, compared to our first cloud project where, as I said, we tried to integrate everything: we had tried to integrate the technology, the processes, and the usage and operation models. So we've got these three layers; in general it's technology, processes, and usage. And when we came to integrate OpenStack, we would have done it differently in the beginning. But now we have learned it makes sense to integrate a little bit where it's necessary, and to avoid the integration where it's of no use or where it's even counterproductive. So if we don't really need a specific integration level, we avoid it for now. Maybe we can integrate it later on, but let's start with a minimal set of integration. This gives you the possibility to start. If we waited until we had answered all the questions, it would take two years until we started, and all the projects out there in need of something like OpenStack wouldn't have the possibility to use it. On the technology side, we knew it's not possible to keep everything as is, but on the other hand, it's not possible to start from scratch either. We've got data centers, and we need some racks in those data centers that we can fill with the new servers. So we're using the same network, the same racks, the same cabling, and we are using some existing services.
We put the servers into the data center and then we tried to start the installation. We talked to our network guys, and they said: what are you doing with your east-west traffic on Ceph? That might be hard, because our topology is not suitable for that. What to do? We need a high-speed, low-latency network there. So they had to find a way to create a network specifically for us, which is not that easy, and we still have a long way to go on that network integration. We said we are doing DHCP on our own. The network guys didn't find that very funny, because they also have a DHCP server running. So you have to find out where you can use the old services and where you need to separate. And sometimes you don't think of these services until you really start and see that you fail. DNS. OpenStack takes care of names. Hey, great. But when I would like to connect to an instance from my client, this client doesn't know that name, because we've got an enterprise-wide DNS. How to deal with that? We could integrate OpenStack into the DNS, have it in a special zone, but that only works if you have a class C network; sometimes we've got a slash 25. So we had to find a different possibility. Our solution: we always have to tell the networking guys that we need a new subnet. They give us a subnet, and they have to reserve it in the CMDB so nobody else is using it. We are not doing that very often, so we can also pre-allocate the addresses: as soon as we get a new network, we just pre-allocate the IP addresses in that network, with names like CVM plus a number, for CloudVM. In the past we had something like Linux or Windows in the names, but here we don't know that upfront. So we are just pre-allocating that stuff. It works for us; maybe it works for somebody else too. But how to integrate the DNS of OpenStack into the enterprise-wide DNS is still an open issue, and having OpenStack as the master is not suitable in an enterprise environment. User and identity management.
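The pre-allocation scheme described above can be sketched in a few lines: given a freshly reserved subnet, generate one generic CloudVM-style name per usable address and register all of them in the enterprise DNS up front, so OpenStack can hand out any address without producing a name the corporate resolvers don't know. The subnet and the exact naming pattern here are invented for illustration.

```python
import ipaddress

def preallocate_names(subnet: str, prefix: str = "cvm") -> dict[str, str]:
    """Map one DNS name to every usable host address in a subnet.

    These records would be registered in the enterprise DNS (and the
    subnet reserved in the CMDB) before OpenStack allocates anything,
    so every instance comes up with a name the clients can resolve.
    """
    net = ipaddress.ip_network(subnet)
    return {f"{prefix}-{i:04d}": str(host)
            for i, host in enumerate(net.hosts(), start=1)}

records = preallocate_names("10.20.30.0/25")  # a /25 has 126 usable hosts
```

Because the names carry no OS or application meaning, the same pool serves Linux and Windows instances alike, which is exactly why the old Linux/Windows naming convention had to go.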
Yes, let's create users, and they get new passwords? That doesn't make sense. We've got an Active Directory; we would like to integrate that. What about the different roles? We have the Active Directory as a front end in general, but we have some identity management systems in the back. So when new employees come in, they get specific roles; when they change departments, everything is done automatically. And this system also knows the roles and the rights they have. We cannot integrate that into OpenStack at the moment. We're just using authentication via Active Directory, but authorization is handled within OpenStack for now. We would like to change that, but for that we also need some role-based access control, because we've got different roles, and whether somebody is allowed to do something is not just defined within Active Directory. We've got our ITSM suite where all the incident and change processes live. There are roles like change coordinator and change manager, and they've got different rights. Who is allowed to increase quota? Who is allowed to delete VMs? Starting and stopping is something for operations. Deleting: all of the operations guys, or just some of them? Increasing quota: who will do that? So we have to find a way to integrate all these different components with OpenStack. Again, not now. Let's start with the really important stuff, keep it on the list, and do it later. And then we have the CMDB. Everything at BMW runs via the CMDB, the configuration management database. It's the central component of IT; everything is documented there. If we create a new server, we first document it in the CMDB: this size, this type, SAP server for that plant. Then everything within the automation layer of the classic IT is pulled from the CMDB and created appropriately. We couldn't do that for the instances, because documentation takes so long that the instance would not be there anymore by the time you finished documenting it.
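What the talk describes, authentication against Active Directory with authorization kept inside OpenStack, maps to Keystone's per-domain LDAP identity backend with role assignments left in the default SQL backend. A hedged sketch of what such a domain config could look like; the file path, server name, and DNs are invented for illustration:

```ini
# /etc/keystone/domains/keystone.corp.conf  (path and names illustrative)
[identity]
driver = ldap

[ldap]
url                 = ldaps://ad.example.corp
user_tree_dn        = OU=Users,DC=example,DC=corp
user_objectclass    = person
user_id_attribute   = sAMAccountName
user_name_attribute = sAMAccountName
# Read-only: users and passwords stay in AD ...
user_allow_create = false
user_allow_update = false
user_allow_delete = false
# ... while role assignments (authorization) remain in Keystone's
# SQL assignment backend, as described above.
```

This covers the "authenticate via AD" half; wiring the back-end identity management and ITSM roles into Keystone's RBAC is the part that, per the talk, stays on the list for later.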
Or there are other systems behind the CMDB which replicate every six hours. That doesn't work out at all. On the other hand, we need the CMDB at least for the assets, because if I don't have an entry there, the technician doesn't know where to go to find that server. And if there's something wrong with a server, I would like to have somebody who can find it and fix it. So we have to find the right measure for integrating that, for how to document the stuff. Processes. We're really good at processes; we're Germans. People don't like them, usually. Nobody likes processes, but on the other hand they are necessary and they help you to improve things. They make you fast, cheap, stable. But do you know whether all of your processes are still necessary? I love the story of the ten monkeys, the experiment. I don't know if everybody knows it. They had ten monkeys in a zoo, with a ladder in the middle and a banana on top. And whenever one of the monkeys tried to get the banana, the keepers sprayed all of them with cold water. So they very quickly learned: don't take the banana. Then they took one monkey out and put a new one in. The new monkey saw the banana and tried to climb up the ladder, but the other monkeys pulled him back and beat him up. He didn't know why, but he learned: don't take the banana. And so they exchanged monkey by monkey, and at the end they had ten monkeys, all of whom knew not to take the banana, but none of whom knew why. And this is what we sometimes have with our processes. They are from a time when we needed them, but then the organization changed, the technology changed, the workloads changed, and we still have the processes. Everybody knows how to follow them, but do we really need them nowadays, with a new workload, a new environment, a new organization? So we have to challenge all these processes, whether they are really necessary. We also have processes which don't fit in this OpenStack world, like life-cycle management of servers.
We know everything about our classic servers: where they are, for how long, who and what. That doesn't work in OpenStack, because the poor guy who would need to do the life-cycle management for the instances would go crazy. Accounting: in the past, workloads changed quite slowly. You got a new server, you had it up and running for seven years, and then you decommissioned it. It was sufficient to measure once a month, because the server didn't vanish. In OpenStack it's different. If people knew when we were going to measure, they would switch off all their instances, and it would be free. A good thing for them, but it doesn't make sense. On the other hand, we have to find new possibilities. People always talk about pay-per-use. Great idea: the more they use, the more they pay. But sometimes, for example with CI environments, I don't want someone to pay more just because he tested more. He should test, and if I charge him because he used a lot of CPU cycles, that's the wrong incentive. Also, in the new world we have to check whether we are on the right track, and whether we are creating processes pointing in the wrong direction again. This accounting is still something we don't fully know. We've got a model now based on reserved vCPU hours, measured every five minutes but charged, or at least shown, once a month. We sum it up, and only if you reserve vCPUs do you pay for them. And we've got three different models: on-demand, reserved, and dedicated. Some of you might know that kind of model; others are doing it the same way, so we're trying it too. On-demand means you always pay for what you're using. If you have two days a month where you need your vCPUs, you just pay for those two days. Okay, perfect. Others have a base workload and move up and down a little; they can use the reserved model. It's like a mobile phone plan where you get 300 minutes included or something like that, and then you pay on top. Or the dedicated model, where you pay for a quota.
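The model described, reserved vCPUs sampled every five minutes and summed into a monthly bill, is simple enough to sketch. Everything here (the sample list, the rate) is an illustrative assumption, not BMW's actual tariff.

```python
SAMPLE_MINUTES = 5  # one measurement every five minutes

def monthly_vcpu_hours(samples: list[int]) -> float:
    """Turn a month of 5-minute samples of *reserved* vCPU counts into
    billable vCPU-hours. Switching instances off between samples does
    not help: the reservation, not momentary usage, is what counts."""
    return sum(samples) * SAMPLE_MINUTES / 60

def monthly_charge(samples: list[int], rate_per_vcpu_hour: float) -> float:
    """Sum it up once a month, as described in the talk."""
    return monthly_vcpu_hours(samples) * rate_per_vcpu_hour

# Example: 4 vCPUs reserved around the clock in a 30-day month.
samples = [4] * (30 * 24 * 60 // SAMPLE_MINUTES)  # 8640 samples
hours = monthly_vcpu_hours(samples)               # 2880.0 vCPU-hours
```

Sampling the reservation rather than the instantaneous load is what closes the once-a-month loophole: an instance powered off on measurement day still holds its reserved vCPUs.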
It's cheaper per vCPU hour, but you can do whatever you like with it. And we have to learn with our customers. For example, the CI part. The CI guys told us: hey, you don't want to charge us by CPU cycle, because they are reselling their CI to their internal customers, and they said: we cannot let them pay more just because they did something. So you have to learn with your customers; it depends on your workloads. Where it is quite crucial to have a close look is where this new environment touches the old world, like asset management. You need an integration point, and you won't get rid of those processes over there. You also need incident management; you need a way to do that. On the other hand, in the past it was okay if some processes were done by hand. Once in seven years, so what? Once every five minutes, it doesn't work. What about creating certificates? What about revoking certificates? Something we haven't solved up to now. Then we've got the usage and operation model. Do your customers really know what they get? And do the managers know what you are going to provide? Because they think: you need cloud, and then we put everything into the cloud. No, I would like to have my pork with a crust. Only the workload that is suitable to run there, that benefits from that new environment, should go to the cloud. You don't have to move everything over. If you've got a very efficient database running in classic IT, leave it there. The main metric for a database in this classic world is stability, and I would like to keep it. Agility is not an issue there. So OpenStack is not a one-to-one replacement for the traditional IT, even if some people in-house think: hey, that's great, I don't need the whole process with financial approval and planning and my applications and the blueprint and stuff, I can go to OpenStack and get everything fast and easy. No.
At the moment our onboarding process is: send us a mail, tell us what you're going to do, and then we have a talk. And if we think you are the right person to go onto OpenStack, you get a project; otherwise, no. And we have quite a lot of people who just forgot to order a server, or who are just doing something there because it was easy, and we have to send them back, because they would be unhappy and we would be unhappy, so nobody wins anything. And if someone goes to the cloud, you always have to answer the question: why? It's not about showing: hey, we are 60% in the cloud. I would like to show that we are doing good business, that we've got a business benefit. So the workloads that go there have to follow the typical cloud fundamentals: scale out, deal with failure, expect failure. In the early days we had an issue where we misconfigured something and a project was gone. Oops. In the past we would have had escalations, and we would have lost millions and whatever. But here: okay, let's recreate it. We still have the data. Wow, cool. That was a completely new experience for us, because now our internal customers don't think they just get the service and have nothing to do with it; they cooperate. We've got a community. Just last week we had our first internal BMW cloud summit. We brought all the people together and let them see what the others are doing, because what we saw within the OpenStack environment is that there are a lot of instances in different projects which have the same name, and it's not "test", it's more like "elk" or something like that. So people are probably doing the same thing; let's bring them together. It's not the same process as before, where one central operations team creates a solution and everybody has to use it whether they want it or not. No, here people create what they need, but you have to find a way, also a kind of process, to bring the people together.
And the people who are working in those areas love that model of cooperation. It's really great; it's a lot of fun. Within their OpenStack environments, people have a lot of new possibilities. They can upload their own images. We thought most people would bring their own images. Nevertheless, we provided some: let's provide SLES, let's provide Ubuntu, let's provide Windows. All of the projects started with our images. Great to see; our assumption was wrong. We started with a 20% solution, because we just wanted to do something. We wanted the feedback of the customers, to see what's going on and what the necessary features are. We thought CI environments would be the thing to go for. A lot of CI environments out there need direct access to the network; they need an IP address which is reachable from the corporate network and easy to configure. So we created flat networks: a provider network, Linux bridges. It was great in the beginning. No waste of IP addresses, and everybody knew how it works. And then the new workloads showed up. Analytics, big data: hey, we need private networks, and hey, we need this. Damn, wrong way. In the past we would have waited one additional year to create a new solution, but here, together with the customers, we are moving on. Yes, we will probably need a downtime to reconfigure the network to use Open vSwitch or any other overlay technology, open source, proprietary, whatever. Let's use what's best for our business. But in general, it's very important that the workloads know what they are doing so they can really benefit from the cloud. You can put a lot of workloads into the cloud and they will run, but that's not sufficient to really get a benefit. Cost is not the big benefit of cloud if you have been good enough before.
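The flat provider-network setup mentioned above could be captured declaratively, for instance as a Heat template fragment. This is a sketch under assumptions: the physical network label and resource names are invented, and creating provider networks requires admin rights.

```yaml
# Hypothetical Heat fragment: a flat provider network reachable from the
# corporate network, as used for the CI workloads (names illustrative).
resources:
  ci_net:
    type: OS::Neutron::ProviderNet
    properties:
      name: ci-flat-net
      network_type: flat          # no overlay, no tenant isolation
      physical_network: physnet1  # label mapped to the datacenter uplink
      shared: true                # usable by all projects
```

A flat network like this is exactly what breaks down when analytics teams ask for private tenant networks, which is why the talk mentions moving to Open vSwitch or another overlay.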
If you have a highly automated, efficient operation of your classic IT, you won't see that big benefit in the cloud, because you just move the effort around within your company. And it's not just an infrastructure project; it's a business project, and the whole company needs to benefit from it. What about the future? Cloud is not an end in itself, and I hope all the people here agree. It has to be aligned to the business needs. Don't do it just to do cloud. Don't do it just to be the man of the year in your company in front of your manager. No, do it because there's a benefit. And provisioning with OpenStack is only the first step; you have to continue. You have to create the services around it, and the people have to understand and live that new mindset. Then you've got a lot of possibilities with OpenStack. From a technical perspective, OpenStack is set; there's no other option for us at the moment. OpenStack is the right product to do it, but it's not a question of technology. Sorry to say that here at the OpenStack Summit, but to us it's just a tool. It's a great tool, but in the end it's a tool for the business. What do we have at the moment? The biggest thing at BMW is the acceptance. We don't care about the size, but people talk about it. We don't do any advertisement internally. People show up from shadow IT. They say: hey, we heard you've got something great, and we would like to participate. We've got another 200 VMs on some CAD workstations under the desk. How did you get that? But hey, they are showing up. They are coming to us. That's a completely new experience; in the past, we always had to push them towards central IT. At the moment we've got about 50 projects, especially from the new technologies, what we call the digitalization projects, car projects, connected car. At BMW, we are not just a car manufacturer anymore; we are a mobility service provider. So we are doing new stuff, a lot of it with mobile phones, with in-car communication.
The car is a sensor sending information to improve maps, to connect the cars, to warn each other. These people have a completely different mindset; most of the 50 projects come from that area. Our environment at the moment is 10 servers, 400 physical cores, four and a half terabytes of memory. We've got a Ceph cluster underneath, based on SUSE Enterprise Storage. And we really need to extend it, because people came in with their first projects needing 5 or 10 instances, and we thought: hey, they don't use that many instances. But when we talk to them, and we do this on a regular basis, because you have to understand what they are doing and what drives them, it turns out they are just preparing their APIs. And then they told us: oh, I've got another 500 at the moment on VMware, I would like to move them over; I've got another 100, another 200. So at the moment we are busy ordering hardware, preparing software, preparing the network. They are just waiting until they can really fire up their environments. And the use cases: continuous integration was the beginning. Now we've got a lot of smoke tests, where somebody would like to try something; back-end engines for these new dynamic loads; big data analytics. We still have the big physical big-data farms, Hadoop clusters. Great, InfiniBand-based. But for some workloads they would have to reinstall them for just 14 days of use, and that doesn't make sense. So they are using OpenStack as a front end now, as an overflow. Great. We really love how they use it. What's very important for us is that the community understands our needs. There has been some talk about what the right way for OpenStack is: is it hyperscale, or is it the enterprise part? I know enterprise is different, but I think from a business perspective there are a lot more regular enterprises out there than real hyperscale companies. We also have to work together. The enterprises have to provide information. We are not coding; sorry, we cannot do that.
But we can provide use cases. We can provide test cases. We can discuss with you: what are the possibilities? What are the real-world problems, and maybe the real-world solutions for them? How to integrate it? Where do we need something? Where is something already in place? Let's talk about that. The last points: first, interoperability. I think the OpenStack Foundation did the right thing with the interoperability check. This is really great, because we would like to be able to switch. We are replacing our server hardware vendor every two years, we are replacing the vendor of our storage hardware every five years, and so on. We would also like to be able to exchange the distribution if necessary. And then upgradability. That will be an interesting part at the beginning of next year. We don't know if it works, but we hope. Yeah! Any questions? Do we have a microphone? Great. Regarding the connected-car projects: are you running their workloads, the machine-to-machine applications, on top of OpenStack? No. We've got a combination of public cloud, or rather virtual private cloud, because it's not a real public cloud; it's at an external host, it's off-premise. Let's call it off-premise. We are combining off-premise cloud, on-premise cloud, and classic IT. So at the moment there are just parts running on OpenStack, but the front ends are better off in this off-premise cloud. When all the cars start driving in the morning, we cannot scale to that extent at the moment. We would like to see a lot of these projects, because it's just great to be part of that, but at the moment we are not running those on OpenStack. Any other questions? Feedback, annotations? Do you have any plan to integrate public cloud and your private cloud in the future? We will use both worlds and we will try to integrate them from a business perspective, but we will not integrate them, at least at the moment, into the OpenStack landscape. So there will be OpenStack for the private cloud, and there will be the off-premise cloud.
And if there are applications that can deal with both, they can run on both sides, but they have to take care of it themselves; we won't integrate it on an infrastructure level. Feedback: I think there's a feedback button in the app. Send me some feedback on what you think, because it's really interesting to hear what other companies experience. Maybe they have ideas we could leverage, or we could get into an exchange and say: hey, we had a solution for something, maybe you have a problem where it fits. Usually only service providers show up, and they always have solutions without knowing the problems. That one's just for the video. What about upstream contributions from BMW? As I said before, we are not able to do upstream contributions, because we are a huge team of two and a half people. We just cannot do it. Yes, it would be fun, but it's not possible; we are not an IT company. Our contribution is that we tell SUSE: hey, we've got some features we think a lot of people could use, could you make this part upstream? Sometimes it's small parts. I did something I just provided to SUSE, not upstream yet: I would like OpenStack to do a DNS lookup in the main DNS first before giving the VM a name, because as soon as we have the name in the pool, the instance gets the right name and not something like host-whatever. It helps us a lot. Yes, it's not real cloud, because with cloud you shouldn't care about the names, but we are still an enterprise. Okay, any more questions? Then, oh. How long did it take for you to deploy OpenStack into your environment? The deployment of OpenStack from an infrastructure perspective was done within two or three weeks. But it was not a technical issue. It's more: hey, I forgot this subnet. Hey, we need to switch off DHCP on the network side. Damn, I forgot to document something in a system to get access to something. So the main problems when deploying, for us, have been process things. It's the processes you just don't know you need to integrate with.
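The contribution mentioned above, looking up the pre-registered name in the corporate DNS before naming an instance, boils down to a reverse lookup on the allocated IP. A minimal sketch; the resolver is injectable here only so the logic can be shown without a live DNS, and the fallback behavior is an assumption, not the actual patch.

```python
import socket

def name_for_ip(ip: str, resolver=socket.gethostbyaddr):
    """Return the short host name pre-registered in the enterprise DNS
    for an allocated IP, or None if nothing is registered. The instance
    would then be named from this result instead of 'host-<something>'."""
    try:
        fqdn, _aliases, _addrs = resolver(ip)
        return fqdn.split(".")[0]
    except OSError:  # socket.herror/gaierror are OSError subclasses
        return None
```

In the pre-allocated-subnet setup described earlier, the lookup always succeeds, so the VM boots with the name the clients already know.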
And also for specific workloads. Internally we've got a process system for mail: you need to sign an SLA with our mail guys, where you just enter the IP address of your VM or your server, your name, your project name, and that's it. Then you can send emails from your instance. We didn't think about the fact that you need to enter the IP address there. The first people who would have liked to send emails: it didn't work. And you cannot just create the SLA for the whole subnet there. So sometimes you think you are very close, and then you find another issue within your landscape where they need to change something. But this is the way it goes. You start with 20%, and you start. Don't wait until you have 80% resolved. Just start. All the people could already work; even at 20%, some people had a big benefit. Trainings: one guy showed up and said, we've got a Cisco training, I need 20 VMs with a specific image. Impossible in classic IT. You won't get your images, you won't get it that fast, you won't get it where you need it, and you cannot control and reinstall it. OpenStack is amazing for that. Maybe it's not the most prominent example of using OpenStack, but it helps, and "it helps" is the answer to the "why". And that's about it. You mentioned you only have two and a half people. So is this like a brand-new team you spun off from the traditional IT to do this? Or did you transition the storage guys, the network guys, the systems people? I'm just trying to figure out how you started at the beginning. With the first project, we said it's important to reach the people, and you have to have them behind you, because they have to support you; that was because we had this deep integration into the old processes. Now, as I said at the beginning, I'm responsible for Linux architecture and solutions, which is classic, but I'm also responsible for OpenStack. There's another guy, so I'm not working 100% on OpenStack.
Sometimes it feels like it, but I'm not. Then there's another internal guy who came into the team from a subsidiary; he's working 100% on OpenStack. And there's one other guy who is just migrating some stuff from that subsidiary and also works with OpenStack. So those people know and understand our traditional IT. And I think that's a good point, because if somebody were completely new, with a new mindset, and he tried to send the mail, it wouldn't work. You need to know the issues of the past, and sometimes also why something was created. It would help to have more people; it would help to have more dedicated people. Of course we talk to the networking guys, of course we talk to the storage guys, but they are not part of the core team. And in terms of operations, we are now moving operations over to a hands-on provider, the provider who is also doing our Linux operations, because we cannot do 24-7 operations with two and a half people. So we will concentrate on architecture and solutions, and there will be a provider doing the operations. Okay, time. I think there's another session afterwards, right? They'll have to wait. So one last question, I guess. What are the supplementary tools that you use to support the OpenStack environment? What are the tools? We're integrated into Nagios. Reporting and visualization: good question; we would like to have something there. Other tools: we've got some old, well, not old, mature installation mechanisms. It's great. We've got bare-metal installation mechanisms which install a server from scratch, including firmware, BIOS configuration and everything, within 20 minutes. We don't exchange that. So we still use things out of the classic IT, but tools specifically for OpenStack, there are not too many. What we are really in need of is reporting. So basically you use really vanilla OpenStack, with no backup? Indeed. Backup is still on our list. We need some kind of backup.
We need some kind of shared storage, but not at the moment. Thanks. Okay, so thank you for coming, and don't forget the feedback. Thank you.