So, first of all, welcome to today's session. I understand our position here on stage: it's us against your lunch, so we'll try to make this as interactive as possible. We don't want to sell you anything, and we don't want to convince you of anything. We just want to show you our journey into OpenStack over the last two and a half to three years. Without any further delay, I would like to introduce my co-speaker, Ralph Dehner.

Hi, my name is Claus-Henning Capelle. I'm a senior program manager at SAP. Together with our implementation partner B1 we went on a fantastic journey over the last two years to make a strong move into OpenStack, and we would like to present this to you. Again, we'd like to make this as interactive as possible, so if you have any questions, please interrupt us at any time. At the end of the session we'll try to keep another five minutes for questions.

Before we begin: about three months ago you might have seen that SAP released a press release stating that SAP has committed to Cloud Foundry and OpenStack.
We are very proud of this. After we worked hard for two years internally to bring OpenStack live, to bring it into production and into a volume rollout, we were able to convince a couple of other people within the company that this is a good project to work on, and we are very happy that SAP, along with a bunch of other partners, is supporting OpenStack.

We've set up a short agenda for the next 30 minutes. We'd like to speak a little bit about both our companies: what we do and how we do it. Then we'd like to take you on a short trip into our company, into SAP: what we've done in terms of virtualization, what we've done in terms of cloud computing, and what constraints we have in terms of our software and our data centers. But then, most prominently, we want to speak a lot about OpenStack: about the project and, of course, how this whole OpenStack project integrated into our existing environment.

Thank you. My name is Ralph Dehner from B1 Systems. We were founded in 2004, so last month we had our tenth birthday. We are specialized in Linux and open source. At the beginning we started with high availability and virtualization, and over the last one and a half to two years we have done a lot of projects in the OpenStack area. We have partnerships with, I think, most of the leading solutions which are necessary in a data center. In this project we use SUSE's operating system, Puppet for automation, Nagios for monitoring and partly Arista network devices. As Claus-Henning told you, we are not here to sell you anything, so we will not do a lot of marketing. Does anybody know B1 Systems already? A couple of people right there.

Maybe most of you know SAP. In case you don't: our company had its 40th anniversary two or three years ago. We are the world's leading provider of ERP systems.
We have a globally operating organization. Hundreds of thousands of companies run SAP for very mission-critical business, with 13 million users in 120 countries worldwide. I think what's more interesting is what we, as the internal service provider of SAP, actually do for our development systems and for our customers.

In total we have more than 70,000 servers, out of which about 50,000 are virtual machines on two and a half thousand hosts. I didn't update this slide, so these numbers are from 2012. As you can imagine, these numbers continuously grow at a very high rate, so they are a lot higher today. Back in 2012 we had more than ten data centers worldwide, which makes us one of the biggest data center operators, not only in Germany but in Europe. With every acquisition we do and every new product we ship, the number of data centers, servers and VMs keeps on growing and growing.

This next slide is actually from 2007. I don't want to bother you with old slides, but I just want to show you that the complexity within our internal landscape is very, very high. We bring out a lot of new products, but at the same time we also need to support our existing operations.

Having said that, I'd like to explain a little bit how we try to manage this high level of complexity within our data center operations. In our internal business, and also the external business, we have roughly four pillars that we segregate all our demand into. On the left side we have what we refer to as the volume business. This is very generic, let's call it generic volume business; it's training and demo.
Here we have a high turnover of systems, very short-lived, with high provisioning numbers, but these are predefined landscapes. On the right side is what we refer to as the production business. Here we have, of course, our internal development systems; as a software company, this is the majority of our data center. But we also have business systems, and business systems are not only the internal SAP systems that we need to run our company, but all the external customer systems as well. Whatever you hear about, for example, HANA Enterprise Cloud or HANA Cloud Platform, hosting operations, cloud products, software as a service: it all falls into the production business systems.

What is very important in such a complex and fast-growing landscape is that you have a standardized data center infrastructure. This is what we refer to as building blocks. We have a predefined set of building blocks where we try to optimize the hardware for TCO and for the usage of an SAP system. As you can imagine, an SAP system, especially a HANA-based SAP system, is very memory-hungry. One of the largest pieces of hardware that we have in the data center has six terabytes per box and above; I think the smallest hardware we have is 256 gigabytes of RAM. This puts into perspective the number of physical servers compared to the number of VMs that I mentioned.

Next to the standardized data center infrastructure, which forms a uniform layer from the bottom, we have a uniform layer on the top, which is all the change management, automation, process design and workflow management. In ITIL terms you would probably say general service management on top, which manages all the different customers across all areas. This picture is very important to keep in mind, because when we started with OpenStack, almost two and a half years ago,
we had to somehow fit into that picture, and we'll come to that in a minute.

Let's talk really quickly about virtualization and previous cloud projects we did, and then we'll go into OpenStack. These are some of the projects we did from 2008 until 2012. I don't want to go into a lot of detail on these; we had a lot of Gartner engagements and a lot of early cloud adoption. We have a lot of VMware in-house, we actually have a lot of Xen and KVM in-house, Hyper-V, Solaris, what have you. We have a very diverse environment, but in general we can say that, next to VMware, we have a very significant installed base of Xen. I don't know if the split is 50/50, but these are the two largest players.

How did that journey start? Back in 2002, 2003, we at some point realized that the server growth in our data centers was so fast that we would reach a cap in 2007 or 2008: the maximum capacity in the data center. We said: if we continue to grow this way, we need to build a new data center, we need new electricity, we need to find a new location, maybe a new country, new operations teams, and you need a lot of lead time for that. So at some point we said: how about we start to virtualize, to avert having to build a new data center? We started very early with virtualization, already in 2003. By round about 2006, 2007, we certified SAP also for virtualized platforms, and we went into volume growth. This is round about when we started to standardize all the virtual building blocks that I showed you before in terms of hardware, and at the same time brought in standardized change management from the top. Today we have round about 70 percent, more than two-thirds, of all our servers virtualized.

I personally think there's a certain cap within a data center; you will probably never reach 100 percent, and probably for some workloads
it's good that you don't reach a hundred percent, because, for example, if you run large HANA in-memory databases in a productive environment, you want to keep them physical. But that's just a side remark. So we believe that at round about 70 to 80 percent we are already in saturation mode, where we don't want to grow much further.

As I said before, we have these building blocks that we run in our data center, and I want to spend just one second talking about the bottom part, the standardized building blocks. There is a small picture of one of our cloud data centers on the top right, very fancy and blue; I think the marketing department went there and installed some blue lights. Typically these are off. The bottom part is more interesting. For example, in our VMware environment we run clusters of 16 servers: 15 productive, one spare. Each server in the newest building block today has three terabytes of RAM, and as you can imagine, you can put a lot of virtual machines on these servers. Our average virtual machine is around 30 to 40 gigabytes, so we typically have 80, 100, if not more, virtual machines in this environment. It's important to say this because we want to keep the same standardized infrastructure for all platforms; we use the same for Hyper-V, for VMware and for Xen.

This was kind of the starting point for how we moved into OpenStack. We had a given set of constraints from the bottom: the hardware, the way our networks are set up with the VLANs, everything that came from the bottom, from the data center. But we also had a fixed set of change management and service management that we had to integrate into at the top, and this is our change management.
We have a cloud lifecycle management, which is kind of a workflow through which end users or self-service portals can request virtual machines. These can be SAP products, such as the one on the top left, SAP NetWeaver LVM (Landscape Virtualization Management), in case you use that product in your own data center to deploy SAP systems. Through an API, through the IT service management portal, they can request their VMs, and these are built automatically under the right cost center and so on. So for us it was important, when we introduced OpenStack, that we somehow integrated into the existing data center processes at that point. On the bottom, as I said, we have a CMDB and asset management, and we had a certain set of data center constraints that were given. We had an installed base of VMware, we had an installed base of Solaris, and here we wanted to productize Xen even more. This was our starting point for the OpenStack project.

In the very early days, how did we manage Xen? When we first tested Xen, a lot of it was script-based; there was not really a clear management framework on top. We had a console front end where you could easily request a VM, start it, stop it, and so on. We wanted to take this existing Xen infrastructure and integrate it into all the existing change management processes that you saw on the earlier slide. So our target was to build a private cloud on Xen and to integrate it into the internal IT systems: monitoring, reporting, APIs and so on.

We decided from an early point on that we wanted to use OpenStack. I think we did our first proof of concept on Bexar, and our first kind of pre-production use case on Cactus. We were very proud when we got to see the summit in San Francisco in 2012.
I think it was the second OpenStack event, and we saw there was a lot of traction in the market for OpenStack, a lot of people jumping on that train, because they were doing something really good. As I said before, we decided: let's go down this OpenStack route, let's use it for Xen, let's try it internally, and let's see how we can productize it.

We started together with our partners with a workshop. You can see some pictures we took of our whiteboards, because we keep them as a reference for what we discussed at that point. As I said, we have a global data center operation; at that point we had ten locations worldwide that we needed to hook up. So we thought: how can we use OpenStack in an environment which is so diverse but has standardized hardware? We came up with a certain CMDB data model and integration interfaces, we saw the need for certain custom developments, which I will explain later on, and of course we needed high availability and some load balancing.

So we came up with an OpenStack architecture for ourselves. It is probably not too far away from any other OpenStack architecture that you will see. What was important for us is that we built some low-level high availability into it. You can see on the top that everywhere it says LB with a star; these are just simple load balancers, and this was the first kind of high-availability concept that we had. What is maybe important, and which came out as SAP-specific: we had to make Nova compute and Nova network scalable in such a way that we could actually use them for such big hosts, and in a performant manner. The second point is that, because of the way we run our business, we have two NFS storages attached to each host. We don't use local file systems;
we only use NFS. We do a lot of live migrations, and we need a lot of flexibility in lifecycle management on the hardware, so we needed to be able to shift VMs easily from one node to another. We came up with this concept, and it required some custom developments on top of standard OpenStack.

So what were these custom developments? I want to go through them very quickly. VM persistence is important for us, because many times our internal and external SAP customers need to start, stop and reboot VMs. As far as I remember, by default in OpenStack, at least in the early days, when you rebooted a VM, I think it got deleted; there was no real reboot.

The second point was that we needed some kind of network range extension. We have a very diverse network within SAP; we needed to be able to extend networks and extend IP ranges. Also, the way we manage our networks is not a greenfield approach; it's an environment that has been running for a couple of decades. So we had to do some custom development on the way we manage networks in OpenStack.

Datastore load balancing, which I talked about, covers the two NFS storages that we have attached to each host. Next to that, live migration, to be able to easily shift VMs from one place to another. Along with live migration, we have a very simple button in our Horizon dashboard which says "evacuate host", so we can do hardware maintenance, firmware upgrades and all of this good stuff. Dashboards are not really an OpenStack custom development, but they are something we additionally install on the hosts to collect monitoring and performance information, so that we can run overbooking in a different fashion; I have some screenshots later on which I'd like to show you. VM resizing somehow goes along with VM persistence.
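The "evacuate host" button described above amounts to live-migrating every VM off a node before maintenance. A minimal sketch of that planning logic, with all function and host names invented for illustration (this is not SAP's actual code), could look like this:

```python
# Hypothetical sketch of an "evacuate host" plan: live-migrate every VM
# off a node so the hardware can be serviced. `vms_by_host` stands in for
# what an OpenStack client would report; none of these names are real.

def plan_evacuation(vms_by_host, source_host, spare_hosts):
    """Return (vm, target) pairs that empty `source_host`.

    vms_by_host: dict mapping host name -> list of VM names
    spare_hosts: hosts in the same live-migration zone with capacity
    """
    if not spare_hosts:
        raise ValueError("no target hosts available in this zone")
    plan = []
    for i, vm in enumerate(vms_by_host.get(source_host, [])):
        # spread the VMs round-robin across the remaining hosts
        target = spare_hosts[i % len(spare_hosts)]
        plan.append((vm, target))
    return plan

if __name__ == "__main__":
    hosts = {"node01": ["vm-a", "vm-b", "vm-c"], "node02": []}
    for vm, target in plan_evacuation(hosts, "node01", ["node02", "node03"]):
        print(f"live-migrating {vm} -> {target}")
```

In a real deployment each pair would then be handed to the compute API's live-migration call; the sketch only shows the placement decision.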
Many times our internal or external customers come up with a hardware sizing, and then all of a sudden they realize: hmm, I need more RAM, or I need less, or I need more CPUs. So there's a constant struggle to resize these VMs, and that's something we built into OpenStack as well.

Let me see if I have a picture of this. We have a certain set of hardware; this is just a snapshot from early 2014 of how we run our infrastructure, just the numbers of all the hosts. And we have what we call live migration zones. At the top you will see LMZ 01, 02, 03 and so on. Within these live migration zones the CPU types are the same and the NFS storages are compatible, so you can live-migrate between the hosts easily. What happens, however, if one live migration zone is full, or for whatever reason you need to migrate a VM into a different data center or a different network? Here we need some kind of offline migration functionality, and that's something we built into the product as well. So that covers offline migration between availability zones, and host evacuation we talked about.

As a result, at some point, I think after two and a half to three months, which is actually quite fast, we had a running OpenStack; we were quite astonished, and we had more time than we thought we needed. We were very proud, and then we flipped the switch, literally flipped the switch. I remember that in the very first four hours there was a kind of run; somehow everybody had been waiting to use OpenStack, and within four hours there were 200 VMs on there. We were actually quite astonished. We went live, I remember this, on the second of April.
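The placement rule described above, live migration only within a zone of identical CPU types and compatible NFS storage, offline migration otherwise, can be sketched as a small decision helper (zone and host names are hypothetical, echoing the LMZ naming from the slide):

```python
# Hypothetical sketch of the live-vs-offline migration decision: within
# one live-migration zone (same CPU type, shared NFS) a VM can be moved
# live; across zones it has to be migrated offline.

def migration_mode(zone_of_host, source_host, target_host):
    """Return 'live' if both hosts share a live-migration zone, else 'offline'."""
    src = zone_of_host[source_host]
    dst = zone_of_host[target_host]
    return "live" if src == dst else "offline"

zones = {"hostA": "LMZ-01", "hostB": "LMZ-01", "hostC": "LMZ-02"}
print(migration_mode(zones, "hostA", "hostB"))  # same zone, so live
print(migration_mode(zones, "hostA", "hostC"))  # cross-zone, so offline
```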
It was not the first of April. I think it was some Easter weekend or something, and I remember a developer called me and said: "We thought we would need 16 hosts for the whole year, but actually the 16 will probably be full in the next two weeks. Can we reorder hosts?" And I said, well, I have to speak with my manager to order new hosts. Over the rest of the year we actually went very quickly to 120 hosts, where we had thought the demand for the whole year would only be 16. So somehow a great success. In the very beginning we did not have three-terabyte hosts, we had 512-gigabyte hosts; now the newest generation is three terabytes.

Maybe in the very beginning especially, but even today, we have an automation rate of 100.0 percent, no failures, and this is of course a great success rate, especially in such a complex landscape. As I said before, high acceptance from the customers. To our benefit, there was a large migration project happening side by side. I think some company starting with an O, Oracle, which is a competitor to SAP, had at that point bought Solaris, and there was some discussion internally about whether to use Solaris or not, and then some developers decided to try out Xen and move away from it. Long story short, there was a certain commotion within internal development groups where they said: hey, you know, let's try a different infrastructure, let's try OpenStack. This was of course an opportunity for us to show what OpenStack can do and how well it performs. As I said before, we
I saw some talks yesterday where they said wow we have 100 hosts We have 150 hosts By now we are we are looking more towards 400 and we are talking minimum 512 gigabyte hosts more towards I think on average we have 1.1 terabyte right now The newest generation will have three terabytes As you can imagine SAP applications are super memory hungry So we need a lot of RAM that we are pushing in And I'll come to that in a second what implication that also has for us as an IT operator to try to Optimize costs further because we need to work with our customers to really find the correct sizing for RAM This is our infrastructure that we have today Couple of hosts are missing Especially our our new Icehouse is not Really on here yet. So this is what we have Today on Folsom we started first productive usage with ESSEC we did an upgrade to Folsom Running rock solid and now we are starting to prepare for an upgrade to ice house But I think we we still have some internal discussions Whatever really want to use ice house or because Juno was released, you know move right into Juno immediately Supposedly upgrade ability is better now. So I still have to do some discussions internally On the very right side you see VSC on the top right We will talk about that later on these are virtual system clusters This is our new ha concept that we have so we've adapted that continuously as we gain more knowledge into the infrastructure Talking a lot you have to interrupt me by the way. Otherwise, this is going to go on Go ahead. Yes That's a good question. So the question was I talked a lot about memory. What about the CPU good point Compared to other customers or to let's call it standard web hosting scenarios the number of Of course does does not play the the first bottleneck in an SAP environment, especially on such a large scale For us we noticed that many times. It's more the memory that we run into bottlenecks very quickly And less the CPU. 
You can even say that on average we are five to six times less CPU-core-hungry than a standard web hoster.

First of all, I'm not a technician, I'm a program manager, but if I can rephrase your question: we do CPU overbooking heavily, as you can imagine. Our newest generation of hosts has 15 cores per socket and four sockets, so we have a total of 60 physical cores on the main board. Again, I'm not a technician, I'm just trying to recall this from memory. We have hyperthreading enabled, so we already overbook by a factor of two; that gives us 120 somewhat-physical cores available. The virtual CPUs that we provision to the customers we limit to 480, so we have an overprovisioning factor of four, and if you count hyperthreading as overprovisioning of factor two, we actually have an overprovisioning factor of eight. In some internal scenarios we go to an overprovisioning factor of 12, with all pros and cons.

We're not specific to any vendor. We have a lot of vendors in-house: a lot of HP, a lot of IBM, a lot of Dell, Fujitsu, everything. You give us hardware at the best price, we take it. But remember the memory.

Okay, so some of the stepping stones we took, and some of the, what do you say in English, Stolpersteine, stumbling blocks we faced. First of all, we had some problems with big tenants on Nova: some of our tenants have two, three, four thousand VMs, and the performance, especially for an administrative user, gets very slow at some point. We had a certain problem with partitioned images, sparse images and thin-provisioned images, but maybe these were more related to the hypervisor and less to the actual management on top. Network configuration is a tricky topic, because typically, as I said, we don't like to use DHCP.
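Going back to the overbooking figures quoted a moment ago, the arithmetic works out as follows. This is just a worked example of the numbers from the talk, not production code:

```python
# Worked example of the CPU overbooking figures quoted above:
# 15 cores per socket x 4 sockets = 60 physical cores; hyperthreading
# doubles the schedulable threads to 120; the vCPU budget per host is 480.

physical_cores = 15 * 4           # 60 physical cores on the main board
threads = physical_cores * 2      # 120 schedulable threads with hyperthreading
vcpu_budget = 480                 # vCPUs provisioned to customers

print(vcpu_budget / threads)         # overbooking factor vs. threads
print(vcpu_budget / physical_cores)  # factor vs. physical cores (HT counted as overbooking)
```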
We need to have a static IP configuration so the systems can talk to each other. So we had to find a way to provision an IP address on first boot using DHCP, but then have a script that changes this to a static configuration. VM persistence we talked about. Support of bridged network devices in Nova: a good point; I don't know who put that on my slide, but there was probably something around that, so I'll have to talk to the guys later on.

Problems with the DHCP lease time: this was one of the first problems we had. At some point DHCP becomes a single point of failure. If, for example, the DHCP service goes down, or the host which runs the service goes down (at that point, I think, this was on Neutron), then after 3600 seconds, or whatever the lease time is, VMs start to lose their IP leases, and this was of course a huge problem. We tackled that problem twofold. First, we have a local persistency on the VM: the first-run script takes the IP address it has and saves it on the local machine, so even if the central DHCP service is not available, the VM still knows its IP address. Second, we actually change the IP address to a static configuration and make sure that OpenStack will only ever grant this VM the same IP address. So we have a tiered approach to solving the DHCP problem. I'm not sure, but I think under Quantum, the successor product, it's already fixed.
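The twofold fix described above, persist the leased address locally and then rewrite it as a static configuration, might be sketched like this. The ifcfg-style file format is an assumption (a SUSE-like convention), and the function is invented for illustration, not SAP's actual first-boot script:

```python
# Hypothetical sketch of the first-boot DHCP-to-static conversion: take
# the address the VM received via DHCP once and render a static network
# configuration, so the VM survives a later DHCP outage. The ifcfg-style
# key names below are an assumed (SUSE-like) format, not SAP's real file.

def render_static_config(ip, netmask, gateway):
    """Render a static interface configuration from a one-time DHCP lease."""
    return (
        "BOOTPROTO='static'\n"
        f"IPADDR='{ip}'\n"
        f"NETMASK='{netmask}'\n"
        f"GATEWAY='{gateway}'\n"
        "STARTMODE='auto'\n"
    )

# On first boot the script would read the leased address and write this
# file to the interface configuration directory; here we just print it.
cfg = render_static_config("10.17.4.23", "255.255.252.0", "10.17.4.1")
print(cfg)
```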
It's already fixed, and thinking back, that was on Folsom. Support for sparse images, passthrough, and so on and so on. But overall, I think the Folsom environment is running rock solid to this day, and fortunately we had so little to do with it that we can concentrate fully on anything that goes towards Icehouse and beyond.

A lot of the things that we noticed we were able to push upstream. You will notice later on that SAP and B1 are among the top contributors: even though the size of our project, compared to other companies here in the room, is fairly small, we are, I think, one of the top ten contributors in the whole environment, especially in the Juno release.

Bringing this picture back to memory really quickly, I want to show you how we then integrated OpenStack, once it was running, into the standard SAP change management. Of course, the most important service when you create a new infrastructure is creating VMs, so "Create Xen VM" was the first and most important service that we brought live. I had a very practical, hands-on approach. I said: look, guys, give me the OpenStack API, show me exactly what attributes you need (this is actually from our project plan), and let's make sure that we receive these attributes from the change management system, and let's work backwards. So we took OpenStack, we took the API, we asked what attributes we need, and we created a UI from that, very easy, or rather, we adapted an existing UI. Here we have six attributes: the first two are user and password, so that's simple, and the next four are the ones we worked on.

Then we created a workflow around that, because of course you cannot provision a naked OpenStack VM. You need to retrieve values.
You need to create host names Using a host name generator according to the convention that we have internally at SAP You need to have an intelligent placement of sort of resources. You need to put it into into certain resource pools into certain availability zones Then you need to register all of this in your DNS. You need to create CMDB entries You need to create billing records. You need to you need to do a lot of stuff You need to create users to make OpenStack Enterprise ready and use it actually within your company. So just deploying a VM That's easy to do but using a VM in an enterprise where you have 70 100 120,000 BMS That's the tricky part and that's why we had to create this Workflow around it and of course satisfying the demands of the security department is is a whole other story So we we had to create actually before we give out any VM into internal or external customers We patch it according to all Up-to-date patches There's a whole CMDB model associated to using OpenStack That we run internally But I think I don't want to bore you with IT service management details. What's what's most important is that next to create VM We started deploying a lot of other Life-cycle services that you need to run an infrastructure. So here you have reboot create users add disks You can read the slide probably by yourself start stop There's a lot of things you need to do to actually use OpenStack in such an environment then So once we had a considerable amount of VMs the next question is how do you manage that huge number of VMs and here we Together with the operations teams we then thought how can we then further optimize that from also from a TCO perspective How can you further optimize? So and also how can we guarantee customer performance if you overbook CPUs so heavily if you use so much memory and Where's the best TCO for SAP? So what we did is on the top right? 
On the top right you will see the dashboard we created. I'm a business major, so my interest is always to bring down TCO and meet the timelines. My biggest worry was: we have a lot of guys requesting 192-gigabyte VMs; do they really need 192 gigabytes? When we actually looked into this, I said: can't we create a small script that retrieves not only how much memory is deployed, but how much memory is actually being used? The same with the CPUs: can we somehow measure CPU performance, from both the hypervisor and the VM perspective, to make sure these guys have enough performance?

What you see here is a very simple thing. We have a lot of collectd agents; we measure the load, we measure I think 27 or 30 values continuously, we aggregate them and push all of this into a dashboard. For me, the most important part is the top right: you see the configured RAM, what's free, and, over the course of 30, 90 and 180 days, the maximum and average RAM usage. This is an important point, because sometimes an SAP system can be configured with 192 gigs of RAM which it never uses, but once you do a quarter-end closing, you need those 192 gigs. So you need to measure these values over a longer period of time. And let me tell you, there are only very few customers who have actually really oversized their VMs. Even with the VMs that have a lot of RAM which they never use, sometimes you will see very sharp peaks when they really need this memory, and then it goes down again; this is exactly what we can measure with this dashboard. What we as a team do once in a while is look at the maximum RAM usage and the maximum free RAM over the course of 30, 90, 180 days, and we target those customers specifically and say: look, why don't you resize, or let us overbook the RAM in a different way?
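The right-sizing check described above, comparing peak usage over a window against configured RAM, can be sketched like this. The 50 percent threshold is an invented example, not SAP's actual rule:

```python
# Hypothetical sketch of the right-sizing check: given memory-usage
# samples over a window (e.g. from collectd), flag VMs whose observed
# peak never comes near their configured RAM. The 50% threshold is an
# assumption for illustration.

def oversized(configured_gb, samples_gb, threshold=0.5):
    """True if peak usage over the window never reached
    `threshold` of the configured RAM."""
    if not samples_gb:
        return False  # no data: don't flag the VM
    return max(samples_gb) < threshold * configured_gb

# A 192 GB VM that peaks at 60 GB over the window is a resize candidate;
# one that spikes to 180 GB at quarter-end closing is not.
print(oversized(192, [40, 55, 60, 38]))
print(oversized(192, [40, 55, 180, 38]))
```

This is why the talk stresses measuring over 30, 90 and 180 days: a short window would miss the quarter-end spike and wrongly flag the second VM.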
So this is very valuable information for us. At the same time, for example, this bottom part here is SmokePing: with SmokePing we continuously measure latency, packet loss, all sorts of things, to really make sure that customers get the best performance, CPU performance and so on and so forth. This dashboard is also available to the customers; yes, I didn't mention that, it's self-service, so customers can actually look for themselves. Internal customers, that is, internal departments, because we deliver our services internally to other departments, not to customers on the open market; let's say it that way. But I think from a technical perspective you can say even more to that.

Any questions at the moment?

So, as Claus-Henning already told you, we don't use only OpenStack; we also use some other open source solutions. Here you can see the logos on the right side. The first important thing was our virtual system cluster. With Icehouse we have now implemented the infrastructure components in a highly available mode: all the management services run in virtual machines, and those virtual machines are made highly available by a Pacemaker-based virtual system cluster. This is a three-node cluster solution, which means that if one of the hosts goes down, or a service goes down, Pacemaker automatically restarts the virtual machine on another node, and so the infrastructure components are highly available. In the Folsom environment we don't have high availability; I think we are starting to upgrade the environment in that direction, but for Icehouse we are really going to use it productively from the first moment on. High availability for the virtual machines themselves, for the instances, we don't have at the moment, only for the infrastructure components.

Yes, we use Logstash for central log collection. Do you know Logstash?
So from each of the multiple hundred hosts we collect all the logs on a central server, and then we can do different evaluations and reporting, and we can go into the past: every piece of information that is interesting, we can check there.

For automation we use Puppet. That means if we get a new host, the new host is installed fully automatically with Puppet, and so we can grow the OpenStack environment very fast. Everybody, I think, knows Puppet; Puppet and OpenStack are often used in this combination. And for the monitoring itself we use regular Nagios, or rather, we don't have a full Nagios environment; we use only the parts which are interesting for us, and then with the dashboard we see the main information that matters to us.

Good. Yeah, the virtual system cluster I already described, and Puppet-based centralized management, I think everybody understands by now. New with Icehouse: we also use the Cinder volume service. In Folsom we used local storage; now with Icehouse we use Cinder volumes. Beyond that, there are all the standard features that come along with Icehouse. Yeah, and some of our custom developments were fixed upstream, so they are not necessary anymore. At the moment we have the Icehouse environment running in parallel, and we are thinking and planning how we migrate from Folsom to Icehouse, because a rolling upgrade is not possible, or not so easy.
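A sketch of the Puppet side, with entirely assumed class and host names, just to show the idea of a new host installing itself into the environment:

```
# Hypothetical node definition: any freshly provisioned compute host
# matching this pattern is configured fully automatically by Puppet.
node /^compute\d+\.cloud\.example\.com$/ {
  include ::nova::compute      # OpenStack compute role
  include ::collectd           # metrics feeding the capacity dashboard
  include ::logstash_agent     # ships logs to the central Logstash server
}
```

With a pattern like this, growing the environment is mostly a matter of racking hardware and letting the Puppet run converge the new node.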
We have different ways we can do the migration from Folsom to Icehouse, or perhaps we go directly to Juno. But I think next year we can tell you how we migrated from Folsom to Icehouse or Juno.

What we have now implemented in the Icehouse environment, and I think you know it better than we do, are some new features which are really interesting for enterprise environments. This CPU pinning was an SAP special request, I think. Yeah, actually, if you want, I can say two words on that. We have a performance measurement department which continuously measures the performance of SAP systems coming from development, and what they've requested from us is to not do CPU overbooking, so that they can really measure the performance on exactly the same hardware. So specifically for this department we created a new tenant with a new availability zone, with a new live migration zone, that has hardware which we don't overbook in terms of CPUs. We use a one-to-one core to vCPU ratio, and no other customers are allowed on this environment, just to make sure there's no disturbance on the network, on file servers, and so on and so forth. So this is especially for SAP performance measurement; it is part of our quality gate process of the QGP department. And I think the go-live was just last week, on Tuesday or Wednesday.

Good. Yeah, I told you already: the upgrade to Juno, this decision is already done; I just put that up here. We now have a well-working infrastructure-as-a-service platform, and now we will also go towards offering more platform as a service. For this we will use Heat. Everybody knows Heat, I think; with it we can also go into the VMs and do some configurations automatically, and this is one of the features which we will use in the future. As for the migration problem, we talked about that already.
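The one-to-one core to vCPU ratio for that tenant can be expressed, for instance, through Nova's scheduler allocation ratio on the dedicated hosts. A simplified sketch of the Icehouse-era option, not the actual SAP configuration:

```
# nova.conf on the dedicated performance-measurement hosts:
[DEFAULT]
cpu_allocation_ratio = 1.0   # one vCPU per physical core, no CPU overbooking
```

The default is much higher (16:1 in that era), which is exactly the overcommit the performance team asked to switch off.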
So on the migration: at the moment we're not sure how we'll do it. We have two different ways we can bring the virtual machines from Folsom to Icehouse, but at the moment there is no decision.

Good. Before we move to the next slide, let me give one sentence. What's really important is that we don't use OpenStack for every Xen host. At SAP we currently still have, I would even say, more Xen hosts outside of OpenStack than inside of OpenStack. And what's very important is that, from a company perspective, these are existing infrastructures that existed even before OpenStack itself existed. These are, if you want to call it that, more platform-as-a-service use cases. So right now we've kind of rolled out OpenStack bottom-up: we've collected a lot of internal demand, which is pure infrastructure as a service, and here we positioned OpenStack very well.

As you saw earlier in the press release, SAP has also committed itself to OpenStack. So what's happening right now, and I have absolutely no announcement I want to make, but it's clear from the press release, is that SAP is moving more and more into OpenStack and also into platform as a service. What you will most likely see over the course of the next months is that OpenStack will most probably be used more and more internally, and also for customer deployments that go towards platform as a service. And there you will have a multifold of the volume that we have today for pure infrastructure as a service. This is something that is happening right now. So there's a certain shift where new OpenStack installations are emerging that probably even go beyond the scope of our pure IT operations, that go more to the development organization. So here we are starting, also SAP-internally, to go more into, let's call it OpenStack DevOps, who use OpenStack within platform as a service and IT operations. Go ahead.
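To illustrate what Heat adds on top of the pure infrastructure-as-a-service provisioning mentioned earlier, a minimal HOT template sketch (image, flavor, and package names are assumptions):

```yaml
heat_template_version: 2013-05-23
description: Sketch only; boots a server and configures it from inside.
resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: sles-11-sp3        # assumed image name
      flavor: m1.medium
      user_data: |
        #!/bin/bash
        # configuration inside the VM, run automatically at first boot
        zypper --non-interactive install tomcat6
```

This is the step beyond plain VM provisioning: the template reaches into the instance and performs configuration automatically, which is the platform-as-a-service direction described above.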
So our end customers today go through a standardized portal, the one you saw earlier with the workflow. Here they really have a GUI where they can click with their mouse what kind of VM they want to have, but this is pure infrastructure as a service. This is a pure infrastructure-as-a-service portal. You can say: do I want a VM with Windows, or do I want one with Linux? How many CPUs do I want to have, how many hard drives do I want to have? Someone asks: so it's not going to set it up as Tomcat, only the operating system? Correct, pure infrastructure as a service. But that is the shift that we're moving into. Internally there's another area side by side with us which uses Chef for that purpose, and we have a couple of other SAP tools which have been used very prominently in the past, for the last eight or nine years. What we're trying to do, and this is more an internal IT strategy, is really to get all the different automation platforms, all the different workflow automation platforms and runbooks, and collect them into one more unified stack. And here OpenStack is going to play a significant role.

Time-wise we are over; as I said, it's us against your lunch. We're going to keep on going if you don't stop us. Any questions at the moment?

Sorry, let me do quick math. We had the go-live two years ago, so by now it is round about eight to ten percent. You're helping us; you can come on stage, because it's the next part
we're going to talk about. But I'll let you go first very quickly, and then I'm going to answer your question.

Yes, we built up the OpenStack environment and we do all the support for it. How many people do we need? I think four to six people are necessary for the operation of the full OpenStack environment.

So I can answer that question very easily. We have a level-one team in India, which is a shared-service team of just 20 people, but this shared-service team runs, I think, five or six different clouds internally. You can say on average we have about one and a half people in that shared-service team, 24 by 7, that run only Xen. In addition to that we have a 24-by-7 on-call duty with our implementation partner; here, eight-by-five people are on site, and for the rest of the time we have an on-call duty and escalation path where support can be gathered. In addition to the level-three team, we have, I don't know if you want to call it, a level-four or level-five team, which is more on the development organization side; as I said, the move into platform as a service and SAP-specific workflows. How do I put that? The level-one, level-two, and level-three teams do everything from the hardware into Xen into OpenStack, because OpenStack integrates so tightly at some point into Xen and libvirt and whatever you have technically; you need to have one team that manages both. But when it comes to OpenStack and the SAP application and the way that SAP uses the hardware, we need a more development- and blueprint-centric team, and here we have super specialists, experts that have the deepest knowledge of how SAP uses the hardware and what the provisioning workflows are, because they are the ones that also develop the platform as a service. What's important to note is that B1 as a company is both within the level-three operations team and within the blueprint team, so they are there on both parts.
It's architecture and also operation. Any questions? Good.

Yes, that's a good question. Let me repeat the question, I'm sorry, for the people in the audience: the question is, do we grant our customers access to Horizon directly? The answer is yes and no; it depends on how you define customer. For us as an internal IT department, our immediate next customer is another internal department. Right now, for the pure infrastructure-as-a-service stack, yes, we are starting to grant access into Horizon directly. For the platform as a service, this will probably be integrated into another SAP provisioning tool.

Thank you, that's a very good question. There's a lot of internal debate on that: would we consider this, running an OpenStack environment, a managed service contract, or would we want to have this knowledge in-house? The answer is both. I'm saying that because nowadays infrastructure as a service is becoming more and more a commodity, and that's actually why we're doing OpenStack, right? We're trying to somehow find an alternative to public infrastructure-as-a-service offerings such as Amazon and others, but at the same time you're trying to safeguard your knowledge in-house. So what we do is both. We have a level-one organization, and we have an implementation partner
that's the expert on OpenStack, on Xen, on Linux, and that's who I'm presenting with; we're very proud to co-present this together. But at the same time, once the infrastructure is set up, we bring the knowledge in-house into our level-one and level-two organization, and we have a managed service for L3 and the continuous development of OpenStack. At the same time we push that knowledge into the level-four and level-five organization, which at that point again is SAP-internal; these are our internal development departments. So where we really have a managed service contract is, from an OpenStack perspective, actually the most significant part: the OpenStack development and the OpenStack operations. Here we have a very well functioning partnership and, as a result, a very well functioning managed service contract, and we do all the training for the first and second level, so that over time more and more processes can be covered by them.

I don't know who was first. I'm going to go with the right side, to switch sides. So the question is whether we plan to have all the virtualized workloads on OpenStack in the future. I don't know yet. We have a very, very large VMware installed base. From a strategic perspective we're positioning OpenStack in such a way that OpenStack is going to manage the whole stack, VMware included, but this is probably going to take a lot of years to evolve.

So the question is: we're very IOPS-hungry, are we utilizing OpenStack's storage for that or are we using our own? First of all, again, I'm not the architect; I'm happy to put you in touch with the blueprint team, who are actually here in the room. That's definitely an answer that we can give you quickly, but right now, at least for the infrastructure-as-a-service stack, we're using plain NFS, and we're hooking that up to whatever EMC or NetApp storage we have in the data center.

I love that question; it's my favorite.
It's my favorite question because I do all the TCO calculations for that specifically, and yeah, that's exactly my target. I did a TCO analysis; the last update was about 12 months ago. If you consider a workload which runs 24 by 7 for one month, then internally you're about a factor of 8 cheaper than the non-negotiated cheapest external prices. However, if you have a workload that runs 8 by 5, and maybe for two weeks in a month, you're going to see a break-even at some point.

Our biggest problem as an internal IT organization, which I hear about at every Gartner summit I attend, is that many IT organizations, including ours, can only do billing on a monthly basis. We're a cost center, which means that we do cost recuperation, which means that we zero our balance sheets at the end of every month, which means that if we have low utilization, we still charge a whole month for a VM even if you only use it for one hour. Now that part is changing slowly, but there's a lot of process adaptation that you need to run internally, from a company perspective as a whole. Moving away from a per-VM perspective: if you're able to deploy a VM for an hour, take it back after an hour, and run it as internal demand, it is a lot cheaper. We did the economics.

Like every company, we also at some point ran into public clouds such as Amazon, where people start swiping credit cards and then reimburse themselves using travel expenses and all sorts of weird stuff that you have out there. So we're in the same boat as everyone else. But when you try to get that under control, it's not really about prohibiting, saying you are not allowed to reimburse yourself via travel expenses; it's really about finding an alternative that you can have internally. And how we were going to bill for that is a whole different question, so we started by not billing at all.
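The break-even effect described above can be sketched with made-up numbers (the factor-8 figure is from the talk; the rates below are purely illustrative):

```python
# Illustrative rates only; not SAP's actual prices.
INTERNAL_MONTHLY_RATE = 100.0   # internal IT bills a flat month per VM
PUBLIC_HOURLY_RATE = 1.5        # public cloud bills per hour consumed

def internal_cost():
    # One full month is charged regardless of utilization.
    return INTERNAL_MONTHLY_RATE

def public_cost(hours_used):
    return PUBLIC_HOURLY_RATE * hours_used

# A 24x7 workload uses ~720 hours a month: the flat rate wins by far.
always_on = public_cost(24 * 30)
# An 8x5 workload used two weeks a month consumes only 80 hours; here
# hourly billing approaches or beats the flat monthly charge, which is
# the break-even the monthly-billing model runs into.
bursty = public_cost(8 * 5 * 2)
```

The underlying point is the billing granularity: as long as a whole month is charged per VM, short-lived workloads look artificially expensive internally.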
We said: just have it for free. Anyway, that's a thousand times cheaper than pushing all the demand into Amazon, for example, or any other cloud. This was one of the projects of our CIO: no charges, no billing, just take the infrastructure you need and give it back as soon as possible. And then the usage of external clouds somehow went down very quickly.

Any other questions? If not, just in case you're German-speaking and interested in reading more about this: I did not give my input to this book, the guys wrote it themselves, I only wrote the foreword. But a lot of the experiences that we made with the Folsom release and beyond is in this book. You can order it on Amazon; the guys just released it, I don't know, a month ago, and it's an interesting read. In case you still have additional questions, feel free to reach out to us. Thank you very much.