 OK, welcome everybody. Thank you for joining us for this session. Today we will talk about enterprise applications running on OpenStack, and the experiences we had onboarding them and running them there. My name is Gerd Pussmann. I'm a cloud architect at Deutsche Telekom, where I've been for three years now, and I led the development team that built the first production OpenStack platform for Deutsche Telekom. Thanks, Gerd. Can you guys hear me at the back? If not, please come forward. It was a trap. Yeah. Well, I'm Sriram Subramanian. I'm founder and principal cloud specialist at CloudDon. CloudDon is an SI and research-and-analysis firm, a combination of services, but we primarily started as a deployment service provider on OpenStack, and then I branched off to offer analysis and market-analysis services too. I've been involved with the OpenStack community since the Diablo days. I started early on as an operator. Between the Diablo days and now, we have done more than 50 small- and medium-sized deployments, with a reasonable variety of workloads. What we're going to present here is this: Gerd will focus on the enterprise workloads that they've had success with. What challenges did they face? What was the journey to getting them successful on an OpenStack cloud? But before that, I want to set the stage around what kinds of workloads people have been running on OpenStack. So by show of hands, how many have workloads running in production on OpenStack? Very few. Dev/test? About the same. What kinds of workloads are you running? What do you run? Dev/test. OK. Someone showed production. What do you run in production? OK, good. Very good. The reason I'm asking is, I don't know if you noticed the most recent user survey, but web services, databases, and dev/test are still the top three workloads, and it's a bit historical.
And then, if you compare the November 2014 user survey with the most recent one, those are predominantly the workloads people have been running. The good sign is that more cloud deployments are moving into production, but there's still a big question: OK, I have the infrastructure up and running, but what is a good workload that I can run successfully on OpenStack? What is something I should not take there? And if there's a specific workload I'm interested in, what steps do I need to take to migrate it to an OpenStack cloud? How many of you run SAP workloads on OpenStack? OK, don't raise your hand, even if you do. Good, thank you. So the agenda is that I will start with a brief history, walk through some of the workloads, and then follow up with some of the methods people have been using to get their workloads up on an OpenStack cloud. I will keep it generic. I won't focus only on enterprise. It could be dev/test, it could be a combination, it could be anything, just looking at the history, reading a lot of use cases and keynote presentations, talking to developers, and drawing on our own experiences. And then some high-level lessons learned. Again, it's going to be fairly generic. Then I'll pass it on to Gerd, and he's going to talk specifically about their infrastructure and some of the workloads they have been running. Going with the history: obviously, in the very early stages, people were looking to spin VMs up and down. Naturally, a lot of workloads were compute intensive, with a lot of computation. Typically, your dev/test is a good case there. And again, at the inception, service providers were a large use case for OpenStack. Not necessarily with a large storage backend, maybe a Cinder backend, but primarily a lot of VMs.
Around the same time, users like CERN picked it up, with a lot of HPC, high-performance computing workloads, a lot of computing applications. Along the same timeline came very early adopters like eBay and PayPal; you might have seen the keynote today. Right now, 100% of PayPal's transactions run on their OpenStack cloud. They began with some of the web services, some of the shopping-cart experience, those kinds of applications: more compute-oriented, fairly independent workloads. Around the same time, Swift was relatively mature among all the projects, so people had success storing a lot of objects, a lot of files, a lot of media applications. Conquer, DigitalTree, a lot of use cases that you'd have seen in past summits and even until now. That's been a very large, successful use case. Sorry, you got a question? Sorry. And then things matured. I'm talking about the days when we didn't even have Neutron, when there was no networking service, and then we were in the mess that was Quantum. Quantum became Neutron, Neutron got better, and overall the products started getting better. We started having monitoring, orchestration, and those kinds of other features. Around the same time, there was interest in workloads that are compute intensive plus have a lot of storage requirements, like big data analytics or bioinformatics. Those were some of the applications people have had success with. And right now, there's a service-provider wave too. There's a lot of interest from the telcos, a lot of noise about NFV applications. Comcast mentioned that they have a lot of NFV applications on their OpenStack cloud. So this is kind of a generic lay of the land. I'm not going into what the stack of the application is.
What are the languages? Is it a LAMP stack, or a Java-based application, or WebRTC? I'm not getting into those details; I'm just giving you context on the kinds of applications people have been running on OpenStack. So along these timelines, how do we get an application on? If I have an ERP application, if you have a CRM application, if you have an email server, can I move it to an OpenStack cloud? Does it make sense? Or, if I have a new kind of application, can I write it on an OpenStack cloud? Those were some of the questions. For those who are lucky and have so-called cloud-aware, cloud-native applications, what I call the unicorns, there's no question. It's very straightforward for you. They use programmatic APIs, they are fault tolerant, they are stateless or nearly stateless. There's no question; I don't even have to talk about this, you'll have success there. But for the majority, who don't have those kinds of applications, what would you do? You can port, or you can rewrite: a complete rewrite or a partial rewrite. What's more interesting is that Gerd will give more details about a specific application, which parts they rewrote and which parts they retained. Those decisions are case by case, but these are a few of the techniques you can follow. Other than rewriting and porting, there are other ways you can move your application. Maybe you can package it smartly. You have container technology here; you can probably split your tiers and move them into different VMs. That's what I call smart packaging, but it's going to be trial and error; unless we have the application in hand, it cannot be a generic technique.
You need to have experience with the application, and then you do that. Finally, there are some applications you just don't want to run there; back off. Even SAP I wouldn't recommend, though people have been running it there. So these are the high-level techniques people have been using. There is no one hard-and-fast rule here. I cannot take one big hammer and say, OK, I'm going to rewrite all the applications to move into the cloud, or I'm going to port everything. That's not going to happen, and it doesn't make sense. So let me summarize. If you look at your application and your infrastructure, the mistake a lot of early adopters made was: OK, I want to go to an OpenStack cloud, I've got it now, I have my application, so maybe I can move everything into the OpenStack cloud, I need to move everything into the OpenStack cloud. Some people burned their hands with that. We don't want to do that. The better approach is: I have this application portfolio; what kind of application is it? What kind of infrastructure does it need? If it needs an OpenStack or cloud infrastructure, it makes sense to port it over or migrate it. If it doesn't, just leave it alone. It can keep running on your Solaris server if it needs to. I've had clients with a combination of applications, some of which run on Solaris and need to run on Solaris. There is no way I can move them out. And the second thing is, there is no one rule I can apply, as I mentioned before. It's all case by case. Take the application; you know the application best. What is the nature of the application? There are high-level principles and high-level patterns you can apply, but again, there is no single rule. You need to go case by case.
And a few other things, not specific to OpenStack. There are people who don't understand cloud architecture well. They might think: OK, I'm running on bare metal; maybe I can provision a VM, just lift my application and run it on that, SQL Server on a large instance, or a very large, extra-large instance, and can I call that done on a cloud? That doesn't make sense; the economics won't work out, and it's not even cloud-like. I don't expect people here to make that mistake, but there are cases where people have tried it. Don't do that. And finally, it's OK to leave something out. As I mentioned, if an application is not fit to run on the cloud, let it run where it is. Let it run on bare metal, on a virtualized environment, whatever it is. So, summarizing: take your application, start from the workloads, figure out what infrastructure is suitable for that workload, and go with that. I want to stop here and pass it on to Gerd. What I did was more theoretical, with some anecdotal evidence, but now you will see what applications they're running, what the stack is, what the lessons learned are, and what steps they took. Thanks, Gerd. The enterprise applications we are talking about today run on the Business Marketplace of Deutsche Telekom. The Business Marketplace is a web portal where end customers of Deutsche Telekom can order and book software applications, seats of these applications: one seat or 50 seats, for example. In general, this is a software-as-a-service offering. The software offered on this platform is either from ISVs, software partners of Deutsche Telekom, or from DT itself. The target group of these applications is small and medium-sized enterprises. Deutsche Telekom currently has around 3 million customers of this size.
The cloud platform below the Business Marketplace, below the web portal where the applications are running, is based on open source technologies only. We are using OpenStack, Ceph and Ubuntu Linux, for example. The project to set up this platform started in 2012, and we were able to finish the setup by Q1 2013. Since then, we have been in production with this platform, and as of today, this is the first OpenStack production system of Deutsche Telekom. But it complements other platforms that DT has, for example big enterprise platforms based on VMware, and SAP HANA: we are running an SAP HANA platform with 2.9 million users. It also complements other OpenStack platforms that are currently either being set up or already running, for example Cisco InterCloud, or OpenStack-based platforms for research, internal IaaS offerings, or an NFV platform. This is an overview of the applications and workloads currently running on this platform. We have an enterprise social network and enterprise cloud storage; I will talk about both of these applications in more detail in a few minutes. We have ERP systems, CRM systems, enterprise content management, a lot of different workloads. And as you can imagine, all these applications are completely different from each other, not only with respect to the business case, because each is a different application for a different target group, but also with respect to the technologies used to create and run the software. The size of the tenants is different. In resource usage, we have tenants with only five VMs, for example, up to tenants with 100 virtual machines running. The ability of these applications to scale, or to scale elastically, is very different as well. And the number of users on these systems also ranges from a few users up to thousands of users.
And one important topic: the effort you need to invest to run, operate and maintain these applications is very different, and they have very different levels of cloud awareness. Some of them are really pet applications, and some of them are more cloud-like or cloud aware in some cases. For these applications, we offer the ISVs two different service models in general. The first one is the managed model. In this model, our operations team is responsible for the tenant, for the resources, for the platform, and for the software and the application as well. So the operations team will also do the upgrades and the installation of the ISV's application. In the second case, the hosting model, the operations team is responsible up to the level of the operating system of the virtual machine, for the upgrades, security upgrades of the operating system of the virtual machine, for example. From there on, the ISV is responsible for application updates, maintaining the application, and restarting the application if it fails, for example. Obviously we provide the cloud resources, the tenants and all the virtual machines. We also provide the ISV with reference tenants and production tenants, so the ISV is able to test his application, his software, first in the reference tenant before it goes to production. To be able to run these applications inside a tenant, you need to provide supporting services like VPN access points, load balancers, proxy servers and services of that kind. We offer different databases to the customers: MySQL, PostgreSQL and MongoDB, for example, and in one case Informix. And, very important: backup and monitoring. Monitoring is very important for the operations team and for the ISV as well, but backup is very important too. Who are the tenants here, internal customers or?
No, external customers. We create the tenants for the ISV, but the customers are external customers. And we offer individual support for onboarding, because without our help, the ISV wouldn't be able to integrate his application into the cloud; at least that's our experience. The first workload I would like to talk about is a business cloud storage, comparable to Dropbox or Box. This application was developed by DT, and it's an enterprise-grade secure online storage located in German data centers and subject to German data security and data privacy laws. It offers a web app, a mobile app, and a PC sync client, so the customer is able to synchronize local files from his PC or mobile device to the cloud and do backups, for example. This application is integrated into some other applications on the Business Marketplace from other ISVs, so a customer who ordered one seat in one application and another in a second application is able to save data in one application and open it in the other again. This application is currently bundled with every mobile business user contract of Deutsche Telekom in Germany. The application consists in general of common service applications, for example Apache web servers. At the core, it's a Java application on a Java application server. It uses messaging servers and transcoders; for uploaded images, for example, we generate thumbnails and different sizes of the image. In the back end we have MongoDB servers, and we have load balancers in front of the application. Before we onboarded this application, it was installed in a traditional hosting data center on huge servers. It was a traditional three-layer installation with a few web servers, the application servers, and a MySQL master-slave database in the back end, and it used DRBD for replication in the back end.
It consumed very expensive storage appliances, and it didn't use any configuration management, so if we wanted to scale out this application, we had to manually install a server, bring up all the software and the application, and then connect it to the rest of the system. It was not dynamically scalable. After the onboarding, we had a dynamically scalable tenant on OpenStack. What we did, in fact, was put load balancers in front of each layer of this application: in front of the web servers, the application servers, the databases, all interfaces, and the Ceph S3 service as well. We deployed the application on default KVM virtual machines and put multiple MongoDB servers in the back end of this application. All persistent volumes in this application are hosted on Ceph RBD back ends, and we replaced the NFS shared storage that was used in the past with a Ceph S3 back end. The configuration is now managed by Puppet, so when we start up a new VM, it's automatically configured via Puppet. So what's the toolbox we used in the end to bring this application to the cloud? It was necessary to rewrite part of the application. For example, the functions that were responsible for storing data on the file system had to be rewritten to use S3. We introduced MongoDB in the back end and had to change parts of the application to achieve this. We introduced the load balancers, so we deployed the application differently, and we introduced automatic installation: when a VM comes up, the application is automatically installed on the virtual machine. Very important is the scalable and highly available storage in the back end. This application is the biggest consumer of our cloud and uses a big part of our Ceph cluster. The result in the end was a scalable enterprise application on OpenStack. The second workload I would like to present is very different from the first one.
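Before moving to the second workload, the storage rewrite described above can be sketched roughly as follows. This is an illustrative reconstruction, not Deutsche Telekom's actual code: the class name, key layout and bucket name are all assumptions. The idea is simply that the functions that used to write to a local file system now put objects against an S3-compatible endpoint (here, Ceph RADOS Gateway), with the client injected so it can be swapped out.

```python
# Sketch of the storage-layer rewrite: local file-system writes are
# replaced by puts against an S3-compatible backend (e.g. Ceph RGW).
# All names here are illustrative, not the actual DT implementation.

class S3FileStore:
    """Stores user files as S3 objects instead of local files."""

    def __init__(self, s3_client, bucket):
        # s3_client is any boto3-style client (put_object / get_object).
        self.s3 = s3_client
        self.bucket = bucket

    def _key(self, user_id, path):
        # Old code opened "/srv/files/<user>/<path>" on an NFS share;
        # new code maps each user file to one object key.
        return "%s/%s" % (user_id, path.lstrip("/"))

    def save(self, user_id, path, data):
        self.s3.put_object(Bucket=self.bucket,
                           Key=self._key(user_id, path),
                           Body=data)

    def load(self, user_id, path):
        obj = self.s3.get_object(Bucket=self.bucket,
                                 Key=self._key(user_id, path))
        return obj["Body"].read()
```

With boto3, the injected client would be something like `boto3.client("s3", endpoint_url=...)` pointed at the RGW endpoint; the endpoint URL and bucket are, again, assumptions for the sketch.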
It's an enterprise social network application developed by an ISV, and it's in fact a private cloud offering to companies, small and large, normally installed on premise in the data center of the customer. The ISV needs to do a lot of customization on this application when he installs it on premise for a customer, because companies have a lot of individual requirements for this application. So what he did is use multiple single instances of this application for the private cloud customers, now on our platform, with just one VM per customer tenant. Each VM contains all the applications and services necessary to run the application, and it has access to the databases and the Ceph cluster in the back end, for example. So every new customer gets a new tenant with a single VM. What we did to onboard this application was partly the same as for the first application. In this case, the ISV implemented the changes necessary to access the Ceph S3 back end. We introduced configuration management with Puppet, and the ISV developed a kind of small service, a cloud manager, that starts and stops single VMs. For example, if an end customer logs into the system, the VM of this tenant will be started, and when the last user logs out of the system, the VM will go down and stop. This cloud manager and this concept are portable to any cloud platform. It can run on OpenStack, and it can run on other cloud platforms as well. So the toolbox in this case was to partially rewrite the software, to adopt the S3 storage, for example. Additional development was necessary for this small application, the cloud manager. And we did smart deployment with a kind of containerized virtual machine that contains all the necessary services and applications. And we accepted the limitations of this concept.
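The cloud manager just described can be approximated as a small session counter around the compute API: boot the tenant's VM on the first login, shut it down after the last logout. This is a hypothetical sketch, not the ISV's code; the `compute` object stands in for any client exposing `start_server`/`stop_server` (openstacksdk's compute proxy has this shape), and the session-tracking details are assumptions.

```python
# Minimal sketch of the per-tenant "cloud manager": the single VM for a
# customer is started when the first user logs in and stopped when the
# last one logs out.  Everything here is illustrative, not the ISV's
# actual implementation.

class CloudManager:
    def __init__(self, compute):
        # compute: any object with start_server(id) / stop_server(id).
        self.compute = compute
        self.sessions = {}  # server id -> number of active user sessions

    def on_login(self, server_id):
        count = self.sessions.get(server_id, 0)
        if count == 0:
            # First user of this tenant: boot the VM
            # (it starts in under a minute, per the talk).
            self.compute.start_server(server_id)
        self.sessions[server_id] = count + 1

    def on_logout(self, server_id):
        count = self.sessions.get(server_id, 0)
        if count <= 1:
            # Last user gone: shut the VM down to free resources.
            self.compute.stop_server(server_id)
            self.sessions[server_id] = 0
        else:
            self.sessions[server_id] = count - 1
```

Because the compute client is injected, the same logic is portable across cloud platforms, which matches the portability claim in the talk.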
So scalability and availability are not as high as in a distributed deployment with many virtual machines, obviously. But one of these VMs starts in less than one minute, and that's acceptable for the ISV and the customers as well. So what are the lessons learned? After three years of onboarding these kinds of applications, I'm pretty much convinced that there is no cattle in enterprise applications. This is only about pets. If you want to onboard these kinds of applications, you are talking about pets, in every case. You could say cattle you tend only once a day, and then you're done for the day. Maybe we have some species in between. The first application I talked about, the business online storage, is scalable now; maybe it's more like cattle with a burden. It's not really cattle. For example, the MongoDB servers run on specific hosts to be able to run those huge VMs. So there is a kind of burden in this case, but in the end it's mainly all about pets. And this is specifically true for industry-specific software, software for specific verticals. These applications have a specific target group of customers, and that is a limited number of customers and a limited number of users. These are not the Googles and Facebooks of this world; they are not offering their services to millions of users worldwide. So cloud has a very different meaning for them. Cloud native, for these ISVs, means a lot of investment is necessary to rewrite their software and adopt all the cloud technologies to be cloud native in the end. And there's no business case for this, no return on investment, so they will not do it. And this is true for applications from Deutsche Telekom as well. I recently saw a list of around 500 internal systems in Deutsche Telekom, and we will never rewrite all these applications to bring them to the cloud. We will not have the money to do that.
The majority of these applications are not well prepared for the cloud. You can onboard them, but you must be prepared that they are stateful, not really cloud aware, and not using DevOps technologies. So in general, to onboard these applications you have increased overhead. You have to put a lot of effort into this; you need a project to set these applications up together with the ISV. You need know-how from the ISV, and the ISV needs know-how from you, to bring the application onto OpenStack, or onto the cloud platform. From our experience, a good tool set is to at least do a partial rewrite of the software to adopt some of the technologies, to change the deployment and the architecture of the tenant and the application, and especially to introduce configuration management and automatic installation. And you have to change a lot of processes, especially the processes for maintaining these applications, because normally the ISV has its own operations team that installed the application manually on premise for a customer, for example. So they have to learn these new processes as well. With this, I hand over to Sriram again. So, Gerd, I have a question for you. After you did all these things, did you guys block Dropbox and Facebook? No? Good. So we were trying to give you an overview of what applications real enterprises are running. Keynote after keynote, summit after summit, we hear a lot of use cases. I don't know about you, but I always have questions: what do you mean by web services exactly, what are the applications you're running, what is the language stack, what is the architecture? So our hope was to give you a preview of this kind of take on the applications an enterprise is running and the journey they took. Believe it or not, there are some customers who are running .NET applications.
So the point here is that there are some techniques here. What we want you to take away is that there is help available, that there have been successful use cases and success stories. If you want more help, there are two working groups you can reach out to. The App Ecosystem working group: you might have seen the app catalog getting released in the keynote, so you can reach out to the App Ecosystem working group to learn more about cloud-aware applications. The Win the Enterprise working group is doing a lot around best practices and how to bring enterprise features like HA and more resiliency into OpenStack. You can always refer to the Superuser website to learn a lot more about what other users are doing, and of course there's the white paper on cloud-aware applications. But the bottom line, everybody, is: don't be scared. If you have an application, take the application, think about how you can migrate it to an OpenStack workload, and you can always reach out to the community for more help. Thank you. You can ask him, or ask us, any more questions if you want. A question for Gerd. The first enterprise application you spoke about: you obviously went through the effort of cloudifying the application to get it onto OpenStack. Did you consider or evaluate putting it in a public cloud versus a private OpenStack cloud, and if so, why did you choose to go private with OpenStack? It's for him, for you. We put it on the private cloud because this was the concept of this production system. We are offering software from ISVs on this platform, and we don't have a public cloud. So this is the system that we have, and this is the portal that we have.
So it was the easiest way to onboard the application, and we could use the different other systems connected to it, like the portal itself, provisioning systems, identity systems and all this stuff. It was very easy for us to leverage that. I want to add one more thing. I think this is the bigger debate of private cloud versus public cloud. We are at the OpenStack Summit, so of course I would want you to be on OpenStack. But as I mentioned earlier, look at what is suitable for your workload, and if a public cloud is the best one for you, please go with that. You don't have to be philosophically or personally tied to any infrastructure. OK, good. So everybody's happy with the workloads. Good, thank you everybody. Thank you very much. Thank you for staying very late. Thank you so much. Have a good night.