Guess I'll go ahead and get started. My name is Pascal Jolie, and I'm a functional architect with the HP Software Cloud Solution Lab. Today I'm going to discuss application deployment in a hybrid cloud. Hybrid cloud has become something of a buzzword these days; my definition of it is the deployment of IT services across a private cloud and a public cloud to take advantage of economies of scale and distribution of risk. Just to get a show of hands, how many of you have already deployed an application in production in a hybrid cloud? OK, very few of you. So what I would like to do today is cover the few topics you see on this slide, and then I have about 10 minutes for Q&A at the end of the session. Also, to head off the mean-spirited jokes about Portland weather: for me, Portland weather is just full of surprises.

Now let's discuss the user story of a typical deployment in a hybrid cloud. Initially we have this scenario of a demanding manager, Peter, who needs this application deployed for a demo and orders it right away. He calls Stanley in IT. Stanley is not available, but Mr. D, whom you see in this picture and might recognize, is available, and he's one of the sharpest knives in the drawer. So what Mr. D does is go into the service catalog, pick an application, upload a few software packages into the software definition library, publish the design, and deploy the application, initially in a private cloud. You see here the application we used when we prototyped this: an Apache load balancer as the front end, an application tier with Tomcat servers, and a simple MySQL database in the back end. Now a few weeks go by, and Stan, or Mr. D, notices that the application monitoring thresholds have been crossed and it's about time to add an additional Tomcat tier. So the IT engineer just goes into the console and adds a new tier. A few clicks, and here you go.
You have a tier that's published and available now in HP Cloud Services, the OpenStack-based public cloud, with the same Apache load balancer still in the front end. We've all heard this story before, and now we have a few questions we'd like answered. The first one is: how do you make this reusable? It's very nice to do that for one application, and we'll go over the steps of what it takes for one application. But what if you have to do it again for another application? And what if you have to deploy to another public cloud environment, not the one you initially deployed to? That takes a lot of work. There are other questions about performance: how does the application behave now that it's remotely distributed? About scalability: how do you scale this application? And there are security concerns and capacity management.

So let's go over the steps it would take, in a very simplified diagram, to deploy this application initially in a private cloud environment: deploy the database tier, then the application tier, configure the security on each tier, deploy the monitoring, and add workers to the load balancer. Then you have the bursting component, which is the set of steps to deploy to the public cloud and then to configure, back in the private cloud, the firewall and the database security, and add a worker to the load balancer. As you see, there are many steps here, and the concern is that while you could do everything with simple scripts, it's going to be a lot of work if you want to make it reusable. So let's look at the solution architecture that we used in this deployment. Initially, you design a service.
You design a service, which is an abstraction layer that you want to reapply many, many times, and then you publish this service to a service portal. From the service portal, a user can request the service, and then, using the orchestration engine in the back end, the service will be deployed. Initially you see a deployment in the private cloud, with monitoring added on top. When the bursting comes along, it's a similar scenario: in an automatic bursting scenario, the monitoring triggers the bursting, and the orchestration engine adds a new tier in the OpenStack public cloud environment.

In our prototype, what we used is the HP suite of tools that some of you might be familiar with: Cloud Service Automation for the service design, service portal, and orchestration engine; Server Automation as the software deployment and configuration tool, otherwise known as the software definition library; and, as the OpenStack public cloud, HP Cloud Services. For monitoring, we used the SiteScope tool, which is an agentless monitoring tool, and initially, for the private cloud, we used VMware. For those of you who are more familiar with open-source tools, I've done a mapping to a few of them: Puppet Labs, for instance, Nagios for monitoring, and Eucalyptus for orchestration.

Now, the next step is: how do you design the service? What we want here is an abstraction layer that allows you to configure multiple layers in a single pane of glass. So we configured the private cloud environment; you see a database tier component here and a load-balanced application tier, which could be several tiers. Once we burst into the public cloud, we add an additional load-balanced tier here with the same environment, and you'll see that you could also burst into an EC2 public cloud. We did that in our prototype as well, from exactly the same service design model.
So what does it mean when you look under the hood at the execution of this service model? Each component of the service model, for instance the application tier, has a binding to a list of resource offerings, and those resource offerings, as you see on the screen, could be: deploy a VM on the OpenStack layer, deploy an agent to the virtual machine, execute a software policy, configure it. Each one of these resource offerings has a lifecycle of its own. The lifecycle tells you when each action will be executed at different stages of the service. Not only do you have the deployment stage, but you also have the stage where the application is already deployed and you want to start, stop, restart, or patch it, and then you have the undeploy stage where you just want to delete your VMs. That's how you would move your VMs from one environment to another; for instance, if you wanted to move all your VMs from a private cloud environment to a public cloud environment, that's what you would use. Each action bound to this lifecycle is essentially an API wrapper that executes the API in the back end. In the end you execute, for instance, a Nova compute API call, you create a VM, and then you move on to the next step.

So what does it mean to interact with an HP Cloud Services instance, especially in comparison to a vanilla OpenStack instance that you'd get from DevStack? There are a number of features that you would look for, and the first one is a guaranteed SLA. Each component has a lot of redundancy built into the infrastructure, and that gives you a guaranteed SLA that can match the SLA you have in your private cloud environment. The second part is choices: you want choices to deploy in different zones, different regions, and different geographies, and that's what this public cloud solution offers you.
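The lifecycle binding described above can be sketched as a small dispatch table. This is a minimal sketch, not the actual CSA data model: the stage names, the `ResourceOffering` class, and the stubbed VM calls are all illustrative assumptions (a real deployment would wrap Nova API calls instead of returning dicts).

```python
def create_vm(name):
    # Stand-in for an API wrapper around a Nova compute call
    # (e.g. booting a server); here it just returns a fake record.
    return {"name": name, "state": "running"}

def delete_vm(vm):
    # Stand-in for the undeploy-stage action that deletes the VM.
    vm["state"] = "deleted"
    return vm

class ResourceOffering:
    def __init__(self):
        # Map lifecycle stage -> ordered list of API-wrapper actions.
        self.actions = {"deploy": [], "manage": [], "undeploy": []}

    def bind(self, stage, action):
        self.actions[stage].append(action)

    def run(self, stage, *args):
        # Execute every action bound to this stage, in order.
        return [action(*args) for action in self.actions[stage]]

offering = ResourceOffering()
offering.bind("deploy", create_vm)
offering.bind("undeploy", delete_vm)
vms = offering.run("deploy", "tomcat-tier-1")
```

Binding actions to stages this way is what makes the same offering reusable for deploy, day-two operations, and teardown.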
And finally, you want some level of security that's also built in, with automatic intrusion detection and scanning that you don't have to worry about, so you have pretty good confidence that the images you save on the public cloud will have some security added to them.

Now, what are some of the lessons learned from this integration? One of them was that we had to deal with the management of the SSH private key. If some of you are familiar with deploying VMs in public cloud environments, one of the first things you do is associate a key pair, and when you create that key pair you have to save the private key and put it somewhere safe, because you're not going to be able to retrieve it from your public cloud. If you have to deal with multiple keys, because maybe you have one per tenant, for instance, then you have to find a way to manage them so that they are available to the next application that needs them: for instance, my software definition library, Server Automation, which needs to deploy my software stack on top of that VM. So that was one of the items we had to deal with.

The second one was that having many different zones available gives you the opportunity to deploy your application tiers, if you have several of them, in different zones; if you know in advance which zones are redundant with each other, then you can add redundancy to the application tiers deployed in the public cloud.

And finally, we had to deal with error checking. One thing we noticed is that when you try to map properties between the private cloud tool that executes the orchestration and the public cloud properties, such as images and key pairs, you don't necessarily have control over the public cloud properties. So you have to build some type of error checking and error reporting so that you get to a clean state when there's a mismatch between the private cloud and the public cloud.
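That key-management lesson can be sketched as a simple file-based store: save the private half of each key pair at creation time so later automation can reach the VMs over SSH. The class name, file layout, and per-tenant naming are illustrative assumptions, not how Server Automation actually stores keys.

```python
import os
import tempfile

class KeyPairStore:
    """Keep the private key saved at key-pair creation time, because
    the public cloud will not let you retrieve it again later."""

    def __init__(self, directory):
        self.directory = directory

    def _path(self, tenant, pair_name):
        return os.path.join(self.directory, f"{tenant}-{pair_name}.pem")

    def save(self, tenant, pair_name, private_key):
        path = self._path(tenant, pair_name)
        with open(path, "w") as f:
            f.write(private_key)
        os.chmod(path, 0o600)  # private keys must not be world-readable
        return path

    def load(self, tenant, pair_name):
        # Later consumers (e.g. the software deployment tool) fetch
        # the key by tenant and pair name to reach the VM over SSH.
        with open(self._path(tenant, pair_name)) as f:
            return f.read()

store = KeyPairStore(tempfile.mkdtemp())
store.save("tenant-a", "burst-key", "-----BEGIN RSA PRIVATE KEY-----\n...")
key = store.load("tenant-a", "burst-key")
```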
We also looked into using dynamic properties, so that you dynamically pull from the public cloud provider at subscription or deployment time. And finally, to supplement the console of the public cloud provider, we made extensive use of debugging tools like the Nova client.

So now that you have debugged and your solution is finally ready to go, you're going to start end-to-end testing and demos. And once you do the demo, that's when you hit some major performance roadblocks, because you're going to deploy a software component into a remote cloud and it takes forever. The solution here is to use a distributed architecture for your software definition library. It turns out that HP Server Automation supports that type of distribution, with a core master server that you keep in your private cloud and a satellite component that you can have in your public cloud environment and keep as an image that you deploy right before deploying the application. The benefit of this architecture is that you not only increase performance but can also take advantage of caching: there's caching at the satellite server level, and all the target VMs in one public cloud environment would be tied to one satellite. If you have other public cloud environments, you can in turn deploy another satellite there that manages the VMs locally. So it's a double benefit, plus the fact that the security between a core and a satellite server is greatly simplified: the number of ports is reduced.

Now, what about flexing? The two questions you want to ask are: when do I flex, when do I add a new tier to my application, and how many do I add? But before you get started on that, you have to think about quotas. Quotas are extremely important here, because you're dealing with a public cloud that in theory has infinite capacity.
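The error-checking and dynamic-property lessons can be sketched like this: pull the provider's catalog at deployment time and validate the service design against it before deploying, so a mismatch fails cleanly instead of leaving a half-deployed service. The catalog here is a stub dict standing in for live API queries (e.g. listing images and key pairs via the Nova client); the image names are made up.

```python
def fetch_provider_catalog():
    # In practice this would be pulled dynamically from the public
    # cloud provider at subscription or deployment time.
    return {
        "images": {"ubuntu-12.04-server", "centos-6.3"},
        "keypairs": {"burst-key"},
    }

def validate_design(design, catalog):
    """Check that the properties the service design references
    actually exist on the public cloud side; return a list of
    human-readable errors (empty means the design is deployable)."""
    errors = []
    if design["image"] not in catalog["images"]:
        errors.append(f"unknown image: {design['image']}")
    if design["keypair"] not in catalog["keypairs"]:
        errors.append(f"unknown key pair: {design['keypair']}")
    return errors

catalog = fetch_provider_catalog()
errors = validate_design(
    {"image": "ubuntu-12.04-server", "keypair": "missing-key"}, catalog)
```

A non-empty error list gives the orchestration engine something concrete to report back, which was the point of the error-checking lesson.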
That's not true, of course, if you're the business manager with a limited budget to deal with, so you want to implement some kind of quotas. They could be by tenant, and a tenant could be mapped to your LDAP organization. We did that in our implementation, where each user subscribing to a service, an application as a service similar to this one, is limited to a certain number of instances they can deploy. After that, you'd have to go through some special approval process before you can deploy additional tiers.

Now that you have your quotas, you can define the thresholds. There are actually three ways to trigger your application bursting. The first one, which is not very sexy and which you don't necessarily think about, is the manual trigger. But this is usually the most controlled one, and it applies in many organizations; that's how we got started with this. It also lends itself better to a change management process, for instance; you have a more controlled environment. Now, if you have an SLA that dictates that you burst and flex within certain time limits, then you'd want either scheduled bursting, where you forecast when you're going to have peak demand (it could be right before Christmas: you have a sales application and you're going to get lots of requests, so you schedule the flexing ahead of time), or, finally, bursting based on a threshold, where the threshold is driven by some type of load on your system.

And how much do you add once you've crossed your threshold? This is a very good question, because there could be many VMs you'd want to add in your public cloud environments, and this is going to be driven by the business logic. Where does the business logic reside?
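A minimal sketch of the per-tenant quota check described above, assuming quotas keyed by tenant name and a simple approval flag standing in for the special approval process (all names and limits are illustrative):

```python
def can_flex(tenant, running_instances, quotas, approved=False):
    """Return True if the tenant may add one more application tier.
    Under the quota, flexing is allowed; at or past the quota, an
    explicit approval (e.g. from change management) is required."""
    limit = quotas.get(tenant, 0)  # unknown tenants get no capacity
    if running_instances < limit:
        return True
    return approved

# Each subscriber, mapped from the LDAP organization, gets a limit.
quotas = {"sales-dept": 5}

within_quota = can_flex("sales-dept", 3, quotas)
at_quota = can_flex("sales-dept", 5, quotas)
after_approval = can_flex("sales-dept", 5, quotas, approved=True)
```

The orchestration engine would call a check like this before every flex, which is what keeps a theoretically infinite public cloud within a finite budget.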
The business logic could be a simple rule that you have in a database, but eventually the monitoring tool, the monitoring framework, is going to execute that business logic and tell my orchestration engine: you've crossed the threshold, and this is what I want you to do. This is an important consideration, because there is very tight linkage here, and when you think about frameworks like Ceilometer and Heat and how they're going to play together, there's an opportunity for those rule engines to host the business logic; it has to reside somewhere.

And finally, something that's very often overlooked: all the change management considerations. If you have an orchestration engine, you'd want to include the approval process and approval tools like Remedy or Service Manager, you'd want to include some type of notification, and finally, a CMDB. Maintain the state of your application in the CMDB, and at flexing time you'll be able to record how many tiers you have at which level. That's another integration we supply with HP Cloud Service Automation that would be important to add to your overall ecosystem to make this solution work in a production environment. In a test environment you don't need to worry about all this, but when you move to production, that's when it takes on greater importance.

And finally, let's discuss security. Security, as you know, is always a balancing act. I have privacy concerns about my data, and that's why enterprises don't move all their assets directly into the public cloud. They'll keep some assets in the private cloud environment, and that's why we have hybrid cloud in the first place. So we'll have database components that reside in the private cloud environment, and then application tiers that will potentially reside in the public cloud environment.
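The handoff from monitoring to orchestration can be sketched as one rule evaluation: the monitoring framework runs the rule and hands the orchestration engine a concrete instruction (what to add, how many, where). The rule shape, field names, and sizing formula are all assumptions for illustration, not how any of the named frameworks structure this.

```python
def evaluate_bursting_rule(metric, rule):
    """Business logic the monitoring framework executes on each
    sample: return None below the threshold, otherwise an
    instruction for the orchestration engine."""
    if metric["value"] <= rule["threshold"]:
        return None  # nothing to do
    overload = metric["value"] - rule["threshold"]
    # Illustrative sizing: one extra tier per 'step' of overload,
    # capped so the rule can never exceed the quota-driven maximum.
    tiers = min(rule["max_add"], 1 + int(overload // rule["step"]))
    return {"action": "add_tier", "count": tiers, "zone": rule["zone"]}

rule = {"threshold": 80, "step": 10, "max_add": 3, "zone": "az-2"}
decision = evaluate_bursting_rule({"name": "cpu", "value": 95}, rule)
```

Keeping the rule as data (here, a dict) is what makes the business logic reusable across applications rather than hard-coded in the orchestration flows.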
Now, you could have many of these tiers, but what matters is the automation you build around the layered security. Every time you add an application tier, you have to register that tier with the database: that's an application-level security configuration. Same thing with access lists, so that only this database can communicate with that application tier; this is done dynamically. Same thing for iptables: all those VMs, on Linux, will come with predefined iptables rules that you have to tweak to exactly match the protocol flows you're going to have to deal with, first with your management applications. Here you see that monitoring, the software library, and the execution engine all need to communicate with my application tier, and then there's the application flow itself. So this is the second level of security. You can add a security group that's predetermined, if you know it in advance, and associate it to your VM, but eventually you have to deal with all those flows.

One important feature that came up recently with OpenStack is the virtual private cloud, and this is a critical way to create isolation for this application tier, so that you can open precise holes in your corporate firewall for the traffic going back from the application to the database. Otherwise you don't control the IP space, and you're going to have to open essentially a giant hole through your firewall.

That brings me to the next slide: what are the opportunities coming with new OpenStack projects when you think about integrating with an OpenStack public cloud? The first one I just mentioned: controlling and isolating your application tier, and actually all your application tiers, multiple VMs, within the same virtual private cloud. That's going to be a key enabler, and it's now possible since Folsom with the OpenStack Networking project, since I'm not supposed to say Quantum anymore.
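The layered flows above can be sketched as a generated allow-list: when a new application tier comes up, emit one rule per management flow plus the database back-channel, so the firewall matches exactly the protocol flows and nothing more. The sources, ports, and rule format are made-up illustrations, not the actual prototype's configuration.

```python
def rules_for_new_tier(tier_ip, flows):
    """Build the allow-list applied dynamically when an application
    tier is added; everything not listed stays blocked."""
    return [
        {"src": source, "dst": tier_ip, "port": port, "action": "ACCEPT"}
        for source, port in flows
    ]

# Assumed flows: each management application, plus the database
# back-channel to the private cloud. Ports are illustrative.
flows = [
    ("monitoring", 8888),       # agentless monitoring probes
    ("software-library", 443),  # package pulls from the satellite
    ("orchestration", 22),      # SSH for lifecycle actions
    ("database", 3306),         # MySQL flow back to the private cloud
]
rules = rules_for_new_tier("10.0.1.42", flows)
```

Each rule dict maps naturally onto either an iptables line or a security group rule, which is why generating them from one list keeps the two layers consistent.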
The second aspect is Load Balancing as a Service. There are some new developments coming with Grizzly, building on the Atlas project, that allow you to create a load balancer as a service component. It would be great to have this if you have both an OpenStack private cloud and an OpenStack public cloud to deal with: you could use the same API and potentially take advantage of a load balancing algorithm that would determine, for instance, where to add the next application tier based on the response times of the different cloud environments: should it be in my private cloud or my public cloud, and in which instance of my public cloud? And finally, there are some opportunities with the Ceilometer project; there was a presentation this morning about Healthnmon, which complements that project. That will allow you to collect more data from the VMs created in the public cloud, gather metrics, and enforce some type of business logic to make decisions on thresholding.

For more information, I encourage you to attend, of course, the other HP presentations this week, stop by the HP booth, and learn more about HP Cloud Service Automation: we have a wiki page, linked below our main webpage. And finally, if you are falling asleep, be careful, because the bell might ring without notice; I don't know if you've seen it at the very entrance. So, thank you. And if you have any questions, now's the time. Yes, question in the back.

What would OpenStack build on top of this? Yeah, so in what I described here, it would greatly simplify my work if there were an OpenStack target cloud as well as an OpenStack public cloud, because then my service model would be just one single component, and I'd have to deal with only one set of APIs. So yes, that would help a lot. In terms of direct impact of projects at this point,
I would say, for instance, Load Balancing as a Service would be one of them. If there were a supported method to load-balance based on measured response times, that would be useful. Yes, question.

Yes, so the SLA is going to be different from one business to another, and you have to develop that business logic and make it reusable somewhere. You have to have some type of rule that tells you when and how much to flex, and then let your rule engine feed that to your orchestration engine. Yes, you can set up some platform or infrastructure to measure the performance, but that would be the topic of a different discussion that sits on top of this. Yes, question?

Yes. The configuration of security rules does not impact my performance. What impacts the performance is when I start uploading software packages, because these can be large. Even though in our simple prototype we just had the Tomcat bits, it still took a long time, so you can imagine that with a more complex application it would be even longer. But that's also part of the profile of an application that's hybrid-cloud-friendly, quote unquote: you want to take into account the fact that you're going to have to move those software packages around quite often and very dynamically, so the smaller they are, the better off you are. Yes, question in the back?

So TOSCA is coming in our next generation; in the next releases we're going to support a TOSCA-compatible topology model. And again, I encourage you to talk with some of the people at the HP booth if you want to know more about the product roadmap. Any other questions? Yes, question in the back? Yes. The question is: how do you protect the private key, and in case you have to change the key pair on the instance, how do you do it? So we did not look specifically into automating key changes.
There's an administrator persona in the back end who has control of the public cloud access, and then there's a different persona who is the subscriber of the service. The administrator takes care of everything that has to be done in the back end, including the management of keys and where they should be stored, and the subscriber just orders the service, with everything behind it automated. So I'm assuming there's a persona, an IT administrator, who takes care of those types of issues; it's not something we considered completely automating, at least when we looked at our solution. But if you don't have any other questions, then thanks again for your attention.