All right, everybody, how are you doing? Has everyone had lunch? We're feeling nice and tired on the third day of a conference. All right, so hopefully I'll just take a few minutes of your time to show you what RightScale is, what it's all about, and what we do. So the first piece that's really important about what RightScale does: we're very focused on automating your application. You've seen some of the other vendors here that are all about automating the setup of your OpenStack infrastructure, standing up servers, doing those sorts of things. We sit on top of those automated infrastructures. We provide that same sort of automation and monitoring and tooling for your applications running on top of public and private clouds. So what that looks like in our dashboard is what you see here. On the right-hand side, we have a list of, essentially, applications that we're running in this RightScale account. We've got a number that are running both in the Rackspace public cloud as well as on OpenStack private clouds in this particular account. What I want to show you is a couple of deployments here. The first one is this PHP three-tier production deployment. What this effectively is, and I'm going to change my view a little bit here so it accommodates our resolution better, is a three-tier application, in this case a PHP application. This is going to be your production app running on your private OpenStack cloud in your data center. This is where you're usually going to run your capacity from, where your developers have access, where your ops team has access, where you want to run your workload. And what you'll see is it's based on two load balancers here, so we've got an N+1 configuration for high availability. And then we've got a database server. In this case, we've only got one, and I'll get to why in just a second. This is MySQL 5.5.
The load balancers are based on HAProxy and Apache for load balancing. And then down at the bottom, we have what we call a RightScale server array, and that has two application servers running in it right now. Those are the things that are handling all the requests coming through the load balancers, talking to the database, and so forth. One of the keys to what we do is that we provide you the automation to very easily stand up these sorts of environments. What I mean by that is, if you look at this load balancer, for instance, I'll click into it: there's a definition for what a load balancer is. I told you that it's HAProxy and Apache. The way that we are actually able to stand this up is with what we call a server template. I'll click into that. What it's based on is an image that we maintain that is, for all intents and purposes, identical in all the clouds that we support. In this case, we've got a list of the operating systems here: we've got CentOS 6.3, we've got Red Hat 6.3, we've got Ubuntu 12.04, and there are some additional ones down there. And you can see the list of the different clouds that we support for each one of those images, both the public clouds and the private clouds. Now, the underlying image is actually different. You can't run a KVM image on XenServer. You can't run the public cloud image from Rackspace, which is a customized version of Xen, in your private cloud. They're different images. But we create a base image that has the same configuration of the OS, the same patch level, and then an agent that we use for the automation that we call RightLink. So what you're seeing here is a list of all of those available images.
So when we go to launch, in this case, a load balancer with HAProxy, and we say that we want to put it into your private cloud running on OpenStack, or into a public cloud like Google or Rackspace, we know to pick the correct image that's stored in that cloud, make the correct API provisioning call to that cloud, and you end up with a new VM that's running that base image. Of course, that's not all, because we have to make this actually be a load balancer; it's got to fulfill that role. We employ configuration management to get you to that point. We start out with that base image, and then when that virtual machine comes up, it calls back to RightScale and says, hey, I'm that new machine that you provisioned. What is it that I should run? What role should I assume? And that's when we start kicking off these boot scripts that you see listed here. It's going to do some housekeeping, boilerplate stuff that you would assume: setting up logging, setting up a firewall, setting up NTP services, those sorts of things. Then we start to do things that are more specific to a load balancer. We set up HAProxy. We make sure that Apache is installed and configured properly. Down towards the end here, we make sure that Apache has a vhost that's listening for your particular fully qualified domain name. All the stuff that you need to do to make sure that you've got a working, running load balancer. The other thing that we're doing here, and this is where it gets very interesting in terms of automation with RightScale, is the script that I have highlighted, "LB do attach all." What this is doing is actually querying the deployment. I showed you that deployment that's got the two load balancers, the database server, and the application servers running in an array.
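Stepping back for a moment, the launch flow just described (pick the cloud-specific base image, provision the VM, let the agent call home, then run the template's boot scripts in order) might be sketched roughly like this. Every name here, `IMAGE_MAP`, `BOOT_SCRIPTS`, `launch_load_balancer`, is hypothetical and for illustration only, not the actual RightScale API:

```python
# Hypothetical sketch of the launch flow: the same OS configuration
# everywhere, but a different underlying image per cloud.
IMAGE_MAP = {
    "openstack-private": "base-centos-6.3-kvm",
    "rackspace-public": "base-centos-6.3-xen",
}

# Boot scripts run in order once the new VM calls back and asks
# which role it should assume (script names are illustrative).
BOOT_SCRIPTS = [
    "setup_logging",
    "setup_firewall",
    "setup_ntp",
    "install_haproxy",
    "configure_apache_vhost",
    "lb_do_attach_all",
]

def launch_load_balancer(cloud: str) -> list[str]:
    """Return the ordered steps taken to stand up a load balancer."""
    image = IMAGE_MAP[cloud]  # cloud-specific image, identical OS config
    steps = [f"provision VM from {image} in {cloud}"]
    steps += [f"run boot script: {script}" for script in BOOT_SCRIPTS]
    return steps
```

The point of the abstraction is that the same template drives both branches of `IMAGE_MAP`: only the image lookup and the provisioning call differ per cloud.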
When new servers are added and they're running these scripts, those scripts can query that deployment and discover the services and the servers that are running in it. So when this script runs, it goes and looks to see if there are any existing application servers already running in that deployment. If there are, it discovers the private IP addresses of all of those application servers and adds them to the load-balancing pool that this load balancer is setting up. We do this throughout our server templates. We interrogate the environment and make sure that those servers know about one another. It makes it very easy to automatically add and remove servers, and it makes it easy to set up the same kinds of deployments in any one of the clouds that we support. Regardless of the network topology or the underlying cloud provider, all those things are abstracted for you. We also define what we call operational scripts. These are common tasks that you're going to need to perform for a load balancer. You might need to rediscover all of the application servers; you might run this operational script. You might need to do some maintenance and stop and start Apache; you have those tools available to you. This is our newest, latest, greatest server template, which actually does have the two scripts that allow you to throw up a maintenance page if you're doing some work on the load balancer or your application, if you're updating or something of that nature. These are all things that you can automate and do from within the dashboard. So what I want to do is take you to another deployment. I'm going to highlight another reason why that ability to stand up the same kind of workloads in any one of the clouds that we support is very, very powerful, and a use case of exactly how that comes in.
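Before moving on, here is a minimal sketch of what a discovery script like "LB do attach all" does: query the deployment, find the operational application servers, and add their private IPs to the pool. The data shapes and the function name are invented for illustration, not the real script:

```python
def attach_all(deployment: list[dict], pool: list[str]) -> list[str]:
    """Add every operational app server's private IP to the LB pool."""
    for server in deployment:
        if server["role"] == "app" and server["state"] == "operational":
            ip = server["private_ip"]
            if ip not in pool:  # safe to re-run: no duplicate entries
                pool.append(ip)
    return pool

# Example deployment state: two app servers and a database.
deployment = [
    {"role": "app", "state": "operational", "private_ip": "10.0.0.11"},
    {"role": "app", "state": "operational", "private_ip": "10.0.0.12"},
    {"role": "db",  "state": "operational", "private_ip": "10.0.0.20"},
]
attach_all(deployment, [])  # → ["10.0.0.11", "10.0.0.12"]
```

Because the script interrogates the live deployment rather than a static config file, the same template works regardless of which cloud, or which network topology, the deployment is running in.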
So this is another view of the list of applications in this account. I've already shown you the production one down here. I'm going to show you the disaster recovery PHP three-tier deployment. What you'll notice is this looks very much the same. I have a database server, I have two load balancers, and I have an array of PHP application servers. These are all based on the same server templates. So if I launch any one of these and actually instantiate a VM and do configuration management, I'm going to end up with a server that's identical to the one that I have in production. What you'll also notice here is that, again, I've only got one database server. But notice the difference between this database server and the other servers in the deployment. This one's running; it's operational. It's got an IP address. This one's rocking and rolling. We don't have the load balancers running. We don't have the application servers running. And you'll notice that this is also running in the Rackspace public cloud. So what this is, is your disaster recovery plan. You have a configuration that's identical to the one that you have in production. But what we're doing here is running a second database that's replicating from your production environment, making sure that you have that data. So if something does happen, if the configuration in your data center becomes unavailable, there's a network problem, there's a natural disaster, whatever the case may be, you can immediately go over to this deployment and launch the other services around it, in this case the load balancers and the application servers. And in a matter of minutes, you'll have those servers up and running. Your data's already over there because it's been replicating, and you now have the ability to then move your DNS records over. And I think I clicked okay there.
And you end up with that disaster recovery plan actually operating: you've got your new deployment running over there, up and running in a matter of minutes. And the best part of this is that because it's cloud, and because you pay as you go, you can actually test your disaster recovery plan. How often do you have a plan that's really just a bunch of runbooks, and you dare not actually try it, because what if it doesn't work? When you're using the cloud model, you pay nothing, or hardly anything, to do that failover and try it out, to launch those machines. RightScale makes sure that you're getting exactly the same image, exactly the same instance, every time, with the same configuration. It goes beyond that, though. I'm going to switch back to the three-tier production deployment and show you some of the visibility that we have into what's running, and how we can automate even further on top of that. So I'm going to click back into the load balancer again. We actually configure an open-source agent called collectd to pull back information from each of the VMs that are under RightScale management, and we use that to show these nice, pretty graphs that are a great health check. It pulls back the obvious things you would expect: CPU use, disk use, network I/O, those sorts of things. This is a load balancer, and we know it's running Apache, so we've also installed the Apache plugin for collectd so we can pull back things like requests per second, the number of denied requests, all those sorts of things. But not only can we show them in graphs, which is great and looks great for a demo, we can also configure alerts. So if any one of those monitoring metrics that you saw reaches a certain threshold and stays at or beyond it for a certain period of time, we can make a decision about what to do in that deployment.
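The alert semantics just described, a metric must stay at or beyond a threshold for a sustained period before anything fires, can be sketched as follows. The function name, sample values, and thresholds are all illustrative, not RightScale's actual alert engine:

```python
def alert_fires(samples: list[float], threshold: float, duration: int) -> bool:
    """True if the most recent `duration` samples are all >= threshold."""
    if len(samples) < duration:
        return False  # not enough history yet to judge
    return all(s >= threshold for s in samples[-duration:])

cpu = [40, 55, 92, 95, 97, 99]  # percent, one sample per interval
alert_fires(cpu, threshold=90, duration=4)  # → True: sustained for 4 samples
alert_fires(cpu, threshold=90, duration=5)  # → False: the 55 breaks the run
```

Requiring a sustained window rather than a single sample is what keeps a momentary spike from triggering a relaunch or a scaling action.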
In this case, you'll notice that almost all of these alerts are actually predefined at the server template level. We know that a load balancer running HAProxy and Apache is going to need to have HAProxy running, and we need to make sure that Apache is actually running. So we have pre-built alerts that are part of that server template that define what that thing's role is and then have a health check to make sure that it's performing that role. We can do all sorts of really novel things from these alerts. Of course, sending emails, sending alerts, that's pretty standard stuff. We can also, though, do things like running a script on the server that has the issue, the one that met the alert criteria. Or we can run a script on any other server that's in the same deployment or the same application. You might have a server that's in distress: it has too much CPU use, too much memory use. You can't run a script on that server, because it's not going to run; it doesn't have the spare CPU capacity, whatever, to run it. So you can actually run it on a different server in the same deployment. This is the full list of options. You can also reboot the server, or you can completely relaunch it. Since we're using those server templates that allow you to bring that thing up essentially from scratch, from just that base image, and get the same configuration, you can actually throw away the old VM, bring up a new one, and know that you'll end up with the same configuration. And then, last but not least, this is also how we drive our automation engine for autoscaling. If, for instance, we know that Apache requests per second are reaching a certain threshold, that indicates that we have a spike in our traffic. And since we can only handle so many requests per second per node in our application server tier, we can use that as a metric to add additional capacity to the deployment, and then, when that peak in traffic goes away, we can remove those servers as well.
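The scaling decision described above reduces to simple arithmetic: if each application server can handle a known number of requests per second, the desired array size follows from total traffic. A hedged sketch, with all names and numbers assumed for illustration:

```python
import math

def desired_array_size(total_rps: float, rps_per_node: float,
                       min_nodes: int = 2, max_nodes: int = 10) -> int:
    """How many app servers the array should run for this traffic level."""
    needed = math.ceil(total_rps / rps_per_node)
    # Clamp to the array's configured floor and ceiling.
    return max(min_nodes, min(max_nodes, needed))

desired_array_size(950, rps_per_node=200)  # → 5: traffic spike, grow the array
desired_array_size(150, rps_per_node=200)  # → 2: peak over, shrink to the floor
```

The floor (`min_nodes`) preserves the N+1 availability story even at idle, and the ceiling keeps a traffic spike, or a bug, from scaling costs without bound.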
And then, if you recall, I had that list of scripts that were part of the server template. There was actually a category of scripts I didn't talk about: decommission scripts. So when you pull that application server out of the deployment, it goes and tells the load balancer, hey, I'm leaving the deployment now, you can remove my IP address from the pool. We can close up the iptables firewall rules, and we can make sure that it gracefully exits. And that's actually all I've got. I've got about a minute forty left, so I think I've consumed just about all my time. Thank you.