All right, hi everybody. Let's spend a little time talking about Scalr. You guys going to the Stack City party tonight? I haven't eaten in like three days in preparation. All right, so our topic for today is building a policy-driven cloud environment. That's what I want to take the next 20 minutes to talk to you about. Basically, what does it mean to build that type of environment? And what does self-service really mean when we're talking about a multi-cloud environment that might have OpenStack, that might have AWS, that might have a bit more complexity to it? So first of all, I'll tell you a little bit about us. My name is Ron. I'm Product Marketing Manager for Scalr. You can come visit us at booth B29 right there, right after we're done with this. But more on that later. So first of all, why do enterprises turn to cloud? Why do the big retailers and the big banks and these types of enterprises go to AWS and Azure and GCE and OpenStack, what we can call the big four? What are their main drivers? Usually, you get two. One is what you might call agility, or more simply getting things done faster: getting resources faster, being able to provision something faster, speed to market, that sort of thing. And the other is cost. It's supposed to be cheaper, right? That's sort of the promise with cloud. It's supposed to be cheaper than running a data center, cheaper than running your own physical stuff. But it often isn't. If we want to get that agility and that speed to market and we want to get that type of environment set up, a traditional ticket-based system, where basically you have to request resources, doesn't really work anymore. That doesn't fit the paradigm that we're talking about for cloud. So if, for example, I introduce a human component into that chain of events that says, I want to get a new application.
I need this many instances and this type of storage, and this is the networking I need, and all of that. If I introduce a human into that chain who needs to approve whatever it is that I want, I'm losing that agility promise that I'm supposed to get out of cloud. So basically, self-service is supposed to be the vehicle for that agility, for that speed to market. A traditional ticket-based system simply will not work anymore. But self-service introduces new problems. You have developers who can go straight to a tool that will give them only the compute that they need, or they can use their own credit cards to go on AWS, or they can spin up their own OpenStack. They can use any of these solutions that you don't have much control over, that you don't have ways to monitor, that you don't have ways to enforce policy on. So you have to find a way to enforce that policy. As we see it, there are several different types of policies, several different ways to do this. You've got your observers, which are tools that sit on the outside looking in on your cloud deployment, recognizing the patterns that you might be working with. And based on those patterns, you might be able to enforce some policy. But that policy enforcement is usually reactive. That means something already happened: I provisioned a machine that I wasn't supposed to provision, so that machine gets scaled down. But you still paid for it. You still incurred cost for that machine. An observer might also lack context. You need to know why that machine was provisioned. If I'm going to open up, I don't know, port 22 on some security group, what other members of that security group exist? Who else am I going to affect? So those are the observers and their reactive policies. Something else we've seen people do is try to train the people and not the system.
I bet you've seen this: companies that have their centralized wikis that say, whenever you provision a machine, whenever you're going to provision something through our cloud portal, you're going to have to tag it this way. You're going to have to put it in this security group. You're going to have to notify this person. This, of course, doesn't scale, and there's a reason that RTFM is a thing. So that leaves us with inline policies. You have to have a single enforcement point, whether it's an API layer, whether it's a UI, whatever it is; a single enforcement point that users will go through, that will have context, and that will be able to enforce those policies on whatever cloud it is that you might be using. And that is exactly what I want to talk about: the Scalr Policy Engine. So first of all, Scalr is a cloud management platform. The idea with Scalr is to create a cloud environment where developers and users can consume cloud in a self-service model, out of a self-service catalog, while the Scalr Policy Engine takes care of all the financial, permission, and security policies that need to be around that. So from the developer side of things, I can automate my deployments and I can have application templates. Say I want to build an application with an application server, a database server, and a load balancer, and I want each one of those tiers to scale between 10 and 50 servers based on time of day and bandwidth consumption, for example. I don't really care about tagging or security groups or which budget needs to be enforced on me, because all I care about is getting those resources. All of the policies around that are enforced transparently, and we'll take a look at that in just a second. So the Scalr Policy Engine transparently enforces the policy that we were talking about. It's context-aware. That means it doesn't wait until you did something wrong and then make you fix it.
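That proactive, context-aware check is the heart of an inline policy engine. Here's a minimal sketch in Python of what such a check could look like; the `ProvisionRequest` shape, the allowed sizes, and the required tags are all invented for illustration and are not Scalr's actual API:

```python
# Minimal sketch of an inline, proactive policy check.
# ProvisionRequest, ALLOWED_SIZES, and REQUIRED_TAGS are invented
# for illustration; this is not Scalr's actual API.
from dataclasses import dataclass, field

ALLOWED_SIZES = {"m1.small", "m1.medium"}
REQUIRED_TAGS = {"team", "cost_center"}

@dataclass
class ProvisionRequest:
    size: str
    tags: dict = field(default_factory=dict)

def check_request(req):
    """Return policy violations; an empty list means the request may proceed."""
    violations = []
    if req.size not in ALLOWED_SIZES:
        violations.append(f"size {req.size!r} is not permitted")
    missing = REQUIRED_TAGS - req.tags.keys()
    if missing:
        violations.append(f"missing required tags: {sorted(missing)}")
    return violations

# A compliant request passes; a non-compliant one is blocked before
# any instance is launched -- and before any cost is incurred.
print(check_request(ProvisionRequest("m1.small", {"team": "a", "cost_center": "42"})))  # []
print(check_request(ProvisionRequest("m1.xlarge", {})))
```

The point of the inline model is visible in the last two lines: a bad request never reaches the cloud, so, unlike with an observer, no cost is ever incurred.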
It helps you avoid that wrong choice or eliminates it completely. That's what I mean by assistive or proactive. Let's take a look at how the tool actually looks. What you're looking at right now is the Scalr Enterprise Edition. Scalr Enterprise Edition is deployed on-prem, in your data center or on your cloud, wherever. Wherever you can run a Linux machine, you can run Scalr. And as we said, it's a cloud management platform that sits on top of your public and private clouds and creates policies and automation around them. The idea here is, again, to create that self-service cloud environment where all the policies that you need are enforced around it. So what you're seeing right now is what we call a Scalr account. And the approach that we took here, you can think of it like the federal government. If I go one step back here, I have what we call the Scalr Scope, or the Global Administrator Scope. This is like the federal government. I can make rules here. I can say every virtual machine that gets provisioned needs to run this script, or needs to run this Chef recipe, or needs to be in accordance with some sort of policy. Or I want to make sure this catalog item that you can provision is available for everyone on this cloud setup. The federal government makes rules. One layer below that would be the state: the account, which is actually your independent cloud deployment. An account might be a business unit or a department in your organization; basically the state, right? It has to do what the federal government says (maybe Texas is not the best place to make that argument), but it can also decree its own rules. So you can say, in this account, my different projects, my different environments have to follow these policies. And one level below that, the city, would be the actual cloud environment that you see here.
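The federal/state/city layering can be sketched as a simple merge of policies, under the assumption (invented here for illustration, not Scalr's internal model) that a lower scope may add required scripts and tighten an inherited limit, but never loosen one:

```python
# Sketch of hierarchical policy resolution: global -> account -> environment.
# Assumption (invented): lower scopes may add required scripts and shorten
# the lifetime cap, but never remove or loosen inherited rules.
def effective_policy(global_p, account_p, env_p):
    scopes = [global_p, account_p, env_p]
    return {
        # required scripts accumulate down the hierarchy
        "required_scripts": set().union(
            *(s.get("required_scripts", set()) for s in scopes)
        ),
        # the strictest (smallest) lifetime cap wins
        "max_lifetime_days": min(
            s.get("max_lifetime_days", float("inf")) for s in scopes
        ),
    }

policy = effective_policy(
    {"required_scripts": {"base-hardening.sh"}, "max_lifetime_days": 30},  # federal
    {"required_scripts": {"dept-monitoring.sh"}},                          # state
    {"max_lifetime_days": 5},                                              # city
)
print(policy["max_lifetime_days"])  # 5
print(sorted(policy["required_scripts"]))
```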
For example, I've got my PCI Dev or my PCI Staging environments here. These are my one-to-one mappings to the actual cloud tenants. So let's take a look at an example with one of the environments that I have here. I have my PCI Staging environment. This PCI Staging environment has the API credentials for an AWS account, a Google Compute Engine project, an Azure subscription, and an OpenStack tenant. How would this work? Let me give you a quick example. Let's say you have two teams, team A and team B. Team A needs to be able to operate on a larger budget. They need to be able to provision any type or any size of virtual machine that they need. And they also should be able to create their own infrastructure templates, like catalog items. Team B needs access to the same OpenStack tenant, but they have a smaller budget. They can only provision a certain flavor of virtual machine, and they can only consume pre-built catalog items; they can't create their own. So the way that Scalr works, I can create two of these environments, and I can create different governance, permission, security, and financial policies around them. That's the way it would work. The paradigm is that your users can use their own Active Directory users, or authenticate with LDAP, authenticate with SAML, whatever it may be. They talk to Scalr either through the UI or through the API. Scalr will have that policy enforced and will give them that automation capability, and then Scalr will talk to the underlying cloud. Let's take a look at a quick example. Right now I'm looking at things from the IT administrator perspective. There are two other points of view that we need to take a look at. The developer, the actual end user... I seem to have lost the presentation. We need to take a look at the developer standpoint, and we also need to take a look at the financial administrator or manager standpoint. Let me see if I can get that back.
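The team A / team B split above could be modeled as two per-environment policy objects like the following; all field names and numbers are hypothetical, just to make the contrast concrete:

```python
# Hypothetical per-environment policy objects for the two teams above.
# All field names and numbers are invented for illustration.
TEAM_A = {"budget_usd": 50_000, "allowed_flavors": None,          # None = any flavor
          "can_create_templates": True}
TEAM_B = {"budget_usd": 5_000,  "allowed_flavors": {"m1.small"},  # one flavor only
          "can_create_templates": False}

def may_provision(env, flavor):
    """Same cloud tenant underneath, different rules per environment."""
    allowed = env["allowed_flavors"]
    return allowed is None or flavor in allowed

print(may_provision(TEAM_A, "m1.xlarge"))  # True
print(may_provision(TEAM_B, "m1.xlarge"))  # False
```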
So basically, from the developer standpoint, if I dig into one of the environments that I have here and I want to work on a new application, the way that Scalr works is that I describe the desired state of my application. The desired state of my application is not "I want to go to OpenStack and provision one instance." It's more like: I want to have a three-tier application where I have my application server, my database server, and my load balancer. I want to run automation on my application servers to make sure that they scale up and down from 10 servers to 50 servers based on bandwidth consumption and load averages, or a custom metric that I come up with. And I want to install some software on them every time a new server comes up, or maybe I want to run some Chef recipes on them every time these servers come up. On my database server, again, I want to have an auto-scaling rule, and I want to make sure that every time a new database server scales up for whatever reason, or maybe one went down and Scalr auto-healed that server, the application servers know about it so they can run some auto-discovery script. And finally, I want to have, and again this is just an example, an nginx load balancer that load balances between all members of this application server tier. You can find this example on the Scalr wiki; if you just Google "Scalr wiki," you'll find this tutorial showing exactly how to build this, and it can run on OpenStack or on AWS. Another example would be to do the same thing but load balance with an Elastic Load Balancer on AWS, so it's flexible. So on the developer side, I can create these automated application templates and then trigger them with an API call.
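The scaling rule described for the application server tier (stay between 10 and 50 servers, scale on a bandwidth metric) can be sketched as a small decision function; the thresholds here are invented for illustration:

```python
# Sketch of the tier-scaling rule: keep the tier between 10 and 50 servers,
# stepping up or down on a bandwidth metric. Thresholds are invented.
MIN_SERVERS, MAX_SERVERS = 10, 50
SCALE_UP_MBPS, SCALE_DOWN_MBPS = 800, 200

def desired_count(current, mbps_per_server):
    if mbps_per_server > SCALE_UP_MBPS:
        current += 1                      # tier is hot: add a server
    elif mbps_per_server < SCALE_DOWN_MBPS:
        current -= 1                      # tier is idle: remove one
    return max(MIN_SERVERS, min(MAX_SERVERS, current))  # clamp to 10..50

print(desired_count(10, 950))  # 11 -- scale up under load
print(desired_count(10, 50))   # 10 -- never drops below the floor
print(desired_count(50, 999))  # 50 -- never exceeds the ceiling
```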
I don't have to do everything through the UI, of course. But from the administrative side, from the manager's side, I know that all of these different policies have been enforced. For example, I know only certain sizes of virtual machines can be provisioned. I know that tagging is correct. I know that the right budget is being enforced. So we can go and take a look at, for example, how much I'm paying for each one of these different applications, because I want to do chargeback based on specific teams and specific applications. So what Scalr provides is this single pane of glass, a single console, or as we like to describe it, a single UI and a single API, to orchestrate, automate, and enforce policy on multiple clouds. You create a single policy and then you can use that policy across your OpenStack deployment and across your AWS and Azure and GCE deployments. That's the idea. You introduce a new tenant into the mix, a new OpenStack tenant, or a new AWS account, or a new Azure subscription, and you don't have to recreate those policies. So it makes ramping up quick, and you can carry the expertise that you have on a single cloud into those other clouds and create a standardized process for cloud provisioning. That's what we're trying to build here with Scalr. So we have a few minutes left. Does anyone have any questions before I just continue talking? OK, so let's take a look at a few more things here before we wrap this up. We saw a little bit of the way that Scalr works as far as these three different scopes. Again, what you're seeing right now is the Scalr Enterprise Edition, which is deployed on-prem. Scalr is an open source product, so you can go to our website, scalr.com, and download your own copy of the Scalr Enterprise Cloud Management Platform. And finally, we also have the Hosted Edition, which is hosted by us. It's a SaaS offering with a 30-day free trial so you can play around and get familiar with the product yourself.
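The chargeback idea mentioned above, billing cost back to specific teams and applications, boils down to grouping per-instance cost by a tag, which is exactly why the policy engine insists that tagging is correct at provisioning time. A sketch with invented sample data:

```python
# Sketch of tag-based chargeback: sum per-instance cost by a tag key.
# The usage records are invented sample data.
from collections import defaultdict

usage = [
    {"tags": {"team": "a", "app": "web"}, "cost": 12.50},
    {"tags": {"team": "b", "app": "web"}, "cost": 3.25},
    {"tags": {"team": "a", "app": "db"},  "cost": 7.75},
]

def chargeback(records, key):
    totals = defaultdict(float)
    for r in records:
        totals[r["tags"][key]] += r["cost"]
    return dict(totals)

print(chargeback(usage, "team"))  # {'a': 20.25, 'b': 3.25}
print(chargeback(usage, "app"))   # {'web': 15.75, 'db': 7.75}
```

An untagged instance would simply fall out of this report, which is why a reactive observer that tags machines after the fact is not good enough for accurate chargeback.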
The core capabilities that you get with these products will, of course, remain the same. There are several differences; you can roll by our booth a little bit later and learn a little bit more about that. If I want to add something new into my service catalog, if I want, for example, to provision a new type of machine, this would be my self-service catalog. And you can create these roles by yourself. These roles, these templates or catalog items, are essentially bundles of a cloud image with automation put on top of it. So for example, when I went to provision the machine that you're seeing right now, for the Scalr demo that I'm doing here, or that we're doing at the booth, what I did is I went to our own internal Scalr setup. I created a farm. I named it Scalrception, because it's Scalr within Scalr and I'm very clever. And then I just used the pre-built Scalr role. So it was a three-click process for me: create a farm, choose the role (basically a CentOS image with some automation on top of it to install Scalr), and provision that farm. And then I can scale it up and down based on whatever I need. Around that, I have governance policies, for example one that says a farm has a five-day lifetime. One of the biggest reasons clouds actually cost more than they should is that I've provisioned something through this amazing self-service catalog, but then I was fired or I moved on to some other project, and that piece of infrastructure just keeps running and running and eating away at the budget for two weeks, for example. It's a great way for a small team to run up a large bill. So what you can do is create a lifetime for each one of these farms and have that lifetime enforced, requiring either a request for extension or a business justification for the farm, for the application. OK, so again, if there are any questions, I'd be happy to answer anything about the Scalr Cloud Management Platform.
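The five-day farm lifetime policy described above can be sketched as a status check that suspends an expired farm unless an extension was granted; the field names and dates are invented for illustration:

```python
# Sketch of the five-day farm lifetime policy: past its deadline a farm is
# suspended unless an extension was requested and granted. Field names and
# dates are invented for illustration.
from datetime import date, timedelta

LIFETIME = timedelta(days=5)

def farm_status(created, today, extension_until=None):
    if extension_until is not None and today <= extension_until:
        return "running (extended)"
    return "running" if today <= created + LIFETIME else "suspended"

print(farm_status(date(2016, 4, 1), date(2016, 4, 4)))   # running
print(farm_status(date(2016, 4, 1), date(2016, 4, 10)))  # suspended
print(farm_status(date(2016, 4, 1), date(2016, 4, 10),
                  extension_until=date(2016, 4, 14)))    # running (extended)
```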
And if not, please swing by our booth, booth B29 right there. And otherwise, have a great rest of your day. Thank you.