All right, welcome everybody. Thank you so much for joining us. We're going to spend about 20 minutes talking about self-service and a little bit about Scalr, the Enterprise-grade Cloud Management Platform. And then we're going to jump right into a demo, a live demo. So first of all, feel free to come visit us at booth B12. We're right there by the entrance. We will give you a t-shirt. So you're at least covered for that. And we do even have smalls. So first of all, a quick introduction. My name is Ron. I do product marketing for a company called Scalr. And Scalr is the Enterprise-grade Cloud Management Platform. Basically what that means is that we sit on top of a multi-cloud infrastructure in a large enterprise, mainly Fortune 500 companies or very large enterprises. And we help those companies build customized self-service workflows, from the most simple types of users with very repetitive requests all the way up to the more advanced users, your DevOps engineers, your cloud architects. These types of companies come to us with a variety of issues and problems and challenges that they have with their cloud infrastructure, and basically with how they streamline the consumption of cloud infrastructure. But most of them, eventually, what they do is they use us to put a layer of governance and policy over their self-service workflows. And self-service is exactly what I want to spend some time talking about. So a lot of solutions out there, and also a lot of our customers at first, have this one-size-fits-all approach to self-service at the enterprise. And when I say one-size-fits-all, what I mean is that they approach self-service with the mentality of two general personas: one side that owns cloud and another side that uses cloud. That's pretty much it. That's, for example, the thinking behind something like Horizon. Horizon is not a great fit if you're not an OpenStack expert.
If you don't know exactly what you're doing, if you get access to a Heat template, it's basically all or nothing. You can do basically whatever you want with it. So we found that at most enterprises, it's more like a spectrum of users on both sides of self-service, where one side would be the end-user side. This is our attempt to sort of put that spectrum onto a slide. On one side of that spectrum, you'll have users that value ease of use. Like, for example, I started out as a QA engineer; that's why I use that as an example. Basically all I needed was a machine, like a box to test on, like a Linux box. And I needed to have the latest build off of Jenkins. And then I needed to test the thing that I was supposed to be testing. So it was supposed to be a relatively simple, one-click provisioning type of experience. On the other side of that spectrum, you have your, let's say, DevOps engineers and your cloud architects and your more advanced users. And they value more flexibility and more operational freedom. And they need API access. And they need application builders. And they need templates. And they're building CI/CD pipelines. And they're doing more advanced stuff. So between those two sides of the spectrum, you have your people over at accounting. You have sales reps spinning up demo environments. You have all these different types of end users who need to benefit in one way or another from self-service. So basically, what we wanna talk about is how self-service really involves everyone at the enterprise. How it really involves all these different constituencies. Because up until now, I've only been talking about the problems that you might have with your end users. But what about the financial administrator who needs to log in and see how much we're spending? How about the compliance officer who needs to make sure that my PCI environment is compliant?
How about my security guys who need to know that everything is behind the right firewall? So self-service raises a lot of different questions, and I've tried to outline some of these. So an example is: what's the size of the server that I should be provisioning? What servers do I wanna offer up? Or which network should that server be in? Or should that network be public or private? Or which tags do I wanna automate onto these workloads? And do I wanna use different tags for different types of workloads? For example, my load balancers, my databases, my front-end servers. Which cost centers do I want to associate each one of these workloads with? What budget do I want to enforce on them? What tier of storage do I want for my testing environment as opposed to my production environment? What orchestration do I wanna run on these, whether it's Chef recipes or Ansible, all of those types of things? Or just basic bootstrapping: what do I wanna do and how do I want to maintain my applications? What about security groups and what about SSH keys? Do I want users to have access to those? Do I wanna impose those on them? Do I wanna give them choice? What are the general permissions that those users will have once they've logged into an environment? If you provisioned an application and I provisioned an application, can I see your applications or can I only see my own? Am I supposed to be completely compartmentalized? Or am I a super user, an admin, and I can see basically what everybody's doing? Containers, of course, will be a whole different conversation: which images, for example, do I have access to? What can I map? Does the operating system affect any of these? For example, Windows is more resource intensive. If you provision Windows, do I want to offer up larger instance sizes and the RDP security group instead of the SSH security group? So basically each one of these raises a series of questions and modifies all the other ones.
And lifetime would basically be: I'm doing a training, so how long do I want that application to live? Do I want all of my staging environment stuff, for example, to live as long as my production stuff? All of these questions need answers, and preferably I need to answer all of these questions before I get to the provisioning process, before I click the button that says I need that stack to be deployed on the appropriate cloud, whether it's OpenStack or AWS or Azure. And depending on the type of user, whether that user is, for example, a DevOps engineer or, let's say, that QA engineer that has a simpler requirement, all of these questions get modified. So that's what Scalr does. Scalr makes self-service safe, responsible and cost effective. There's a lot of text on here. I'm not gonna go through absolutely everything, but Scalr is a tool that large enterprises use to build customized self-service workflows for their end users. So companies like the ones that we talked about a little bit earlier, that slide that was meant to impress all of you and show you all the cool people that use us, those types of companies put Scalr in place to solve problems around four general buckets that we tend to see. We see that most problems that are caused by all of those different questions that I just asked everybody fall into either cost, security, agility or productivity. Those are kind of like the general buckets that we see. So around cost, for example, Scalr will give you a financial overview of how much you're spending on cloud, or it will give you your budgeting constraints and budgeting tools for them. But cost reduction is really more than just reporting, so you'll also be able to create those different policies, for example, the lifetime policy, or making sure you are using the right instance size for the right type of workloads.
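To make that checklist concrete, here is a minimal sketch, purely illustrative and not Scalr's actual data model, of how the provisioning questions above might be captured as a single policy object, using a QA-style test environment as the example (all field names and values here are assumptions):

```python
# Hypothetical sketch of a provisioning policy -- not Scalr's real schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProvisioningPolicy:
    allowed_instance_sizes: list   # which server sizes may be offered
    network: str                   # which network the server lands in
    network_is_public: bool        # public or private network?
    required_tags: dict            # tags automated onto each workload
    cost_center: str               # which cost center to associate with
    monthly_budget_usd: float      # budget to enforce
    storage_tier: str              # cheaper tier for test, faster for prod
    orchestration: list            # Chef recipes / scripts to run on boot
    lifetime_days: Optional[int]   # None means no reclamation

# Example: a policy for the simple QA persona described earlier.
qa_policy = ProvisioningPolicy(
    allowed_instance_sizes=["m1.small", "m1.medium"],
    network="dev-private",
    network_is_public=False,
    required_tags={"env": "test", "team": "qa"},
    cost_center="QA-101",
    monthly_budget_usd=500.0,
    storage_tier="standard",
    orchestration=["pull-latest-jenkins-build"],
    lifetime_days=1,  # matches the one-day reclamation shown in the demo
)
```

Answering each question up front, before the provision button is clicked, is what lets the one-click experience stay simple for the end user.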
Under security, we'll be taking a look at how the hierarchical policy enforcement model works, which basically allows you to build a baseline of policy. For example, in the dev environment or in the staging environment, I want a certain baseline of policy, a certain lifetime: you can only use this Chef server, you can only use this network, you can only do this, this and that; but in the production environment you can do something else completely. And all of that has another layer of policy on it based on the identity of the user, because it's all about matching the self-service provisioning workflow with the end user. Business agility is a term I hate because it doesn't mean anything, but what I am trying to say here is that we wanna help the business move faster and get to the actual provisioning faster. So policy enforcement for all user types means that if I have that DevOps engineer and that QA engineer, I can manage the policies that I enforce for those users from the same place. Application lifecycle automation basically means the bootstrapping, the maintenance. Every time a database server goes down, I want to make sure that all of my front-end servers know about it, or maybe I wanna trigger some Chef recipes; it's really up to you. And for user productivity, we're gonna be talking about working with a single API versus working with individual APIs for each one of the cloud platforms that you might be using, creating templates that you're able to deploy across clouds, what we call farm templates, and building reusable provisioning workflows. Because what we see in a lot of these enterprises is that you have basically the core IT team that does have the best practices and of course can be trusted, but then you have to find a way to replicate that knowledge and replicate those workflows across the enterprise.
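As a rough illustration of the hierarchical idea described above (an assumption about the shape of such an engine, not Scalr's actual implementation), an environment baseline and a user-level layer might combine like this, with the user layer only ever narrowing the options the baseline allows:

```python
# Illustrative two-layer policy merge -- not Scalr's real engine.

def effective_policy(environment_policy, user_policy):
    """Merge two policy layers; the user layer can only narrow the baseline."""
    merged = dict(environment_policy)
    for key, value in user_policy.items():
        if isinstance(value, set) and isinstance(merged.get(key), set):
            # Option sets intersect: a user can never gain options
            # the environment baseline does not already allow.
            merged[key] = merged[key] & value
        else:
            # Scalar settings (e.g. lifetime) are overridden by the user layer.
            merged[key] = value
    return merged

# Hypothetical dev-environment baseline, echoing the talk's examples.
dev_baseline = {
    "instance_types": {"m1.small", "m1.medium", "m1.large"},
    "chef_server": "chef-dev.internal",   # "you can only use this Chef server"
    "lifetime_days": 7,
}

# Identity-based layer for the simple QA persona.
qa_user_layer = {
    "instance_types": {"m1.small", "m1.xlarge"},  # xlarge gets filtered out
    "lifetime_days": 1,
}

policy = effective_policy(dev_baseline, qa_user_layer)
# policy["instance_types"] is {"m1.small"}; policy["lifetime_days"] is 1
```

The production environment would simply carry a different baseline dict, and the same user layer would produce a different effective policy, which is the point of matching the workflow to the user.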
Otherwise you have people going directly to Horizon, directly to AWS, directly to Azure, your GCP and vCenter, and it basically creates a mess, and that is what we're trying to control. So just quickly before we jump into the demo, I wanna give you a quick overview of how it works. Of course this is very, I would say, oversimplified, but just to give us a general overview. So Scalr is deployed on-prem, either in your data center or on your cloud, and once it's deployed, the general workflow from the end user side is that we have Jane the user. Jane logs into Scalr, and based on her identity, she gets the self-service portal that's right for her, meaning that if I log in, I'll only see the super simple service catalog; Jane logs in, she sees the more advanced, what we call the farm builder, essentially just an application builder, or access to the API, even if we're using the same resources. The policy engine makes sure that Jane or Ron only sees the options that are relevant to each respective user. When she provisions her application, the policy engine enforces the relevant policies, like the placement, the server sizes, all those different questions that we asked earlier that I'm not gonna repeat, and then eventually the application gets deployed to the selected cloud or to the relevant cloud. All right, so I'd like to spend the additional seven and a half minutes that we have on the platform itself, and then afterwards I'm happy to answer questions, or you can feel free to visit us at the booth. All right, so basically what I wanna do is I wanna log in as one of my end users. So this is Enterprise Scalr that we're seeing here, and this is an example of a super simple service catalog for the simple user, the user with repetitive requests. I'm logged in as luke@scalr.com, that's my end user here, and because we're at OpenStack, let's go ahead and provision a stack on OpenStack, like an application stack.
I have a few different workloads running on different clouds here, so I'm just gonna take this simple stack. I have two application tiers here; let's call this awesome stack, and let's go ahead and create and launch this. So this was my provisioning experience as that simple user. What this does behind the scenes is it basically creates an application that was designed for me. That application maybe runs automation rules, it maybe chooses the right cloud for me; it's really up to me as the admin what I want to expose to my users. So I wanna log in as the administrator and maybe expand the permissions just a little bit, so Luke will be able to do a little bit more. Also, just to mention, you can see that there's a little clock icon here. This means that there's a reclamation policy, that there's basically a lifetime on this application; it will be terminated after one day, I think, and we can also take a look as the servers come up soon. All right, so I'm also logged in here as a bit of a more advanced user, and this will be ob1@scalr.com. So now I'm logged in as the admin, and you can see that I am in the same PCI production environment. The name of the environment is up here, PCI production; if I go back to Luke, I'm also here in PCI production. And here as the administrator, obviously I have greater permissions, I'm able to see more stuff, so I can see the automation scripts that are gonna be running, and, more interestingly, I can see all the applications that my end users are provisioning. So let's take a look under farms here; that's what we call applications in Scalr speak. I can scroll down here, and I can take a look at everything that Luke, luke@scalr.com, has been doing, including the awesome stack application that he just provisioned that is currently running, and I can also take a look at the servers that are being provisioned as a result of it.
So if I go into one of these applications, so we saw that OpenStack application, but let's take a quick look at another one. As the administrator, this is what it would look like as I'm building these applications. I have three tiers here: my front end tier, my database tier, and my load balancer tier. I want two servers for each one of these tiers, and I wanna run some orchestration rules on it, event-based automation, or run my Chef recipes. I don't have time to go into this at too much length, but the point I wanna make is that this is what is behind the scenes of that one-click provisioning that Luke was able to do. I had Obi-Wan, as the administrator, build this farm, and he created a farm template, which is essentially a textual representation of this application that we were able to see here. With that farm template created, I'm now able to either deploy this to other clouds or just present it as a catalog item to my end users. The last thing I wanna do is, as we said earlier, give Luke a bit more permission to do more stuff, and I think the simplest thing we can do is allow more instance types, because right now we made that decision for him. Let's say that I wanna give Luke the permission to choose between different instance types. So I'm gonna log out to my management here. As the administrator, I only have access to two environments. This is that PCI production environment. These are some of the clouds that I'm managing from this environment, and let's take a look at the policies that are being enforced here. I have my reclamation policies, my container policies. Let's take a look at my OpenStack policies. So this would be my policy around instance types. So for the appropriate OpenStack tenant, I want to basically make sure that, let's say, he can choose between these four instance types here, and I'm gonna go ahead and click okay.
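The actual farm template format isn't shown in the talk, so the structure below is hypothetical; it only illustrates the idea of a textual representation of an application like the one in the demo: tiers, server counts per tier, and the orchestration hooks attached to each tier.

```python
# Hypothetical farm template structure -- not Scalr's real format.
# Mirrors the demo application: three tiers, two servers each, with
# event-based automation hooks attached per tier.
farm_template = {
    "name": "three-tier-app",
    "tiers": [
        {"role": "load-balancer", "servers": 2,
         "on_boot": ["configure-vips"]},
        {"role": "front-end", "servers": 2,
         "on_boot": ["chef::frontend"]},
        {"role": "database", "servers": 2,
         # When a database server goes down, notify the front-end tier,
         # as the talk describes under application lifecycle automation.
         "on_host_down": ["notify-front-end"]},
    ],
}

def total_servers(template):
    """Sum the servers across all tiers of a template."""
    return sum(tier["servers"] for tier in template["tiers"])
```

Because the template is plain data rather than clicks in a UI, the same description can be deployed to another cloud or published as a one-click catalog item, which is exactly the reuse the talk describes.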
And you can also conditionalize this based on the operating system or based on the location that you provision into, and you can get pretty fine grained with this. All right, so we will go ahead and save this. All right, now let's get back to Luke here. Let's get back to our service catalog here, just hit it with a quick refresh, and we can go ahead and go back to our simple stack. We're going to the same application. Now, when I go here, you can see this little police badge. This basically says that we have a policy running here, and now I'll have the selection that we just prepared for Luke. So this is just one example out of the different policies that you can enforce, but the point here is really that you're able to think of it as a window of permissions that you can slide as open as you want it to be. There's this whole spectrum of users between Luke and, on the other side of that, Obi-Wan; there are all these different users between them, and you're able to build customized workflows to make sure that they are most productive. And beyond that, you also have the other side of self-service, which is, for example, financial reporting, or discovering existing infrastructure that you already have as you're moving to the cloud, or the migration stories, or those types of things. So in this talk, we really only had time to focus on this side of things, but if this is something that you want to learn a little bit more about, we're at booth B12. We also have Dan here with a bag of t-shirts that he's ready to hand out to anyone who'll give him a high-five, and that's pretty much it. So I'm happy to answer any questions that you might have either here or at the booth, yep. So Scalr has an event-based orchestration engine. Basically that means that an event happens, like an IP address was assigned or an EBS volume was attached, something happened, and then you can run a script on your machine.
And that script can be in any language that will run on that machine, and it can live on GitHub or locally. But instead of that, you can also use your Chef recipes or Puppet or Ansible. Yeah, that's the long answer. All right, I think I'm pretty much out of time, so thank you so much, everybody. And again, booth B12, right by the entrance. Feel free to stop by and say hello and ask some questions. All right, thanks, everybody.