Good afternoon, my name is Andy Walton. I'm the director of technical sales for Cirba, and today I'm going to talk about densifying OpenStack with predictive analytics. Cirba is an analytics provider; we make software, and we're based out of Toronto, Ontario. We've got some of the world's largest organizations, and some of the smallest, using our software to optimize their hybrid cloud environments. Today I'm specifically going to focus on OpenStack. So, OpenStack and the enterprise. Obviously one of OpenStack's biggest strengths is KVM: it's cost effective, and it's an open alternative. One of its biggest weaknesses is also KVM, and I could actually say this about any of the hypervisors, whether it be VMware or PowerVM or Hyper-V. The management ecosystem around KVM is still maturing, and Ceilometer, the technology we use to actually go and get the data, is still going through major changes; it does every single release. But it's this last piece here, VM placement, density optimization, and software license optimization, that presents many challenges in private clouds in terms of getting those answers right. And like I said, that applies to any hypervisor, not just KVM running underneath OpenStack. So the analytics that you need to understand supply and demand in these environments are critical. If you've been by our booth, or you're going to visit afterwards hopefully, we liken this to a game of Tetris, where the playing field is all your infrastructure capacity and every workload represents different supply and demand. The workload demands are like the Tetris pieces: they come in different shapes and sizes, workloads that are busy in the morning, some that are busy in the evening, some that are disk or network IO intensive, some that are CPU or memory intensive.
And they all have different requirements as well in terms of their compute, their storage, their network, and even the software that runs on them. Many of the environments we see, before we get our hands on them and get our software and our analytics onto them, look like a badly played game of Tetris, where the gray space is stranded capacity and you've got workloads that are potentially exceeding operational boundaries and posing risk to the environment. Now, what we're seeing is that software-defined control is enabled by policies: the way you govern the environment, the way you manage it. So you've got different parameters that you need to watch. Things like density, performance, the applications' availability requirements, compliance in terms of which workloads run where, which ones can run together, and whether or not they can run internally or in the external cloud. Volatility, operational cycles, and software licensing requirements. And suppose this set of parameters were required in just one environment, say production critical: density has to be low, performance high, you need to make sure the applications are running, N+2 availability at the node level, rigorous compliance about where workloads will run, business-defined operational cycles, and host-based licensing parameters. That's tough enough to do when you're looking at a single environment, but when you're running multiple types of workloads, production, IT, cloud, batch, DevOps, what have you, you've got all these different parameters to control. That's very difficult to do with the operational tools available today, especially on maturing technologies like KVM. And really what we're finding is that software-defined control of these parameters is the answer.
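As a minimal sketch of what "governing by policy" might look like, the parameters just listed can be captured in a simple structure. This is purely illustrative; the field names and the `ProductionCritical` example are assumptions, not the product's actual policy schema.

```python
# Hypothetical policy model for software-defined control.
# Field names are illustrative, not the vendor's actual schema.
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    target_density: str      # "low" | "medium" | "high"
    cpu_overcommit: float    # vCPU : pCPU ratio ceiling
    memory_overcommit: float
    node_redundancy: int     # N+x spare nodes per availability zone
    compliance_zone: str     # where workloads may run
    external_cloud_ok: bool
    licensing_model: str     # "host" | "core" | "none"

# A production-critical environment: low density, N+2 availability,
# rigorous placement compliance, host-based licensing.
prod_critical = Policy(
    name="ProductionCritical",
    target_density="low",
    cpu_overcommit=1.0,
    memory_overcommit=1.0,
    node_redundancy=2,
    compliance_zone="internal-only",
    external_cloud_ok=False,
    licensing_model="host",
)
```

A batch or DevOps environment would get its own policy with looser goal posts, and the analytics engine then checks every placement against the policy governing that environment.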
So looking at utilization, over-commit ratios, security compliance: if you can control those using a policy, you're going to have a much more governed environment, especially when you take all these different parameters and all these different environments into consideration. There are really two pieces I wanted to talk to you about today: supply and demand. Let's start with supply, and optimizing it. Look at, say, your KVM virtual infrastructure, in a simple diagram of a couple of availability zones with the VMs running on the KVM nodes. There's your poorly played game of Tetris. The problems here are that you've got fragmented capacity, you've got potential contention risk, obviously software license issues if you've got any host- or core-based licensing and you want to try to contain those things, and then potential inefficiency in how you're using the infrastructure at the VM level, the node level, and even the storage level. So you apply analytics and actually start taking each one of those workloads and playing a game of Tetris, essentially cheating at it: understanding predictively what the history and the requirements of those workloads are across all those utilization parameters, and fitting them together, dovetailing those workloads. The analytics tell us this is what the environment looks like after the fact: much denser, making better use of the infrastructure, and essentially freeing up capacity. So we find Tetris is a great way to show this analogy and how you can free up that capacity. What does that result in? Optimized density, software savings at the host- and core-based level, lower operational risk, and lower volatility, because we're predicting what's going to happen. This is not application performance monitoring, where we're looking at something and reacting to what happened five minutes ago.
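The dovetailing idea above can be sketched in a few lines: place workloads using their predicted hourly demand profiles rather than their individual peaks, so that morning-busy and evening-busy workloads share a node. This is a toy first-fit-decreasing packer under invented numbers, not the actual placement engine.

```python
# Illustrative "cheating at Tetris": pack workloads by predicted hourly
# profiles so complementary workloads dovetail on the same node.

def peak(profile):
    return max(profile)

def pack(workloads, node_capacity):
    """workloads: {name: [hourly CPU demand]}; returns {node_index: [names]}."""
    nodes = []  # each node: {"names": [...], "profile": combined hourly demand}
    # Place the biggest predicted consumers first (first-fit decreasing).
    for name, profile in sorted(workloads.items(), key=lambda kv: -peak(kv[1])):
        for node in nodes:
            combined = [a + b for a, b in zip(node["profile"], profile)]
            if peak(combined) <= node_capacity:   # fits at every hour
                node["names"].append(name)
                node["profile"] = combined
                break
        else:
            nodes.append({"names": [name], "profile": list(profile)})
    return {i: n["names"] for i, n in enumerate(nodes)}

# Two morning-heavy and two evening-heavy workloads dovetail onto 2 nodes;
# sizing each pair by its combined peaks (60 + 60 > 100) would have needed 4.
demand = {
    "web-am1": [60, 60, 10, 10], "web-am2": [60, 60, 10, 10],
    "batch-pm1": [10, 10, 60, 60], "batch-pm2": [10, 10, 60, 60],
}
placement = pack(demand, node_capacity=100)  # -> 2 nodes, mixed am/pm pairs
```

The point of the example is the input data: it is history-derived demand profiles, not current readings, which is what makes the packing predictive rather than reactive.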
We're looking at the history, predicting what's going to happen, and then basically fixing the problem before it occurs. The way we visualize this for OpenStack and KVM, or for VMware or PowerVM or what have you, is essentially a spectrum, and it looks like this. At the top you've got your availability zones, your nodes in the middle, and the guests at the bottom. We get all our data from Ceilometer: we go out and pull that information in, and it gives us the configuration, relative workloads, and performance of each one of these things, and we paint it out on the spectrum. If you're in the red, you're at risk: you've violated a policy, the policies we described earlier, and anything in the red has to be dealt with quickly. Anything in the yellow is essentially waste. So when we paint things out in the yellow, we've got nodes that aren't using their infrastructure efficiently, and we've got VMs or guests where someone asked for an extra-large instance size and over time they're using the equivalent of a small. In the middle is the just-right zone; this is where you want to be operating, within these policy-based goal posts. So after the fact, after we publish our recommendations, you'll see things squish into the middle. We get rid of the risk and reduce the waste wherever possible, and it's the recommendations, whether for placements or allocation changes, that make that so. One other key element is software license control: looking at how you are using host- or core-based licenses. Placement is also a key consideration for savings on these types of licensing models.
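The red / yellow / just-right spectrum amounts to scoring each entity against policy goal posts. A toy version, with made-up thresholds where the real ones would come from the governing policy:

```python
# Toy version of the risk / waste / just-right spectrum. Thresholds are
# invented for illustration; in the product they come from the policy.

def classify(utilization, low=0.30, high=0.80):
    """Return which zone of the spectrum an entity falls in.

    utilization: predicted sustained utilization, 0.0-1.0
    low/high:    policy-defined goal posts
    """
    if utilization > high:
        return "red"     # risk: policy violated, deal with it quickly
    if utilization < low:
        return "yellow"  # waste: e.g. an XL instance doing a small's work
    return "green"       # just right: inside the policy goal posts

assert classify(0.95) == "red"
assert classify(0.10) == "yellow"
assert classify(0.55) == "green"
```

Publishing recommendations is then a matter of moving everything that classifies red or yellow toward green, via placement or allocation changes.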
Take something simple, like Windows or Linux VMs. Typically we see them scattered across the environment. Some customers don't care about those costs, maybe they're an educational institution, higher learning, what have you. But others do pay a fair bit of money for host- or core-based licenses, and if those VMs are scattered everywhere and you're paying on a host basis, you're potentially spending a lot of extra money. Some people artificially try to contain those things within boundaries or availability zones, but once you've got more than one software license you're trying to control, that becomes very, very difficult. So we use analytics to say, on a daily basis, let's make these things fit within a smaller subset of the hosts in the environment. There are big savings around that. The job is, number one, to shrink and isolate those types of VMs and optimize the placements. And then, on an ongoing basis, you can't just do this once and hope you stay within compliance; it'll sprawl again the next day as things come in and move around. We can configure the filter scheduler to work off these containment boundaries, which we've established with our predictive analytics. So that's a key component, and it's not just, say, Windows or Linux; it can be any host- or core-based license: Oracle, SQL Server for some customers, what have you. So that's the supply-side optimization. Let's talk a little bit about demand now. What we typically see with brand-new workloads coming online: somebody uses the OpenStack customer portal and asks for a new application to come online. Maybe it represents, whatever, five to ten VMs, requires a Red Hat OS and an Oracle database, has customer data on it, needs a certain SAN tier of storage, and should run on the West Coast. So this is almost like a manifest of requirements.
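The license-containment pass described above is, at its core, another packing problem: squeeze every VM carrying a host-based license onto the fewest hosts that can hold them, then hand that host set to the scheduler as a containment boundary. A toy sketch with invented VM "size" units:

```python
# Illustrative daily license-containment pass: pack all VMs carrying a
# host-based license (e.g. Windows, Oracle) onto the fewest hosts, so you
# license 2 hosts instead of 6. First-fit-decreasing on a toy size unit.

def containment_hosts(licensed_vms, host_capacity):
    """licensed_vms: {vm_name: size}; returns a list of per-host VM lists."""
    hosts = []
    for vm, size in sorted(licensed_vms.items(), key=lambda kv: -kv[1]):
        for h in hosts:
            if h["used"] + size <= host_capacity:
                h["vms"].append(vm)
                h["used"] += size
                break
        else:
            hosts.append({"vms": [vm], "used": size})
    return [h["vms"] for h in hosts]

# Six Windows VMs scattered across six hosts today fit on two licensed hosts.
windows_vms = {"w1": 8, "w2": 8, "w3": 4, "w4": 4, "w5": 4, "w6": 4}
hosts = containment_hosts(windows_vms, host_capacity=16)  # -> 2 hosts
```

In a real deployment the output would not be applied blindly: it becomes the boundary the filter scheduler works within, and the pass is re-run daily because sprawl reappears as workloads come and go.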
The issue with how that's handled today is that it's simple scheduling. If you provision this way, there's no consideration for the existing loads running in the environment, and again, I'm not just pointing this out for KVM and OpenStack; we see this across other private cloud platforms too. There's no fit-for-purpose analysis beyond the simple filters. So what we're trying to do here is programmatically determine, through a routing API that takes those requests in: where is the best place to run this? It's almost like hotels.com: you've got a guest with requirements and hosts with different capabilities and available rooms, so where should I go? The software programmatically, in real time, suggests the best place to run that workload, with policy-based analysis. The thing is, it's not just this self-service type of request, as we call it. There are other types of requests that will come in. You've got enterprise app deployments, great big apps coming online with, say, two or three months' lead time; migration and consolidation activities, where you're moving from one platform to another or bringing physical machines into the virtual environment; or existing applications migrating around. These represent different kinds of demand. Again, there's no predictive element to determine, on a time basis, when these things come online in, say, 30 days: do I have enough space? Do I have enough compute and storage? So we need to be able to reserve the capacity. And there's no future infrastructure modeling: what if I move out all the different kinds of hardware I'm running today and replace them with something else? And there's no rerouting analysis for workloads that are currently running: what if I take this from one environment and move it to another?
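The hotels.com analogy can be made concrete: filter candidate environments on the manifest's hard requirements, then rank what's left. All field names and environment entries below are invented for illustration; the real manifest arrives over the routing API.

```python
# Sketch of manifest-based routing: hard filters first, then rank the
# surviving venues. All names and fields here are hypothetical.

manifest = {
    "vms": 10,
    "os": "redhat",
    "database": "oracle",
    "data_class": "customer",   # drives compliance filtering
    "storage_tier": "san-tier1",
    "region": "west",
}

environments = [
    {"name": "ost-west-1", "region": "west", "tiers": {"san-tier1"},
     "compliant_for": {"customer"}, "headroom_vms": 40},
    {"name": "ost-east-1", "region": "east", "tiers": {"san-tier1"},
     "compliant_for": {"customer"}, "headroom_vms": 200},
    {"name": "pub-cloud", "region": "west", "tiers": {"std"},
     "compliant_for": set(), "headroom_vms": 10**6},
]

def route(manifest, environments):
    """Filter on hard requirements, then prefer the most headroom."""
    candidates = [
        e for e in environments
        if e["region"] == manifest["region"]
        and manifest["storage_tier"] in e["tiers"]
        and manifest["data_class"] in e["compliant_for"]
        and e["headroom_vms"] >= manifest["vms"]
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda e: e["headroom_vms"])["name"]

best = route(manifest, environments)  # -> "ost-west-1"
```

The east environment fails the region requirement and the public cloud fails compliance and storage tier, which is exactly the fit-for-purpose analysis simple filters alone don't give you: the "best" venue, not just a legal one.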
So again, what we're doing is taking in these different types of demand models, ones that aren't necessarily coming online immediately, but sometime in the future. What you get with this demand management and optimization is best-execution-venue routing: where's the best place to run these workloads? Detailed VM placement: which nodes specifically should these workloads run on, and what sizing and allocation should we give them? Local workload rebalancing, and then global workload rerouting to different environments, not just OpenStack but other hypervisors that may be running internally in your private cloud, like VMware, or even out into the public cloud: AWS, Azure, SoftLayer. And this gives you a closed loop of demand management and forecasting: as you're doing the supply and demand modeling, we're continually analyzing and determining, do we have enough supply in, say, 60 days to meet the demand? The way we visualize this, and this is the interface we have, and again we'll show you a full demo if you want to come by the booth, is essentially all the new applications on the left side, and we programmatically and in real time match them to the different destinations you have available, whether that be internal environments, OpenStack, or even external public cloud, what have you. I wanted to walk through a more detailed workflow of a real-world example. This is Cisco, and this is their network services orchestration tool, NSO, provisioning NFV types of workloads in real time: routers, firewalls, switches, et cetera. At the top you've got the Cisco layer, which is essentially capturing the demand and doing the work around provisioning and orchestration, so think of it as the arms and legs in the environment.
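The "enough supply in 60 days?" closed-loop check above boils down to comparing today's free capacity against the dated reservations in the pipeline. A toy sketch, with invented units and dates:

```python
# Toy closed-loop forecast check: given today's free capacity and a pipeline
# of dated reservations, will supply still cover demand 60 days out?
# Units, dates, and counts are invented for illustration.
from datetime import date, timedelta

def supply_ok(free_vm_slots, reservations, horizon_days=60, today=None):
    """reservations: list of (start_date, vm_count). Returns (ok, remaining)."""
    today = today or date.today()
    horizon = today + timedelta(days=horizon_days)
    booked = sum(n for start, n in reservations if today <= start <= horizon)
    remaining = free_vm_slots - booked
    return remaining >= 0, remaining

pipeline = [
    (date(2016, 5, 10), 40),   # enterprise app with months of lead time
    (date(2016, 6, 1), 25),    # migration wave
    (date(2016, 9, 1), 80),    # beyond the 60-day horizon, not counted yet
]
ok, remaining = supply_ok(100, pipeline, today=date(2016, 4, 28))
# ok is True with 35 slots of headroom inside the horizon
```

When `ok` comes back `False`, that's the trigger to add supply or reroute demand before the shortfall actually lands.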
Cirba sits in the middle as the analytics and software-defined infrastructure control, with a real-time API, the routing and reservation capability, and continual supply optimization. And at the bottom, typically when working with Cisco, the supply is OpenStack with KVM. So the first thing we need to do is the supply optimization, the piece about continual analysis, moving workloads around, and changing allocations, so we get an understanding of just how much supply we have available. The second piece, think of this as the real-time workload request: a new workload comes in, demand is initiated through the Cisco NSO system, and the routing request comes to us through a real-time API. It's the manifest of the requirements of that workload, of that application, whatever they're asking for. We do an analysis of any outstanding reservations in the demand and supply pipeline, we look at the forecast of supply and demand, and essentially, at a high level, we make a routing decision as to which environment can support it. At this level we're just looking at, you know, is it the OpenStack environment sitting in Denver, or is it some other location? We come back and confirm that reservation: yes, we've got space here, here's the best place. Programmatically, we create that reservation and decrement, lock, that capacity inside our control module, send the response back up to NSO, and it confirms its demand reservation and captures the details of where that workload is going to run. Now, typically with this model it's on-demand, a self-service request, so provisioning is immediately initiated. For a future workload request, it's basically going to sit there until such time as that workload is due to come online, and then it initiates the demand.
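The reserve-decrement-confirm handshake just described might be sketched like this. Class and method names are invented; the real exchange happens over the vendor's real-time API with NSO driving provisioning.

```python
# Minimal sketch of the reservation handshake: route, lock capacity,
# then fulfill (or cancel) when provisioning resolves. Names are invented.

class CapacityControl:
    def __init__(self, environments):
        self.free = dict(environments)   # env -> free VM slots
        self.reservations = {}           # res_id -> (env, vms)
        self._next_id = 0

    def reserve(self, vms):
        """Pick the best environment with room and lock the capacity."""
        candidates = [e for e, free in self.free.items() if free >= vms]
        if not candidates:
            return None                  # no venue can take the demand
        env = max(candidates, key=self.free.get)
        self.free[env] -= vms            # decrement: capacity is now locked
        self._next_id += 1
        self.reservations[self._next_id] = (env, vms)
        return self._next_id, env

    def fulfill(self, res_id):
        """Provisioning completed: reservation is consumed, not released."""
        return self.reservations.pop(res_id)

    def cancel(self, res_id):
        """Reconciliation path: release the locked capacity back to supply."""
        env, vms = self.reservations.pop(res_id)
        self.free[env] += vms

ctl = CapacityControl({"denver-ost": 50, "austin-ost": 20})
res_id, env = ctl.reserve(vms=10)   # routed to the roomiest environment
```

The key property is that the decrement happens at reservation time, not provisioning time, so a future workload's capacity can't be given away to a later request.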
What we do now is analyze the latest state and bookings. If it's a future request, obviously things may have changed. If it happens immediately, then we determine, okay, which host and which storage are we going to put this workload on, and we send that information back. It's kind of like checking into a hotel: you know which hotel you're staying at, but it's when you arrive at the front desk that you get your room number. So we determine which node it goes on, we send that information back to NSO, it initiates the provisioning and executes it in real time, and then that demand is actually placed on the supply. Again, think of us more as a brain talking to the arms and legs, because with the analytics we're not trying to reinvent the wheel here. Provisioning is confirmed and completed, the reservation is fulfilled, and then there's a reconciliation process where we look at the latest state and decrement the supply, or increase it, as it were, if it's a reservation coming out of the system. So this is just a high-level description of how we work, and again, some of these products and names can easily change. We've seen others here where it might be something like vRealize Automation, or something from IBM, the Cloud Orchestrator. But the idea is that the process is very similar. So again, at a high level, we're highly automated and integrated. We basically sit as the control brain between the cloud management platforms, whether that be OpenStack or vRealize or ICO. We have an API that talks to those functions and does the global routing and the local optimization, through policy and automation. We're talking to the various hypervisors, whether that be KVM, VMware, RHEV, Hyper-V, or PowerVM, and reserving and locking the capacity for all those different components: compute, storage, software.
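The hotel check-in step, picking the room only when the guest arrives, can be sketched as a node-assignment function run against the latest state at provisioning time. Node names and loads below are illustrative.

```python
# Sketch of the "front desk" step: the reservation named the environment
# (the hotel); only at provisioning time do we pick the node (the room),
# from the latest state. Node names and loads are invented.

def assign_node(nodes, vm_demand):
    """nodes: {node: (used, capacity)}. Choose the node the VM fits on
    with the most post-placement headroom, or None if nothing fits."""
    fitting = {n: cap - used - vm_demand
               for n, (used, cap) in nodes.items()
               if cap - used >= vm_demand}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

latest_state = {"node-a": (70, 100), "node-b": (40, 100), "node-c": (95, 100)}
room = assign_node(latest_state, vm_demand=20)  # -> "node-b"
```

Deferring this decision is what makes future reservations robust: the environment-level booking holds the capacity, while the node-level choice always reflects how the environment actually looks on the day the workload arrives.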
The benefits of this, why are customers doing this? The supply optimization typically yields what we see as an increase of around 48% in density, which relates to hardware savings of around 33%, and software license savings, from intelligently placing workloads in the right place, of around 55%. There are also softer benefits: reduced capacity risk and, of course, the ability to increase the automation in your private cloud. I wanted to mention one other component, something we just launched. Everything we're doing right now we put on-prem: when we're working with OpenStack KVM environments, we install an application in its own VM or on a physical system, and it's got a backend database. We just announced a SaaS-based offering, which today is just for VMware, in the public cloud. Essentially, there's a connector that sits on-prem, analyzes the local private cloud, and pushes those analytics up to a cloud instance we're running on IBM SoftLayer, where there's a person actually watching, essentially an extended member of your operations management team. So I just wanted to mention that it's not just an on-prem deployment; we now have a SaaS-based offering. I want to thank you very much for your attention. We're right over there; if I threw this water bottle, I could hit our booth. You can see it right there near that tower. So come on by if you'd like to see a live demonstration. We've got all the software running, and we can talk to you folks about how we could do this for your environment. Thank you very much for your attention today.