All right — can you hear me? Good. Good afternoon, everyone. My name is Andy Walton, and I'm Director of Technical Sales for Cirba. Today I'm going to speak to you about densifying OpenStack environments with software-defined infrastructure control: essentially, analytics that help you scour through your cloud and virtual environments. We're going to focus on OpenStack today.

Let's talk about OpenStack and the enterprise. One of OpenStack's biggest strengths is KVM: a cheaper, open alternative. But what we've found working with our customers is that it also has some weaknesses. For KVM, the management ecosystem is still maturing — Ceilometer has been going through major changes in terms of the sort of information it delivers — and there are recurring challenges we see in our customers: VM placement, density optimization (making sure environments are as densely packed as possible), and software license containment and optimization. So analytics that understand both the supply and the demand in these types of environments are critical.

If you go to our booth over here, we're playing a game of Tetris — you've probably seen it, the big blocks and everything — guessing how many VMs fit in our private cloud. If we use that analogy and look at the infrastructure capacity of an existing virtual environment, workload demands are like Tetris blocks: they all have different shapes and sizes depending on the time of day, and they all have certain requirements — compute, storage, network and software. As each of those workloads falls into place, what we find is a lot of stranded capacity, a lot of spaces you could be filling in better, and some workloads that are potentially at risk because they're running above a certain policy level. So the Tetris analogy plays very well for what our software does.

To control where those workloads land — not just in terms of utilization, but which workloads go together and which should be kept apart — you need a very good policy engine that looks at questions like: What's the optimal density of an environment? What performance does each VM and application need? What should the availability of each workload be set at? Are there compliance rules about customer data sitting on certain systems or running in certain environments versus others? What about volatility and operational cycles — monthly and yearly peaks — and licensing considerations around the software?

Take a production-critical environment: that's going to have certain characteristics in terms of density, a certain amount of availability — N+2 on the hosts — rigorous compliance about which workloads must be kept together and which kept apart, and host-based licensing schemes. Now think beyond production-critical to all the other environments where you might be running OpenStack and KVM — production cloud, pre-prod, dev/test — and every one of them is governed differently in terms of how you want to operate it.
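To make those dials concrete, here's a minimal sketch of how such a policy might be captured as data. This is purely illustrative — the schema, field names and numbers are my own assumptions, not Cirba's actual policy model:

```python
# Hypothetical illustration only -- not Cirba's actual policy schema.
from dataclasses import dataclass, field

@dataclass
class PlacementPolicy:
    name: str
    target_cpu_util: float          # upper bound on host CPU utilization
    host_redundancy: int            # spare-host headroom: N+1, N+2, ...
    enforce_affinity: bool          # honor keep-together / keep-apart rules
    customer_data_zones: list = field(default_factory=list)  # compliant zones
    license_containment: bool = False  # pack per-host/per-core licensed software

# A production-critical environment is governed tightly...
production_critical = PlacementPolicy(
    name="production-critical",
    target_cpu_util=0.55,           # leave headroom for peaks
    host_redundancy=2,              # N+2 host availability
    enforce_affinity=True,
    customer_data_zones=["us-west-secure"],
    license_containment=True,
)

# ...while dev/test can be packed far more densely.
dev_test = PlacementPolicy(
    name="dev-test",
    target_cpu_util=0.85,
    host_redundancy=0,
    enforce_affinity=False,
)
```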
It's impossible to keep control of all this by pulling manual levers or piecing together information that sits scattered across some of today's systems. It really has to be governed by a detailed policy engine that can look at these factors and dial the knobs: what's your density going to be, what are the security requirements, and so on. So we've found that policy is a critical component inside these kinds of analytics.

Let's talk a little about supply optimization before we get into demand optimization and optimizing the density of these virtual environments. Take a very simplistic view of a KVM virtual infrastructure: different availability zones, the KVM hosts, and the VMs themselves. Play Tetris with what a typical environment looks like and, as you can see, it's somewhat fragmented, with every one of those workloads having its own requirements. You have fragmented capacity, potential contention risk, and inefficiencies in terms of software licenses and what's running where.

Essentially, what we do is play Tetris with these workloads, using our policies to sift through them and segment them together. If you knew what the workloads looked like, if you knew their requirements in advance — if you could cheat at the game of Tetris — this is what the environment would look like. You're defragmenting the capacity, opening up a fair bit of space for your workloads to run, while keeping some space at the top for policy headroom: N+1 and so on. The result is optimized density, software savings, lower operational risk for the workloads and the hosts themselves, and lower volatility overall, because you're now predicting what the environment will look like rather than guessing.

The way we visualize this through an OpenStack KVM console is a simple view that breaks out the data coming in from OpenStack and the KVM hosts: availability zones, hosts and guests are laid out in rows across a spectrum. Anything sitting on the left-hand side is at risk: according to the policy you've set up, those hosts or VMs don't have enough resource — it could be CPU or memory, it could be I/O issues, storage issues, what have you. Things sitting on the right-hand side are wasteful. Quite often in cloud environments where users request their own resources through, say, the OpenStack portal, you'll see those VMs bunched up on the lower right, because everyone asks for larger infrastructure than they really need; after 30, 60, 90 days, you find they're using a fraction of what they asked for. We see this constantly in cloud and private cloud environments. The policy goalposts — where you want the environment to operate — sit right in the middle: not too hot, not too cold, just right.

A healthier environment looks like this: through placement recommendations and fixes to the allocations where those workloads run, we're playing Tetris with the workloads, de-risking the environment while getting rid of a lot of the stranded capacity.
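As a rough illustration of that left-to-right spectrum, here's how a host or VM might be bucketed against the policy goalposts. The thresholds, and collapsing everything to a single utilization number, are simplifications of my own, not the product's actual scoring logic:

```python
# Hypothetical sketch of the at-risk / just-right / wasteful spectrum.
def classify(util: float, low: float = 0.30, high: float = 0.75) -> str:
    """Bucket a host or VM by observed utilization (0.0 - 1.0).

    low/high are the policy goalposts; a real policy would be set
    per environment and weigh CPU, memory, I/O and storage together.
    """
    if util > high:
        return "at risk"      # left-hand side: not enough resource
    if util < low:
        return "wasteful"     # lower right: over-allocated, barely used
    return "just right"       # the middle: where the policy wants you

# e.g. a VM that asked for 16 vCPUs but averages 5% busy after 90 days:
print(classify(0.05))  # -> "wasteful"
```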
Again, you can't do that by interpreting charts and metrics by hand. What you really need are concrete recommendations — changes in allocations, changes in placements — to drive density higher and free up that white space. So that's what a control console looks like.

Another key component, which I brushed past in the policy discussion, is software license control: placing workloads in such a way that you get savings under license models priced per host or per core. Take a really simple example: Windows VMs and Linux VMs. Typically those workloads get scattered across all the different hosts, which forces companies to license all of that infrastructure under the per-host or per-core licensing models. There are ways to put in containment boundaries and clusters, but it's typically very difficult to maintain on an ongoing basis, and the people who care about the savings are usually not the people actually running those environments. So the analytics, as part of their algorithms, can also aim to minimize these licenses: they'll contain host- or core-based licenses — things like Microsoft Datacenter licensing or Oracle — on a certain percentage of the hosts, and that drives tens of thousands, and in some cases for our customers millions, of dollars in savings in their virtual environments. And it's not enough to do a one-time analysis — analyze, isolate and optimize those placements — you have to do it on an ongoing basis, and it's not enough just to build the affinity or anti-affinity rules. The filter scheduler can be configured to work off these containment boundaries as established by Cirba, so that's one of the places where some of these rules might live longer term.
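For reference, stock Nova already has primitives that can express this kind of containment boundary: host aggregates plus the aggregate extra-specs filter. The snippet below is a generic sketch using era-appropriate option and command names — they vary by release, and this is not Cirba's integration itself:

```
# nova.conf -- make the filter scheduler aggregate-aware
[DEFAULT]
scheduler_default_filters = AggregateInstanceExtraSpecsFilter,RamFilter,ComputeFilter
```

```
# Fence per-host-licensed workloads onto a subset of hosts
nova aggregate-create windows-licensed
nova aggregate-add-host windows-licensed compute-01
nova aggregate-set-metadata windows-licensed license=windows-dc

# Tag the flavor used for those guests so the filter matches the aggregate
nova flavor-key m1.win.large set aggregate_instance_extra_specs:license=windows-dc
```

This steers tagged flavors onto the licensed hosts; keeping everything else off those hosts takes additional isolation filtering, which is part of why maintaining containment continuously is hard without analytics driving it.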
Let's go back to that drop-zone diagram and talk about demand optimization — new workloads coming in. We've looked at analyzing the existing systems day to day: the hosts, the VMs, the availability zones. But what about new workloads? These can be generated by self-service requests coming in from the OpenStack portal. A user might ask for, as an example, a Red Hat OS with an Oracle database; it holds customer data, it needs a certain tier of storage, and it should run on the West Coast. Essentially, this is new workload demand being generated, with customers asking for specific requirements. The challenge today, if you're accepting these requests through the cloud portal, is that the scheduling algorithm behind them is fairly simple. There's no consideration of what's currently running — the environment doesn't know how hot or heavily loaded things are right now — and there are no fit-for-purpose analytics beyond the simple filters to work out exactly where that workload should run across all the different availability zones, and potentially across other hypervisors you're running as well: VMware, what have you.

If you add demand management with an intelligent routing and reservation system, those types of requests can come through and programmatically ask: Cirba, where do I route this workload? What's the best place for this thing to run? The analytics look at the requirements of the self-service request and match them to the amenities and available space of the existing places where that workload could run. Think of it like hotels.com: I have the requirements of a guest and the amenities of the hotel, and we're matching the two up. That results in much higher density than you can achieve on a manual basis, lower service-level risk, compliance enforcement, and cloud process automation. The user doesn't see anything different; there's just an API call coming in, and we come back saying, here's the best place for this to run.

The other really important point is that those workloads aren't always coming online immediately. In a lot of cases — dev/test — you spin them up right away, but what we see more and more in the enterprise are requests saying: 30 days from now, spin up 25 of these things. So you need the ability not only to route to the right place but to reserve the capacity — the storage and the CPU — so that when those things do come online, the capacity is there for them.

And it's not just self-service requests; there are other types of onboarding into the environment as well. Enterprise application deployments: new applications coming online that aren't requested through the portal. Migrations and consolidations: a lot of customers we talk to are moving off one hypervisor, one platform, onto OpenStack and KVM, so a whole batch of workloads comes on board at once. And existing applications may be moving over from a legacy virtual environment. Those are all examples of inbound requests.

Part of the challenge is that cloud stacks don't really manage the existing workloads — they just start new ones. There's no predictive element of what will happen when you onboard these workloads, no reservation capability, no future infrastructure modeling, and no rerouting analysis if you're moving from one environment to the next. Those are big gaps today as you take these other types of workloads and onboard them. When we can route across the different environments — enterprise-wide routing, not just for OpenStack and KVM but for the other hypervisors in the environment as well — with a true capacity reservation system that actually holds reservations for the workloads coming on board, plus external and hybrid modeling, you get this whole concept of automation without changing the way the customer sees the process working.
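As a back-of-the-napkin sketch, a programmatic "where do I route this?" call could look something like the following. The endpoint, payload fields and response shape are all invented for illustration — this is not Cirba's actual API:

```python
# Hypothetical routing/reservation request; endpoint and schema invented.
import json
import urllib.request

request_body = {
    "workload": {"os": "rhel", "software": ["oracle-db"],
                 "vcpus": 8, "memory_gb": 32},
    "requirements": {                 # the "guest's" requirements
        "customer_data": True,        # must land in a compliant zone
        "storage_tier": "tier-1",
        "region": "us-west",
    },
    "start_in_days": 30,              # future start => reserve capacity
    "count": 25,
}

req = urllib.request.Request(
    "https://analytics.example.com/v1/route",   # placeholder URL
    data=json.dumps(request_body).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    decision = json.load(resp)

# Hypothetical response: the zone whose "amenities" match the request,
# plus a reservation id holding the capacity until the start date.
print(decision["availability_zone"], decision["reservation_id"])
```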
The reservation console — the way this looks from a software perspective, and I'm not going to demo the software up front in the next nine minutes, but you can come have a look — shows the requests like this. New onboarded applications come into the environment and show up as new guests with specific time frames: some will be spun up immediately, some are scheduled to come on board in, say, 15 or 30 days, what have you. On the right-hand side are all the different infrastructure groups — availability zones, in the case of OpenStack — showing the environments best suited for these workloads as well as the environments that rejected them. They might have been rejected because there's no available space, or because the amenities don't match: someone needs a certain tier of storage, someone needs customer data running in a certain secure environment, and those things don't line up. So the software, from an intelligent-routing perspective, figures out the best place to put these things and passes that information back to the provisioning engine.

This detail diagram gets into more depth than I want to here, but what I wanted to show is that there's a whole process for demand management. We call it a swim-lane diagram. At the bottom is the ongoing supply — the virtual and physical infrastructure, understanding what's there. That's what we call supply optimization.

The process flow looks like this. New demand is initiated: someone asks for new applications to be spun up. From the user's perspective nothing looks different; they simply initiate the routing of demand. That calls us through our API, and the demand routing happens: we run our analytics, look at the outstanding reservations, forecast the supply and demand, and send that demand to the right place. If the workload is being spun up immediately, we continue straight into the provisioning process. If it's coming on board at a future date, we create a reservation in the hosting location, locking in the capacity and taking it away from the available supply. The API call returns, and the reservation is confirmed to the user — again, they notice nothing different; it's just, yes, your 10 VMs are coming on board at that future date.

When that day arrives, whether it's on demand or in the future, the provisioning execution occurs: we analyze the latest state of the bookings and produce the optimal placement. It's not unlike a hotel: you made a reservation and knew you were staying at a certain property, and when you arrived they told you which room. Same idea here — you're in this specific availability zone, and you'll be staying on this particular host. We hand that back to the provisioning process to initiate provisioning, however you're going to do that. Provisioning is confirmed and completed, the requester is alerted that the instance is now available, and then a reconciliation process runs to analyze the latest state and expire stale reservations — if someone said a workload was coming online and it never did, we remove it from the queue and release the capacity.
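To condense that swim-lane flow into something you can step through, here's a self-contained toy model of the reservation ledger it implies. The most-headroom routing rule and every name here are my simplifications — real analytics would match amenities and policy, not just free capacity:

```python
# Toy model of the route -> reserve -> reconcile flow described above.
from datetime import date

class CapacityLedger:
    """Tracks free capacity per zone, net of outstanding reservations."""

    def __init__(self, free_vcpus_by_zone):
        self.free = dict(free_vcpus_by_zone)
        self.reservations = []  # (zone, vcpus, start_date)

    def route(self, vcpus, start):
        """Demand routing: pick the zone with the most headroom."""
        zone = max(self.free, key=self.free.get)
        if self.free[zone] < vcpus:
            raise RuntimeError("no environment can accept this request")
        # Reservation: lock in capacity now, even for a future start date.
        self.free[zone] -= vcpus
        self.reservations.append((zone, vcpus, start))
        return zone  # confirmed back to the user, who sees nothing new

    def reconcile(self, today, came_online):
        """Expire reservations that never materialized, freeing capacity."""
        for r in list(self.reservations):
            if r[2] < today and r not in came_online:
                self.free[r[0]] += r[1]
                self.reservations.remove(r)

ledger = CapacityLedger({"az-east": 64, "az-west": 128})
zone = ledger.route(vcpus=40, start=date(2016, 8, 1))  # "30 days from now"
print(zone, ledger.free)  # az-west, with 40 vCPUs held in reserve
```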
Just to wrap up: what we've talked about here today is policy-based control with supply and demand management. We just recently announced the integration with OpenStack and KVM on the supply-side optimization, at least from the hypervisor perspective, and we already had the integration with the Nova scheduler on the demand side. This technology works not just with KVM environments but with Hyper-V, PowerVM, Red Hat and VMware, across compute, network, storage and software analysis, and it works with the cloud management platforms — OpenStack and the rest — to do the actual work. We think of ourselves as a kind of brain: we talk to the arms and legs, and we get monitoring information from the eyes and ears. That's the complete fit, and that's where we see ourselves sitting between the hypervisors and resources on one side and the cloud management platform on the other.

The final slide is really the benefits, and why people are doing this today. When you do this with intelligent analytics, the average increase in VM density we've seen across our traditional customer base approaches 50%, which translates to average savings of around 48% on the hardware side. And the software license savings, when you can intelligently place these workloads and minimize where they actually run, are somewhere in the neighborhood of 55%. Obviously your mileage may vary, but that's what you get from this whole Tetris approach: packing workloads together safely, eliminating risk and increasing the density. So it reduces capacity risk and increases automation.

We've got a few minutes left, but I'm going to close down the presentation here. I could probably throw a rock over to our booth, which is right over there, if you'd like to come by, ask additional questions, or see a demonstration. Thank you very much for your attention today, and have a good rest of the conference.