Hello, I'm Travis Newhouse, Chief Architect at AppFormix. At AppFormix, our goal is to make it easy for operators to realize reliable infrastructure. Traditionally, as an operator, you use monitoring tools that alert you when there's an issue inside your environment, and then the operator needs to get involved in debugging and solving the problem. With AppFormix, our goal is to close that loop with automation: we gather metrics, analyze those metrics, and then take action to improve the reliability and efficiency of your infrastructure. As an overview of our product, we've built a data platform that performs distributed, real-time analysis of metrics across your cloud infrastructure. On top of that platform, we've built an interactive dashboard, an alarm system for monitoring, and state-driven orchestration, so that as we collect and analyze data, we take action to affect where and how workloads are placed inside your infrastructure to meet an SLA that you configure as an operator. In addition, we take that data and provide long-term trending reports as well as chargeback, so you can bill your users. You can understand how the capacity of your infrastructure is changing over time, how resources are being used inside your infrastructure, and where the demand from your users is for compute, network, storage, and memory. And finally, you can provide a self-service experience to your users: the same experience the operator receives for understanding utilization across the infrastructure, users can also have for their own instances and projects inside of OpenStack. We've built this product to work across a variety of cloud infrastructure, including OpenStack and Kubernetes, as well as integration with AWS, so that you can have a single pane of glass to manage multiple types of cloud infrastructure in your environment.
Our solution is fully on-premises and 100% software; it runs inside your enterprise. It's built as a scalable architecture, with very simple, easy-to-install software that's non-disruptive to OpenStack and non-disruptive to Kubernetes. We layer on top of your existing infrastructure, provide you with insight into how resources are being utilized in your environment, and allow you to automate and orchestrate workload placement inside OpenStack and Kubernetes. Today, I want to focus mostly on a live demonstration. I'll first go through some of the interactive features in our dashboard, showing you charting, alarms, reports, and capacity planning. Then I'm going to describe how you can configure an SLA and demonstrate two uses of that for automation. In the first, we automatically detect and migrate instances off of a host that's not meeting the service level you've configured. In the second, I'll show how our Nova scheduler plugin places new virtual machines only on hosts that are meeting the SLA you've configured. Let me quickly show you the experience here from start to finish. We integrate with Keystone for authentication, so you can log in as an admin or as a regular tenant user. The view you see when you log into AppFormix is role-based and depends on whether you are an admin or a regular user. In this case, I've logged in as an admin, and I'm able to visualize and see all of the infrastructure in this OpenStack environment. We automatically discover all the virtual machines, all the projects, all the hosts, and all the host aggregates. A top-level dashboard gives you a snapshot of how your infrastructure is performing, and this performance is based on a configurable SLA, which I'll talk about later in the demo.
We can quickly see which instances are unhealthy or at risk and which hosts are unhealthy or at risk, and the policies that determine those statuses are configurable. A great feature for operations is the ability to quickly find any entity you're looking for inside your OpenStack environment, whether it's a project, a host, or an instance. You need to be able to navigate between the virtual world, where you have projects and virtual machines, and the physical world, where you have actual hosts, host aggregates, and the real resources running those workloads. You can search by anything you want: IP address, instance name, project name. If I look for a user's project, I can quickly click on it and see the quota that has been allocated to that project, how much of that quota the project is using, and a summary of all the instances running inside that project. I can then navigate back to the physical side by seeing that these instances are running on different physical hosts. I can view a snapshot of resource utilization by these instances; this is real-time data streamed from our agents, which collect data, all the way up to the dashboard. If I want to see the context in which an instance is running, I can click on its host and see the other instances that are competing for resources with that same instance on the same physical host. And I can drill down into charts that show me much more detailed metrics about the physical infrastructure, so I can see things like the disk read rate and disk response time, and I can understand what's happening in the network: the packet rate, error rate, and drop rate, all on a per-instance basis. Each line on this graph represents one instance running on this physical host, and one line shows a summary for the host itself.
Again, this is a streaming real-time dashboard where I can view things live to understand, navigate, and troubleshoot, and I can go back in time to see what's been happening over the last days or weeks. Because I don't always want to be looking at a dashboard 24/7, I can set alarms on any of the metrics we analyze. There's a complete list of metrics ranging from CPU, I/O wait time, and disk response time to disk failure prediction, iptables rules, memory, and network I/O. All of these metrics are available to alarm on, and you can set either a static threshold, if you know you want to watch for CPU above a certain percentage, or a dynamic threshold, where we learn the profile of resource utilization over time and then detect and tell you when utilization falls outside that normal band of operation. In addition to real-time dashboards and alarms, you can also do capacity planning to understand how your infrastructure capacity is changing over time. We plot the available capacity and the used capacity over a period of time, so you can see trends. If you have spiky workloads that spin things up and spin things down, you can observe peak usage over time on a 10-minute, one-hour, or one-day basis. And you can see the available capacity broken down by flavor, so you have actual tangible numbers for exactly how much capacity your users can use. It's not sufficient to know you have a certain number of CPUs available across 50 hosts; you need to know how many VMs of each flavor you could spin up: how many larges, extra-larges, or mediums. Those numbers all change depending on how virtual machines are scheduled and placed and how OpenStack segments the physical infrastructure. In addition to that, we have reports so you can see what resource consumption looks like in your infrastructure over time.
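The talk doesn't describe how AppFormix's dynamic thresholds are actually computed, but the idea of learning a "normal band" and flagging values outside it can be sketched minimally; the class name, window size, and band width below are all illustrative assumptions:

```python
from collections import deque
from statistics import mean, stdev

class DynamicThreshold:
    """Illustrative sketch: learn a rolling baseline for a metric and
    flag samples outside the normal band (mean +/- k standard deviations).
    Not AppFormix's actual algorithm, which the talk does not detail."""

    def __init__(self, window=60, k=3.0):
        self.samples = deque(maxlen=window)  # recent history of the metric
        self.k = k                           # width of the normal band

    def observe(self, value):
        """Record a sample; return True if it falls outside the learned band."""
        anomalous = False
        if len(self.samples) >= 10:          # need some history before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            anomalous = abs(value - mu) > self.k * max(sigma, 1e-9)
        self.samples.append(value)
        return anomalous
```

Feeding it steady CPU readings around 50% would establish the baseline, after which a sudden 95% reading would be flagged, whereas a static threshold would require the operator to pick the cutoff in advance.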
This is useful for knowing what kind of hardware you want to buy in the future to meet your users' demands. If your users are using lots of memory, you're going to want to buy hosts with a large amount of memory. If they're using a lot of disk or storage I/O, you're going to want to plan accordingly and build infrastructure that will meet those application demands. So the reports provide a long-term trend of resource consumption. You can generate a report for any given time period, and the summary of resource utilization across that period is presented both in graphical format and in tables that you can drill into and sort. Here we're seeing a histogram of virtual machine CPU utilization: five instances in this project were using less than 20% CPU over the reporting period, and two of them were between 20% and 40%. That's a quick indicator to an operator that maybe they should talk to the user about right-sizing these instances; perhaps they could use a smaller flavor, or perhaps they don't need so many instances to drive their application workload. The same information is available in a detailed table format where you can sort and find which instances are the top users and which are the smallest. You can look at instance CPU, which is the CPU utilization inside the instance itself, or look relative to the host to see how much of a particular host an instance is consuming. So these are some interactive tools an operator can use to understand real-time utilization, troubleshoot, set alarms, and do reporting. I haven't touched on billing, but another feature of the reports is that you can set a rate for compute, memory, network I/O, and disk, and based on that rate of consumption, we'll compute charges so you can bill users in the various departments that you configure.
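The chargeback computation described above, rate times consumption per resource, is simple to sketch; the metric names and rates below are invented for illustration and are not AppFormix's actual configuration schema:

```python
# Hypothetical per-unit rates; these names and prices are illustrative,
# not AppFormix's actual chargeback configuration.
RATES = {
    "vcpu_hours":      0.02,   # $ per vCPU-hour
    "memory_gb_hours": 0.01,   # $ per GB-hour of memory
    "network_gb":      0.05,   # $ per GB transferred
    "disk_gb":         0.001,  # $ per GB stored
}

def chargeback(usage, rates=RATES):
    """Total a bill by multiplying each metered quantity by its rate."""
    return sum(qty * rates[metric] for metric, qty in usage.items())
```

For example, 100 vCPU-hours plus 200 GB-hours of memory at these rates would bill out at $4.00 for the period.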
What I'd like to go into now is automation, and how AppFormix can make it easier for operators to really have a reliable infrastructure. In the settings, you can define the SLA that you want for a host and for an instance. I'm going to look right now at what we call the risk setting. This covers the case where we don't think the host is offline, but we think it may not meet the SLA that the user has configured. The user can take any number of the metrics I showed for the alarms and build up a set of rules that defines the risk profile. I'm going to change over to another cluster that I have set up for this orchestration demo. In this infrastructure, the host risk profile is based on CPU load, and I've made the rule simple so that I can demonstrate it in this demo: if CPU utilization is above 90% sustained for 30 seconds, we mark that host as at risk, and we no longer want our applications running on that host, because we think it won't meet the SLA that our users expect for their applications. What you can see right here is that I have three hosts, and right now two instances have been scheduled on each of these hosts. I also have an application running that's listening for notifications: when the status of a host changes, we push out a notification that you can respond to. We integrate with PagerDuty, we integrate with ServiceNow and Slack, and you can also write your own custom endpoints, which is what I've done here to create an infrastructure controller. Now I'm going to start up some CPU load on this host, compute2.
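The risk rule described above, CPU above 90% sustained for 30 seconds, is a threshold with a hold time. A minimal sketch of that evaluation logic, with class and parameter names of my own choosing rather than AppFormix's, might look like:

```python
import time

class SustainedThresholdRule:
    """Illustrative sketch of a risk rule: mark a host 'at risk' when a
    metric stays above a threshold for a sustained interval, e.g. CPU
    utilization above 90% for 30 seconds."""

    def __init__(self, threshold=90.0, hold_seconds=30.0):
        self.threshold = threshold
        self.hold_seconds = hold_seconds
        self._breach_started = None  # timestamp when the current breach began

    def update(self, value, now=None):
        """Feed one sample; return 'risk' once the breach is sustained."""
        now = time.monotonic() if now is None else now
        if value <= self.threshold:
            self._breach_started = None      # breach ended, reset the clock
            return "healthy"
        if self._breach_started is None:
            self._breach_started = now       # breach just began
        if now - self._breach_started >= self.hold_seconds:
            return "risk"
        return "healthy"
```

The hold time matters: a momentary CPU spike resets to healthy as soon as utilization drops, so only sustained saturation triggers the status change and the downstream automation.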
As that CPU load stays high for 30 seconds, the state of that host will become at risk, a notification will be pushed out to the listener, and that listener will actually initiate a migration inside of OpenStack: it will call the Nova API to live-migrate the instances off of that host, because it's no longer meeting the SLA I have configured. The CPU load should be increasing right now, and after 30 seconds we'll see the status change here to at risk. We can see the CPU has spiked up here in real time. Now the host is at risk; it's telling us CPU utilization is above 90%. If I move back here, we'll start to see these instances being migrated off of the host. We see the status of the instances as migrating; OpenStack will move each one to another available host in the infrastructure that Nova schedules it on. The migration will take a minute or two, but what I also want to show now is that once those instances have migrated off the host, we don't want to schedule any new instances on that host either, because again, it's not meeting the SLA. We don't want an instance to get scheduled there, only to then be notified that it should be migrated off; that takes time and is potentially even slightly disruptive. We can see both instances have now migrated off. So I'm going to move over to Horizon and spin up a couple more instances. Now, with AppFormix, we also have a Nova scheduler plugin that is aware of which hosts are meeting the SLA and which are not, and it tells the Nova scheduler not to put any instances on hosts that are not meeting the SLA. I'm just going to spin up six instances here. While these are spinning up: normally, the Nova scheduler tries to spread instances evenly across the infrastructure.
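The custom-endpoint controller in the demo isn't shown, so here is a hedged sketch of what a listener like it might do when a host-status notification arrives. The notification payload shape is an assumption; the `servers.list(search_opts=...)` and `servers.live_migrate(server, host, block_migration, disk_over_commit)` calls exist in python-novaclient, though exact signatures vary by API microversion:

```python
def evacuate_at_risk_host(notification, nova):
    """Illustrative sketch: react to a host-status notification by
    live-migrating every instance off a host that has gone 'at risk'.

    `notification` is assumed to look like {"host": ..., "status": ...};
    the real AppFormix payload format isn't shown in the talk.
    `nova` is a novaclient-style Client instance.
    """
    if notification.get("status") != "risk":
        return []                            # only act on at-risk transitions
    host = notification["host"]
    moved = []
    # Find instances on the unhealthy host and ask Nova to move them;
    # host=None lets the Nova scheduler pick each destination.
    for server in nova.servers.list(search_opts={"host": host, "all_tenants": 1}):
        nova.servers.live_migrate(server, host=None,
                                  block_migration=False, disk_over_commit=False)
        moved.append(server.id)
    return moved
```

This matches the demo's flow: the controller receives the pushed status change, then lets Nova's own scheduler choose where the evacuated instances land.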
So I created six new instances. Typically, what we would have seen is three instances running on two of the hosts and zero on the host we had evacuated: OpenStack would have put two instances on each of those hosts and tried to balance them. But because AppFormix has detected that the host compute2 is not meeting the SLA, our scheduler plugin is not allowing OpenStack to put any instances on that host until it becomes healthy once again. And I can quickly show you what happens if I stop the load. We saw that the additional instances I spawned were scheduled only on compute1 and compute3; we're seeing a little bit of load on compute1 because of all the instances that were being spawned there. And if we look at the host compute2, we'll see the CPU utilization start dipping down after I've stopped that load; let me just double-check. So all hosts are becoming healthy. If I were to spin up additional instances, we would see them populate on compute2 again, because it's now once again available for OpenStack to use; it's meeting the SLA that we've configured. And there are the four tiny instances that we created showing up. If you have any additional questions or would like to learn more about AppFormix, I welcome you to join us at our booth. We're located near the exit to the marketplace, and I look forward to talking with you some more. Thank you.
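In a real deployment, behavior like the scheduler plugin described above would live in a `nova.scheduler.filters.BaseHostFilter` subclass whose `host_passes` method rejects hosts. The standalone function below is an illustrative stand-in that captures just the selection logic, with a plain dict playing the role of AppFormix's host-status store:

```python
def filter_hosts(candidate_hosts, sla_status):
    """Illustrative sketch of the scheduler-filter behavior: keep only
    hosts whose SLA status is 'healthy', so the scheduler never places
    new instances on an at-risk host. Hosts with no known status are
    excluded conservatively."""
    return [h for h in candidate_hosts
            if sla_status.get(h, "unknown") == "healthy"]
```

With compute2 marked at risk, only compute1 and compute3 survive filtering, which is exactly the placement seen in the demo; once compute2 returns to healthy, it rejoins the candidate list.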