In a cloud, all of our applications run on shared infrastructure, which means they are all competing with each other for resources. There are no guarantees: applications fight each other for what they need and often cannot get the resources required to perform. Traditional functional silos make this worse. When we work in independent teams, network teams, storage teams, compute teams, we use different tools and different methodologies, and we are unable to identify these problems and solve them. This new cloud environment requires new sets of data to identify the problems. Here's what's fundamentally happening: our applications receive unpredictable performance because they are contending for shared resources and not getting them. What do we do in turn? We spin up more resources. We sit in a constant loop of applications underperforming and us spinning up more resources, and this method is extremely wasteful. We spin up more resources because we lack adequate visibility into our infrastructure; we don't know the real reason the application is not performing. Furthermore, we don't have any kind of control available to fix these performance problems when they happen. So we are stuck in this loop.

AppFormix provides a real-time software solution that lets you monitor your cloud infrastructure. We give you real-time resource monitoring and full-stack visibility: granular visibility into everything happening on your hosts, hypervisors, virtual machines, containers, and applications. We make the entire stack visible, and we give you a control capability that lets you dynamically change how much, or how little, resource any application gets to use.
And in doing that, you can ensure that your applications deliver better performance and meet the SLAs you need.

Our software is an agent that sits on the host. This single agent makes visible all of the resources being used on that host: how they are consumed by the virtual machines, how the applications running within those virtual machines consume them, and, if you're running containers, how the containers consume them as well. From the point of view of your workloads, we are an agentless solution: you do not need to change anything in your virtual machines, your containers, or your applications. A single agent installed on the host makes the full stack visible to you. If you run a software-defined data center and are expanding into the public cloud, we also give you an agent that can sit in the virtual machines you spin up in the cloud and make your entire cloud visible as well. The whole system is centrally managed and driven by APIs, and it gives you real-time visibility and control over all of your resources.

The system is currently deployed with several customers, and there are four key scenarios where we have serious impact. First, we provide real-time, application-level visibility into your infrastructure. Second, in this new environment we are always sharing services in our cloud infrastructure. For example, you have a shared storage service, and you're spinning up virtual machines and containers, and all of these VMs, containers, and apps access that common shared storage service. What ends up happening is that as you scale out the applications, they contend for resources on the storage service, and the storage service starts to underperform.
It's almost like a snowballing DDoS effect on your infrastructure from your own applications, and we can come in and prevent that situation. The third problem, one that happens very frequently in virtualized environments, is that as we increase the level of sharing, spinning up more VMs, more containers, more applications, all of our workloads fight with each other for resources, and bottlenecks appear in our infrastructure. We can automatically identify those bottlenecks, help you fix them, and eliminate the contention. The fourth scenario is about not being wasteful in our infrastructure use. Without adequate visibility, we all go on gut feel: I'll spin up so many resources; if that doesn't work, I'll spin up some more; if that doesn't work, some more again. We instead provide data-driven infrastructure planning: it's real-time, it's driven by real data and real analysis, and it gives you better overall infrastructure utilization.

Our solution is completely integrated with OpenStack. If you use OpenStack, it's a one-click deployment: you install the agents on the hosts, we automatically discover everything from the OpenStack controller, and we pull it all up on the dashboard for you. We are partners with Mirantis, so if you use a Mirantis OpenStack install, it's a very simple, easy install for you as well.

I'm going to demonstrate a few things about our software today, on both the monitoring side and the control side. What happens when you start spinning up more and more virtual machines or containers on a physical host is that they all start competing for resources on that shared infrastructure. You start out with maybe two containers; the next thing you know, you've got a couple of VMs or a couple more containers running on that machine, and each one is getting fewer and fewer resources.
Eventually your application can't get enough resources to meet the performance requirements you need for a good user experience. That kind of problem is an absolute lack of resources. A second kind of problem is that the workload of the other applications running on the same host is constantly changing, and as it changes, the amount of resources your application can get changes too, so you end up with unpredictable performance and an unpredictable user experience.

We can see that here in our monitoring tool. This is a view of a single host with six virtual machines running on it, and we're displaying the memory, CPU, and network I/O of each virtual machine. What you can observe is that as the workload of some virtual machines comes and goes, the other virtual machines get more or fewer resources. Look at the application in yellow at the bottom: it's very clear that as the blue workload comes online, the yellow application gets fewer and fewer resources, and when the blue workload goes away, it gets more. What's happening here is that the application owner is going to ask: why is my performance not doing what I expect? I've tested my application; I know it can handle the load I expect. But when I deploy it into production, the application performs as I expect sometimes, but not others. You need monitoring tools to identify where the problem exists and why you're not getting that performance. And in addition to showing the resources on the host, we can also show application metrics, so that application owners can find out whether the problem is not in the resources available on the machine but rather inside the application itself.
For instance, for HTTP we can show the time to first byte: the time from when a packet arrives at the physical host to when the application sends the first byte of the response back to the client, displayed in real time. We can also show the number of requests being received, filtered by endpoint, so you can see which endpoints are getting hit the most in your application and what response codes are coming back for those requests, all in real time. This gives you deep insight into what your application performance is doing.

I was describing contention on a single host just now, but there is also contention in shared services. More and more applications are being built around shared infrastructure, where you might have a shared storage server, a shared database, or a shared identity service. The contention is no longer happening only at a single host for CPU and memory; it's also happening at that shared service across the network. So you want insight into what's going on in your entire data center, not just on a single host, and that's another thing AppFormix provides through our monitoring tool. In this view, instead of looking at a single host, we're looking at multiple virtual machines, a logical view of the data center. Instances one, two, and three might be running on host A, while client one runs on host B. If client one suddenly creates a massive demand on the storage server, then the amount of storage I/O available to instances one, two, and three diminishes. If you were looking only at host A, where instances one, two, and three are running, you would see that they're getting fewer resources, but you wouldn't necessarily know why.
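The time-to-first-byte metric described above is easy to illustrate from the client side. The following is a minimal sketch of the metric itself, measured against a throwaway local server that simulates 50 ms of application work; it is an illustration of what TTFB means, not AppFormix's hypervisor-level implementation.

```python
import socket
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.05)  # simulate application work before the first byte
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

# Throwaway local server on an ephemeral port.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# TTFB: time from sending the request until the first response byte arrives.
with socket.create_connection(server.server_address) as sock:
    t0 = time.monotonic()
    sock.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    first_byte = sock.recv(1)
    ttfb = time.monotonic() - t0

server.shutdown()
print(f"time to first byte: {ttfb * 1000:.1f} ms")  # roughly 50 ms here
```

The measured value is dominated by the simulated 50 ms of in-application work, which is exactly the kind of application-internal delay the speaker says this metric exposes.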
You need the view of the entire data center to see that a sudden burst of demand from client one caused instances one, two, and three to get less storage I/O. That's what our tool enables: a view across virtual machines and across your whole infrastructure, with visibility into the applications, so that you can identify which bottlenecks are causing the performance problems.

And once you've identified those bottlenecks, what you need next is the ability to solve the problem. You need control, to assign and allocate resources to the applications that are highest priority to you. AppFormix has an API-driven controller that is centrally managed and lets you allocate resources to the highest-priority applications, and you can do it both in real time and at the time you provision the application. To give you a simple example of what our REST API looks like: if you wanted to change the network allocation of a virtual machine in real time, with a simple curl command you can make a REST API PUT call to set the network allocation to 200 megabits per second. I'm going to demonstrate that in just a moment, but I also want to point out that you can do this preemptively. If you know an application, say a backup job, shouldn't take up too many resources at runtime, you can configure that allocation at the time you set up the container. This shows an extension we've added to the Docker Compose YAML file that lets you set the allocation at the time the container is created.
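The curl call described above might look something like the following. The endpoint path and JSON field names here are illustrative assumptions, since the talk does not show the actual AppFormix API; the command is echoed as a dry run rather than sent.

```shell
# Hypothetical endpoint and field names -- placeholders, not the documented
# AppFormix API. Echoed as a dry run; drop the `echo` to actually send it.
VM_ID="client1"
LIMIT_MBPS=200
echo curl -X PUT "http://appformix-controller/api/v1/instances/${VM_ID}/allocation" \
     -H "Content-Type: application/json" \
     -d "{\"network_mbps\": ${LIMIT_MBPS}}"
```

The shape matches what the speaker describes: a single PUT against the centrally managed controller, applied in real time, with the limit carried in the request body.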
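The provisioning-time version, the Docker Compose extension mentioned above, might look schematically like this. The `x-appformix` key and field name are assumptions for illustration; the talk does not show the actual extension syntax.

```yaml
# Sketch of a Compose service with a network allocation set at creation
# time. The `x-appformix` extension key is an assumed name, not the
# documented AppFormix syntax.
version: "3.4"
services:
  backup-job:
    image: backup:latest
    x-appformix:
      network_allocation_mbps: 200
```

Setting the limit here, rather than at runtime, matches the backup-job example: the cap is in place from the moment the container is created.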
So if we go back to our example, where one virtual machine was overusing the storage and causing other services to get fewer resources: at runtime we applied that configuration to allocate 200 megabits to client one, and now you see in real time that client one is getting only 200 megabits of storage I/O, while the other virtual machines are able to get more. The key here is that we haven't changed the total aggregate throughput. The aggregate throughput of all the VMs is still the same; we've just reallocated which VMs get which resources, to prioritize the VMs and workloads that are most important to us.

So once again, we are AppFormix, and what we have is a real-time software solution that allows you to monitor all of your resources, whether they're running on-prem or in the cloud. We give you one central dashboard on which you can visualize the resource consumption of your VMs, your containers, and your hosts. We give you fine-grained control: in real time, you can change the allocation. How many resources is an application consuming? How much is a VM consuming? How much storage I/O do they get? All of it in real time, and in doing that we can ensure better performance for your applications and better SLAs. You're welcome to join our early access program on the web at appformix.com. Thank you.