My name is Peter Nikolov, I'm Chief Technology Officer at Opsani, and I will tell you about continuous optimization as a service. First, a few words about me. I have been building cloud infrastructure since the early days, before it was even called cloud, and I have also used the cloud for many different applications and for many large systems at scale. I co-founded Opsani, where our mission is to improve cloud operations using machine learning.

I want to share a war story with you, from a time when we were helping a customer complete their digital transformation. The customer was a large online event and conferencing provider. They moved from their on-prem data center to the cloud, and they did fantastically: they finished in just a few months, which is amazing. It was a really happy and successful experience for them. They were a poster child for DevOps. Their developers were carrying pagers and operating their own services; they really had the power to do what they needed and ship features quickly. The move also allowed them to expand their market. In addition to the larger companies they had as customers, moving to the cloud allowed them to serve smaller customers, where instead of a whole server they could use a fraction of a server. That is something the cloud enabled for them.

But it did bring its own problems, and before we go into the problem, I will tell you the answer: 75 quintillion. We will return to this, but let's first see what the problem was and what their experience was when they were onboarding, especially the smaller accounts. Their developers would build the code for their application, deploy it, and run it through different environments following best practices. Then, once they put it in production, they would notice performance lags. They needed to share the same infrastructure between multiple customers, giving each a different amount of resources, but enough for each to operate.
So of course they did the thing that everybody does. They started monitoring real user performance. They started building synthetic load generators, and they started using complicated APM tools that gave them deep visibility into the performance of the application, and with a very significant amount of work they tried to understand why the performance was what it was and to tune it. Of course, that takes a long time. So in the meantime, the solution is to over-provision: give the application more resources and hope you get enough performance. By throwing far more resources at it than it actually needs, you are able to get that higher performance. Then, with more in-depth analysis of the data, work that took six to eight weeks at a time, they were able to feed more details back into the testing process, as well as into development. On the surface that sounds good, but when you realize that one of the benefits of cloud is moving to DevOps and high velocity, they were shipping releases every week. If it takes six weeks to optimize a release, you understand that they were falling significantly behind. The result was that they ended up over-provisioning a lot, and then they put a lot of work into manually optimizing the services, always looking backward. It is like driving on the freeway by looking in the rear-view mirror.

So let's look at an example. How should this work? What exactly is the problem? We'll use the Online Boutique as an example. It is a reference application that Google built to demonstrate best practices for microservices and, generally, to help people understand how to use Kubernetes. It is an e-commerce application. It consists of 11 different services, which you can see working together here on the diagram. And it's a good stand-in for probably most applications that you would want to deploy.
If you want to put this application into production, you have to tune it to make sure you can reach your performance goals, serve the number of customers you have, and handle the load you need to handle. So it is an 11-tier application, and it comes pre-configured with certain parameters and default values. If you just deploy it like that, you are probably wasting a lot of resources, and you don't know whether you are meeting your performance objectives.

So when you look at the configuration, what is that configuration? There are two settings per service: the CPU and the memory. And again, that is a very simple case; most applications have a lot more. They may have Java settings, network configuration, kernel parameters, and so on. There may be a lot of other configuration, but let's just focus on these two. Even if we decide that there are only eight possible settings each for the CPU and memory values across these 22 different parameters, you end up with about 75 quintillion different configurations that you may use. And only one of them is optimal.

So what does this mean? 75 quintillion, okay, we have computers, they can do that quickly, right? Well, if you compute it, you will see that even if you can try 100 different configurations every second, which is hard, you would have to run a lot in parallel, but say you can do 100 configurations a second, it would take 23 billion years to go through the full space. And you really don't have that time, because in only five billion years the solar system will be gone. And your business has even less time than that, because your development team releases a new version every week. So you really don't have that long. So what we did is connect the Online Boutique to our continuous optimization service and see what the actual tuning should be for running it in production.
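As a sanity check, the arithmetic behind those numbers can be reproduced in a few lines (the 100-configurations-per-second rate is the talk's assumption, not a measured figure):

```python
# Back-of-envelope check of the search-space numbers from the talk.
# Assumptions: 11 services x 2 parameters (CPU, memory) = 22 parameters,
# each with 8 possible settings, tried at 100 configurations per second.

services = 11
params_per_service = 2          # CPU and memory
settings_per_param = 8
configs_per_second = 100

total_configs = settings_per_param ** (services * params_per_service)
print(f"{total_configs:.2e} configurations")   # ~7.4e19, roughly 75 quintillion

seconds_per_year = 365.25 * 24 * 3600
years = total_configs / configs_per_second / seconds_per_year
print(f"{years / 1e9:.0f} billion years")      # ~23 billion years
```

The exact value of 8^22 is about 7.4 x 10^19, which the talk rounds to 75 quintillion; the 23-billion-year figure follows directly.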
And the results were that we were able to reduce costs by 79%, and we were able to increase throughput by more than 2x. The resulting efficiency, the number of transactions per dollar, increased by over eight times. Now, the most interesting part: this result is amazing by itself, but we were also able to achieve it with only 20 minutes of setup. One of our solution engineers configured the application and connected it to our optimizer, and it took about two days of tuning to get this result.

And here is the dashboard, showing you what continuous optimization looks like in our product. This is the application that we have optimized, Google's Online Boutique. And if you look at the configuration, these are all 22 parameters, CPU and memory for each of the microservices, and you can see their original values as they came from the manifests on GitHub. As our system went through a number of steps, it found an optimal set of values, a configuration that is exactly right with respect to meeting the performance objectives without over-provisioning, without adding too many resources. And then you can see the results: improved performance, significantly reduced cost, and improved efficiency in the performance-to-cost ratio.

Coming back to our presentation: once you have continuous optimization as a service, how does this change the process? How does it improve the picture that I showed you earlier? Well, by adding continuous optimization in production, you get real-time, autonomous changes to the configuration, tuning it to match the current load and the current application behavior. And that is the core. This is the first thing, because it works immediately, on your production system, at this moment. It adapts to the current services.
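For intuition, the efficiency multiple follows from the other two numbers. Treating the reported figures as exact (the actual inputs were likely not precisely 2x and 79%):

```python
# Transactions-per-dollar multiple implied by the reported results.
# Assumption: throughput went up 2x while cost dropped by 79%.

throughput_multiple = 2.0
cost_multiple = 1.0 - 0.79      # 21% of the original cost

efficiency_multiple = throughput_multiple / cost_multiple
print(f"{efficiency_multiple:.1f}x transactions per dollar")   # ~9.5x
```

A result around 9.5x is consistent with the "over eight times" improvement quoted in the talk.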
If you like, you can also use this in your pre-production environment or in performance-test environments, to shift the performance tuning left and see how it works even in development. So this allows you to start working on performance even earlier. The result of having powerful tuning is higher performance, lower costs, and, as we have actually observed, better stability as well.

So what is continuous optimization as a service, and where can it be applied? It can work on a single application: if you have just one SaaS application and that's your main app, you can attach it to that. Or, if you're a larger company with a service delivery platform on which all of your applications and development groups run, you can attach it to the platform as a whole and work with all the services. It scales to thousands of services, and the process is autonomous: it onboards all the applications that it finds on the platform.

And then the tuning happens continuously. It happens every time there is a new code release; it happens when the load profile changes, when you have different customers or a different mix of requests coming in; and it happens on infrastructure upgrades, say you use new machine types, a different cloud provider, or different third-party libraries. These are all things that can significantly affect the performance of your application, how it behaves, and the resources it uses. On each of these changes, optimization can be performed, tuning the application to the specific circumstances. And the value it delivers continuously is higher, more stable performance, improved availability of the application, and lower cost.
So the conclusion, and you have probably seen this in your own projects, is that manual workload tuning just doesn't work anymore, not at these speeds, not at the sizes of the projects that we have. We have weekly or even daily releases, so the release frequency is very high. Applications run on multiple different platforms and have immense variations in their configuration settings: many tunable parameters, many things that affect performance and cost. Combine all of these, and if you try to do this with manual tuning, or skip it entirely, you can only work reactively to improve performance, and it is really hard to get good results and know that you are not spending too much money.

So I hope I was able to show you that manual workload tuning in today's environment is just impractical. Our systems and applications change too often, and they run on a variety of different platforms with very complicated configurations. The result is massive waste of resources on performance and, as a consequence, lower availability. With continuous optimization as a service, you can outperform your competitors, because your applications are always tuned. You can outsmart them, because your developers spend more time building features instead of tuning and doing trial-and-error adjustments. And you can outclass them, because you use your resources more efficiently. If you want to see how to use machine learning to tune and run continuous optimization for your applications, please come see us and our team at our booth. Thank you.