Thank you. My name is Siva Balan. I work at GE Digital as a performance engineer, and in five minutes I'd like to give you a brief overview of how we do performance engineering: the tools we use, the kinds of tests we run, and some of the interesting findings we got from doing performance engineering on Cloud Foundry, which is very different from what you do for on-prem, non-cloud applications.

First, the types of tests we run: capacity, scalability, endurance, stress, and chaos monkey. Some of you might know the chaos monkey idea from Netflix, where it started by bringing down Amazon EC2 instances at random times in production. In the capacity test we first make sure the service deployed to Cloud Foundry is well tuned for the load you'd expect on one instance, and then we run the scalability test by scaling out to 10, 50, or even 100 instances. The endurance test is primarily to detect resource leaks and failures: memory leaks, or CPU utilization that goes haywire on some instances. The stress test is to find out how your application or service recovers after you've exhausted all of its resources. And the chaos monkey test, of course, randomly pulls down instances; we stop and start instances at random times to make sure the service detects failures and recovers gracefully.

The tools we use are JMeter for load generation; APM tools such as New Relic, AppDynamics, or Dynatrace, which we just saw a few minutes earlier; Jolokia for JMX, which is slightly different from what people typically use to collect JMX metrics, and I'll tell you why; and the ELK stack to persist the data that comes back from JMeter.

How do we do it? This is a very brief representation of our performance test framework. As you can see, the code gets checked out from GitHub, built, and pushed out to Cloud Foundry. We scale the number of instances, and monitoring is hooked up through an APM tool; in this case we use New Relic, but you can use AppDynamics, Dynatrace, or whichever APM tool is part of your organization. We use Jolokia primarily because, for security reasons, SSH access to the containers is shut off in our organization, so we needed some kind of HTTP wrapper on top of JMX to get the JMX metrics, and Jolokia serves us really well there. The output of JMeter gets pushed into RabbitMQ, a Logstash subscriber persists that data into Elasticsearch, and we use Kibana to visualize it.

And this is how it looks. A typical dashboard for us has data coming in from JMeter and JMX metrics coming in through Jolokia, which gives us a good representation of how the service is behaving. In this case it's the UAA service that we use in our organization.

How different is troubleshooting in Cloud Foundry? We learned some lessons that I thought I could share. We didn't have access to the JMX RMI port, so we couldn't use JConsole, and we had to use Jolokia to get JMX monitoring up and running.
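To make the Jolokia approach concrete, here is a minimal sketch of the kind of HTTP read it enables, using Jolokia's read operation to fetch heap usage over plain HTTP instead of JMX over RMI. The host, port, and path are placeholders, not our actual service.

    // Reads the java.lang:type=Memory MBean through Jolokia's HTTP/JSON bridge.
    // Assumes a Jolokia agent exposed at /jolokia on the target app (placeholder URL).
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class JolokiaHeapCheck {
        public static void main(String[] args) throws Exception {
            String url = "http://my-service.example.com/jolokia/read/java.lang:type=Memory/HeapMemoryUsage";
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // Jolokia returns JSON, e.g. {"value":{"used":...,"max":...},"status":200}
            System.out.println(response.body());
        }
    }

Responses like this are the kind of JMX data that ends up on the Kibana dashboards described above.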
A JVM crash also crashes the container, and once that happens it doesn't let you get any data out of the container. So we had to rely on the application logs stored in Elasticsearch, which give us a little bit of data, but unfortunately not a whole lot, because once the JVM has crashed it's gone forever. Traditional tools like JVisualVM and JProfiler don't work for troubleshooting these issues, so we end up relying on APM tools like New Relic, AppDynamics, or Dynatrace to give us as much information as possible before the container crashes. Some of the exceptions thrown right before a crash get logged in these APM tools, and that can help you a little bit in understanding where the problem is.

Finally, I would like to end with an interesting finding we came up with. There was a Spring Boot app that was constantly crashing every two hours. We couldn't find any leaks in our Java code, and there were no full GCs observed before the crash. The problem was that it was using a lot more native memory than we had given it. We previously had to use the memory limit variable, but we don't anymore: Java Buildpack 4.x lets you size the different memory segments individually. I'm 15 seconds over, so I'll end really quickly. The other finding was that the monitoring agent itself was causing a memory leak for us, so we had to unbind it and test again.

The key takeaways: start early, go deep, use a good monitoring tool, don't fly blind, and enable the developers by giving them access to the reports. If you have more questions, come to our Predix booth in the Foundry, and you can reach me at my Twitter handle or email address. Thank you very much. Thank you.
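As a rough illustration of the native-memory point above, here is a small sketch of how one can look at the memory a container counts against the app beyond the Java heap, using the standard platform MBeans. It is illustrative only, not the diagnosis code we actually used.

    // Prints heap, non-heap (metaspace, code cache, etc.), and NIO buffer-pool usage.
    // Direct and mapped buffers live outside the heap but still count toward the
    // container's memory limit.
    import java.lang.management.BufferPoolMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public class NonHeapUsage {
        public static void main(String[] args) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            System.out.println("Heap used:     " + mem.getHeapMemoryUsage().getUsed());
            System.out.println("Non-heap used: " + mem.getNonHeapMemoryUsage().getUsed());
            for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
                System.out.println(pool.getName() + " buffers: " + pool.getMemoryUsed() + " bytes");
            }
        }
    }

Exposing numbers like these (for example through Jolokia, as sketched earlier) makes it easier to see when native memory, rather than the heap, is what pushes an instance over its limit.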