Good morning everyone, we are the JMeter team, a team of six members: Sushmita from Kate College; Dhruv Joshi and myself, Manje Chaudhary, from NIT Rourkela; Naman and Shekhar from NIT Jamshedpur; and Surabhi from NIT Surat. So we are a six-member team working on enhancements to JMeter. First we will give a short introduction to JMeter, and then we will go directly to the enhancements. We had quite a struggle going through the source code, reading and understanding it, working out what the problems are, what plugins and solutions are already available, and then finding what new problems a tester or anyone wanting to use JMeter might face and what the possible solutions could be.

The first enhancement is dynamic bandwidth throttling. We realized that bandwidth differs across the various users trying to use an application, so we built a bandwidth throttling element. It is a dynamic element because, while we are testing, it changes the bandwidth based on the response errors: when errors rise the bandwidth is reduced, and when the errors decrease the bandwidth is raised again. So this is one of the enhancements done.

The second enhancement is IP spoofing. We realized that when one uses JMeter, all the requests go out from one specific IP. But in a real-life scenario, people are in different geographic locations with different IP addresses. Therefore we built an IP spoofing element: different IPs are created and placed under a CSV element, from which the different IPs are selected, so that the testing comes from different IPs.

Then we have the auto CSV generation element.
We understand that when we are testing a web application that has a form, that form needs data which has to be entered redundantly, again and again, and for that the tester has to build a CSV element. To reduce the tester's work, we collect this information from the database and put it directly into a CSV file, which is then used with JMeter; this makes the tester's work very easy and removes the redundant work.

The fourth element is the auto TPC-C testing element. This is a very important and very large part of our work. It is based on the TPC-C benchmark that already exists. We realized that TPC-C is essentially server testing, which requires a large number of users to be generated, and for that JMeter is a perfect tool; what we needed was a testing script and a sampler, and all of these were built. We would say it is at a preliminary stage: we cannot assert that one gets a perfect TPC-C test from it, but built upon further, it could be made into a proper TPC-C testing tool.

Then we have other, smaller enhancements. We added filtering to the results table: we realized a tester would like to know which results fall beyond certain parameters, but there was no way to put such parameters into the test script, so we added a GUI to implement that. Then we have a constant increasing timer; this was from the very initial stage, when we had just learned the code. It is similar to the other timers, but it adds a constant ramp-up time to the timings of the threads. And then we have enhanced assertion results: we saw that the assertion results table never showed the passing results.
Only the failing results were shown, so we added the passing results and some more details about each result; when we used it, we realized the omission was a shortcoming, and we enhanced it that way. Also, for SMTP testing there was no default element, so we made a small default configuration element, as all the other protocols have. These are the basic enhancements; each of us will speak about them in detail. We also have tests ready with us, but as time won't permit, anyone interested can have them afterwards.

Did you create the test data for the TPC-C and everything? Yes, sir. You did that. What was the size? Sir, for each warehouse about 120 MB of data is generated in the specified database. 120 MB? Yes, sir. For each warehouse? For each warehouse. I am asking about the warehouse because TPC-C is not about warehousing; it's transaction processing. Yes, sir. For transaction processing it basically emulates a supplier, and the supplier is concerned with a number of warehouses. Oh, a warehouse in the schema, not a data warehouse. No, sir, not a data warehouse; it's the standard TPC-C. Yes, sir. Exactly. I wish you had gone up to at least 1 GB. Sir, we have gone up to 3 GB. 3 GB? Good work.

Thank you, sir, for your time, and we also thank you for providing us an environment in which we could work. It was not just a learning experience; it was a beautiful experience to get to know people from around the world, work with them, and learn about various cultures. Thank you very much, sir. We would also like to take the opportunity to thank Nagesh sir, who has been very cooperative and has always given us very good ideas. Thank you.

Let's go back and start from the beginning: introducing JMeter.
Let's understand why we would actually require JMeter. It's a testing tool. Consider a simple web application: after development, we need to test it with a large number of users. We cannot have hundreds or thousands of people sitting and testing the application by hand. So we use JMeter, a testing tool that virtually generates users and, step by step, sends requests on their behalf. These requests are processed by the application under test on the server, and this lets us perform realistic testing of our application. That is why we use JMeter.

Now, what does testing look like? Testing basically revolves around three things: users, data and time. Users: the many virtual users we generate for testing. Data: the data we feed in and the response data we receive. Time: how we schedule the requests being sent.

What is performance testing? JMeter is fundamentally a performance testing tool, and we test performance in three ways. Load testing: virtually generating a large number of users so that load is placed on the application. Stress testing: stress is created when we reduce hardware or software resources, say some RAM is removed or some timings are tightened. Scalability testing: here we scale the system, for example enhancing the server's hardware and software components, to see up to what level the application can perform when given all the other necessary conditions. And what do we calculate? We calculate the response time, latency, throughput and so on.
These are the metrics. Let's go to a simple test plan. A test plan shows all the components of JMeter. First one sees a thread group, the outermost block. The thread group comprises all the other elements: samplers, configuration elements, post-processors, pre-processors, logic controllers, listeners and so on.

What are these? Samplers are the requests: if I want to test a web application, I send HTTP requests, and these are defined under samplers, the HTTP sampler for an HTTP test. Configuration elements configure your data: if you have multiple requests, a defaults element can apply common settings to all of them. Logic controllers evaluate a condition, and depending on whether it evaluates to true, other steps are taken. Similarly, post-processors and pre-processors process the data received from, or the data to be sent to, the server. And listeners, the most important component, generate graphs and statistical reports; with these we can analyze what an application is doing and how we can enhance it. The performance metrics are provided by the listeners.

Next, let's look at some plugins. We explored these plugins to understand what is already available as solutions to the problems testers face. JMeter being an open-source tool, there are a lot of plugins for it. Some are thread group plugins: if you want to time the threads, you need thread groups that let you give the respective timings, and two plugins are available for that. Then there are timeline graphs: we realized that JMeter's built-in graphs are not very effective, and for better graphs and analysis there are many plugins.
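As an aside on how listener metrics of this kind are computed, here is a minimal Python sketch, illustrative only and not JMeter's actual Java implementation, of throughput, average, and 90th-percentile latency over a set of samples (the function name `aggregate_report` is our own):

```python
def aggregate_report(latencies_ms, duration_s):
    """Compute the kind of metrics a JMeter listener reports (sketch only)."""
    ordered = sorted(latencies_ms)
    n = len(ordered)
    avg = sum(ordered) / n
    # 90th percentile: the time within which 90% of the samples completed
    p90 = ordered[min(n - 1, int(0.9 * n))]
    throughput = n / duration_s  # requests per second over the test window
    return {"average_ms": avg, "p90_ms": p90, "throughput_rps": round(throughput, 2)}
```

For instance, nine fast samples and one slow one over a ten-second window give an average dominated by the fast requests but a 90th percentile exposing the slow tail.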
And now let's come to the enhancements. As already mentioned, there are eight enhancements, and we will describe four of them one by one. Mine is auto CSV generation, where a CSV file is generated automatically. Let me explain the scenario that existed before we developed this element. Whenever we test a request that has a form in it, we need to fill in data, and if we have a big number of users, we need to put that many combinations of data into a CSV file; these combinations are then fed into JMeter. What is hectic on the tester's part is manually building such a CSV file, and that is the work we wanted to remove; for that, we developed this element.

What does this element do? Going to the GUI: we enter the details for setting up communication with the database server, then we specify the table and the specific database. Once the connection is built, the data is picked up from that database and put into a CSV file, and this CSV file is automatically placed in JMeter's bin folder so that JMeter's other scripts can use it.

Next we see a CSV that has already been generated; this is an example of the output. Then we see the test plan. Here we use an already present element, the CSV Data Set Config. It takes the name of the generated CSV file and the associated parameters from which the data has to be picked. When set with a particular sampler, for example one that has username and password as parameters, the sampler takes its parameters from that CSV. Now if we run the test, the parameters come from the CSV and the test is executed based on them.
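The element's flow described above, connect to a database, pull a table, and write the rows out as a CSV with a header of parameter names, can be sketched as follows. This is an illustrative stand-in using `sqlite3`; the real element connects over JDBC and writes into JMeter's bin folder, and the function name `generate_csv` is our own invention:

```python
import csv
import sqlite3

def generate_csv(db_path, table, columns, out_path):
    """Pull the given columns of a table and write them to a CSV file that
    a CSV Data Set Config element could consume (sketch, sqlite3 only)."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT %s FROM %s" % (", ".join(columns), table)).fetchall()
    finally:
        conn.close()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(columns)   # header row doubles as the parameter names
        writer.writerows(rows)
    return len(rows)
```

A sampler parameterized with `${username}`/`${password}` would then read successive rows of the generated file, one per virtual user.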
So in the results table we can see that the POST data are the data picked up from the specific CSV file, which indicates that it has been incorporated successfully. Thank you. Now my friend Shekhar will continue.

Good morning to one and all. I'm Shekhar Kaurav, and I'll be explaining bandwidth throttling and dynamic bandwidth throttling in JMeter. Bandwidth throttling is the intentional slowing down of an internet service; it is often used by internet service providers to control the amount of bandwidth used by their users. In a real-world scenario, people access web services over different types of internet connections and therefore with different bandwidths: if you are on a mobile internet connection, you have much lower bandwidth than someone on a broadband connection. So if a tester wants to create a test plan that simulates slower connections, bandwidth throttling is needed. We have incorporated a bandwidth throttling GUI where the tester can select "use bandwidth throttling" and then specify the bandwidth he wants to use.

To show bandwidth throttling working in JMeter, we created a sample test plan. We used two thread groups, each with a thread count of five and a loop count of five, and one HTTP sampler, so each thread group produces 25 samples. In the two HTTP Request Defaults elements, we specified a bandwidth of 1 Mbps for one thread group and 1 Kbps for the other. When we ran the test, here are the results tables for thread group 1 and thread group 2, and here is the comparison. For the first thread group, in the topmost corner, it says 5/10.
It means that when the first thread group completed, the five threads of the second thread group were still running. In the latency column you can see that the latency for the first thread group is around 800 to 3,000 milliseconds, while for the same requests at a bandwidth of 1 Kbps the latency is around 16,000 milliseconds. So there is quite a difference when, in the same test plan, two thread groups use two different bandwidths. Here are the aggregate reports, where you can also see the difference in throughput. The first thread group has a throughput of 3.0 requests per second, 42.1 in KB/sec, while the second thread group, with a bandwidth of 1 Kbps, has a throughput of 9.1 per minute, about 0.15 requests per second, and just 2.1 in KB/sec. So we saw that in the same test plan we can now specify different bandwidths, and different thread groups will use different bandwidths; we are able to simulate variable bandwidth in JMeter.

Now I will explain dynamic bandwidth throttling, which is a variation of the bandwidth at runtime, without manual intervention. In situations where the server is under too much load, receiving more requests than it can serve, it starts dropping requests. Dynamic bandwidth throttling can help here: if the error rate crosses some threshold value, the bandwidth is throttled down so that the number of requests reaching the server is also reduced in the meantime, and when the server recovers from the errors, the bandwidth is set back to the maximum level, so the full quota of bandwidth is available again. We have extended the bandwidth throttling GUI and added the dynamic bandwidth throttling component to it.
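The core mechanism behind a throttling element of this kind, pacing traffic so the sustained rate stays under a configured cap, can be sketched as a token bucket. This is a hedged illustration in Python; the actual element works inside JMeter's samplers, and the class and method names here are invented for the sketch:

```python
import time

class BandwidthThrottle:
    """Pace sends so the sustained rate stays under a cap in bytes/second
    (token-bucket sketch of the throttling idea, not JMeter's code)."""
    def __init__(self, bytes_per_second):
        self.rate = float(bytes_per_second)
        self.allowance = self.rate          # bucket starts full
        self.last = time.monotonic()

    def delay_for(self, nbytes):
        """Return the seconds to sleep before sending nbytes."""
        now = time.monotonic()
        # Refill the bucket in proportion to elapsed time, capped at one second's quota.
        self.allowance = min(self.rate,
                             self.allowance + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.allowance:
            self.allowance -= nbytes
            return 0.0
        deficit = nbytes - self.allowance
        self.allowance = 0.0
        return deficit / self.rate
```

With a 1 Kbps-style cap, a burst larger than the bucket forces a proportional delay, which is exactly what inflates the latencies seen in the slow thread group's results table.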
Here too the tester selects the bandwidth throttling part and then has to specify the minimum applicable bandwidth, the value below which the bandwidth will not go at any stage of the test plan, and the maximum permissible error, here 7%, the threshold around which the bandwidth will vary. To show dynamic bandwidth throttling working, we created a test plan with one thread group of 1,000 threads, a timeout period of 22 seconds, and 11 samplers, so 11,000 samples in all. We set the permissible error to 7%, the maximum applicable bandwidth to 1 Mbps, and the minimum applicable bandwidth to 1 Kbps. When we ran the test, we got this aggregate report; what we are concerned with are the error percentage, the throughput, and the KB/sec. At the start of the test there was 0% error, because all the requests were being served, and we got a throughput of around 82.9 requests per second, 56.9 in KB/sec. Later, as the number of requests increased, the error percentage grew, around 9% to 20% across the different samplers we used, and the throughput decreased along with it. But since we were using dynamic bandwidth throttling, the server later recovered from the errors and the error rate fell below 7%, so in the later stages of the test the throughput increased again, both in requests per second and in KB/sec. We will also see this JMeter log report, where the x-axis shows the percentage error and the y-axis shows the bandwidth changes. Initially, when the percentage error was around 0%, we were using the maximum bandwidth of 1 Mbps that we had specified, but as the percentage error crossed the 7% mark, the bandwidth gradually decreased to the minimum of 1 Kbps and continued from there.
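The adjustment rule just described, above the 7% error threshold the bandwidth is stepped down toward the minimum, and once the error rate drops back under the threshold it is restored to the maximum, can be sketched as a small controller. The halving step is our illustrative assumption; the transcript does not specify how fast the real element reduces the bandwidth:

```python
class DynamicThrottle:
    """Adjust the applied bandwidth from the observed error rate (sketch).
    Above the threshold: gradually cut toward the minimum (halving here is
    an assumption). Below it: restore the full quota, as described above."""
    def __init__(self, max_bps, min_bps, error_threshold_pct=7.0):
        self.max_bps = max_bps
        self.min_bps = min_bps
        self.threshold = error_threshold_pct
        self.current = max_bps

    def update(self, error_pct):
        if error_pct > self.threshold:
            self.current = max(self.min_bps, self.current / 2)
        else:
            self.current = self.max_bps
        return self.current
```

Feeding it a run of high error rates walks the bandwidth down to the 1 Kbps-style floor; a single reading under 7% snaps it back to 1 Mbps, matching the recovery phase seen in the log report.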
The next part is from the same test plan: when the server recovered from the errors and the percentage error went below 7%, to 7, 6 and so on, the bandwidth was raised back to 1 Mbps. So within one test plan we found the bandwidth changing dynamically according to the error rate, and hence we have implemented dynamic bandwidth throttling in JMeter. Now my teammate will continue with IP spoofing.

Thank you, Shekhar, for introducing bandwidth throttling to us. Now I would like to speak about IP spoofing. What is IP address spoofing? It is concealing the identity of the user, or impersonating another computer system, and this is what we use in JMeter testing. When we run a JMeter test plan, even for thousands of users, the same IP address is often recorded on the server side. So when we have an application whose behaviour depends on the user's IP address, the JMeter test plan fails to match reality, because thousands of users share the same IP address. To eliminate this, we use IP spoofing.

Let's take an example. Without IP spoofing, if there is a load-balancing server and only one computer generating thousands of users, only one server behind the load balancer will be used, which does not happen in the real world. With IP spoofing, different IPs are generated, and requests from these different IPs go through different servers, so the test is conducted in a distributed manner. To introduce IP spoofing in JMeter, we first need to provide the IP address of the system, then the subnet of the network, and then the number of IPs to be generated by JMeter. A script runs in the back end which allocates these as virtual IP addresses on the machine, and thus the IPs are allotted. This is the GUI for IP spoofing: we have to provide the IP address.
We have to provide the subnet mask, and we have to provide the number of IPs to be generated. Next, these are the virtual IPs that are created; we can see that all these IPs are allocated to the same machine, so when the requests go out, all of these IPs are seen on the server side. Here we can see that without IP spoofing, all the requests arriving at the server come from the same local IP address; next, with IP spoofing active, the requests recorded on the server side come from the different IPs that were generated by JMeter and allocated to the machine. This is how we run a proper JMeter test with IP spoofing. Thank you. I would like Surabhi to continue with the TPC-C.

Good evening everyone, I am Surabhi Mohan, and I would like to introduce TPC-C. JMeter is a load testing tool, and by automating TPC-C we have made JMeter capable of carrying out a preliminary TPC-C test. So what is TPC-C? TPC stands for the Transaction Processing Performance Council, which carries out transaction and database benchmarking and delivers trusted results to the industry. A number of benchmarks are published by the TPC, among them TPC-App, TPC-H and TPC-C. Why did we choose TPC-C and not another? Many of the TPC benchmarks are deprecated, while TPC-C is currently in use, and many industries go for TPC-C testing to showcase their OLTP performance, that is, how fast their server is. Also, an actual TPC-C test is rather complicated and a very time-consuming and costly affair. A preliminary test would be highly useful: an industry could test its server with a preliminary TPC-C run, see where the server is lacking, and improve it. This is a typical benchmark model: on one side we have the benchmarking software installed, and on the other we have the database.
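The first step of the IP spoofing element described above, deriving a pool of spoofable addresses from the machine's IP and subnet mask, can be sketched with Python's `ipaddress` module. This covers only the address-selection part; binding the addresses as virtual IPs on the network interface, which the back-end script does, is not shown, and the function name `virtual_ips` is our own:

```python
import ipaddress

def virtual_ips(base_ip, subnet_mask, count):
    """Enumerate `count` host addresses in the machine's subnet, skipping
    the machine's own address (address-selection sketch only)."""
    network = ipaddress.ip_network("%s/%s" % (base_ip, subnet_mask),
                                   strict=False)
    ips = []
    for host in network.hosts():       # usable hosts, excluding network/broadcast
        if str(host) == base_ip:
            continue                   # keep the real address out of the pool
        ips.append(str(host))
        if len(ips) == count:
            break
    return ips
```

The resulting list is what would be written to the CSV element mentioned earlier, so each virtual user can pick a distinct source IP.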
The basic functionality of the benchmarking software is, first, to create concurrent users transacting against the database and, second, to check the data consistency of the database under test. This is the model for TPC-C. TPC-C emulates a supplier who wants to sell, manage and distribute its products. The company has some number of warehouses, each warehouse serves 10 districts, and each district serves approximately 3,000 customers. These numbers are specified in the TPC-C document itself, and we follow all the standards. There are nine tables implementing the TPC-C schema; they store all the information about the orders placed, the customers' history, and the items the supplier's warehouses hold. And there are five transactions that describe the processing of any order in the TPC-C schema.

Now we come to the TPC-C workflow. The five transactions are New-Order, Payment, Order-Status, Delivery and Stock-Level. These transactions do not occur with equal probability; they have different weightages, specified in the TPC-C document, and these weightages are taken care of in our test script as well. First, the emulated user selects one of the transactions from the menu. Then an input screen is displayed in which the input parameters for the transaction have to be specified; this is termed the keying time. Next, the transaction is fired, and its response time is noted. Then the output for the transaction is displayed on the output screen. The interval from the moment the output is displayed until the user selects the next transaction is termed the think time, and the loop repeats. This is the typical TPC-C workflow that we have emulated in JMeter. Now, why did we choose JMeter to automate TPC-C?
First, as we have seen, JMeter is already capable of spawning a large number of users to emulate real users interacting with the system under test, so that part is already done. Second, for the transactions that are fired, the response time and the throughput are already being calculated in JMeter. So a part of the entire TPC-C test process is already there in JMeter, and we thought we could add the other parts and make a complete preliminary TPC-C test for JMeter. Now Naman will continue with how we actually carried out the TPC-C testing.

First of all, we created the TPC-C sampler, which creates the database and starts the transactions. It requires the user to fill in the following parameters: the database URL, the JDBC driver class, the username and password of the corresponding database, and the name with which he wants the database to be created. Currently we have included this test for MySQL and Oracle, so there are two options. When the user clicks the "create database" button, the nine tables and the stored procedures are created, and depending on the number of warehouses specified, the tables are scaled up; for each warehouse, about 120 MB of data is generated in the database. When he clicks the "start test" button, the transactions are fired, which carry out the actual TPC-C testing. We also use the Include Controller, an element of JMeter that allows us to include external test scripts. We have made two test scripts, one for MySQL and one for Oracle; depending on the user's choice of database, he can include either of them for his testing.
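The weighted menu selection from the workflow above can be sketched as follows. The 45/43/4/4/4 split reflects the minimum mix percentages in the TPC-C specification, with New-Order taking the remainder; the function itself is an illustrative stand-in for the selection logic in the test script, not the script's actual code:

```python
import random

# Mix percentages per the TPC-C specification's minimums, with New-Order
# taking the remainder: the commonly used 45/43/4/4/4 split.
MIX = [("new_order", 45), ("payment", 43),
       ("order_status", 4), ("delivery", 4), ("stock_level", 4)]

def pick_transaction(rng=random):
    """Select the next transaction with the weighted probabilities above,
    as the emulated user does when choosing from the menu."""
    roll = rng.uniform(0, 100)
    upper = 0
    for name, weight in MIX:
        upper += weight
        if roll < upper:
            return name
    return MIX[-1][0]   # guard for roll landing exactly on 100
```

Over a long run, roughly 45% of the fired transactions are New-Order, which matters because tpmC counts only those; the rest exist to create realistic traffic.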
This was our test conducted with one warehouse: you can see that all nine tables have been generated and filled with the corresponding data; for the warehouse, 10 districts were generated, and for each district about 3,000 customers. These are the five stored procedures that have been created. And this was our test with 33 warehouses. This is the JDBC Connection Configuration element; it holds the connection configuration for all the JDBC requests fired from JMeter, and as you can see, rather than hard-coding the connection values, it takes them from the sampler we created. These are the controllers used: the Loop Controller lets us run for a given count, or even forever to create an infinite loop, and the Random Order Controller makes the sampling random rather than sequential. Apart from these, we also had to take care of the think time and the keying time, which have been handled as per the TPC-C standards. This is the basic transaction call: as we said, the five stored procedures represent the transactions, and those transactions need to be fired with actual parameters. The Payment transaction, for example, requires 33 parameters, and different classes have been created in JMeter to generate the random values to be passed. This is the function helper dialog, which shows all the functions created by us. Here is a procedure request being generated: you can see that the functions we passed have been resolved to proper random values, and these are then fired at the database. This is the response we get from the database for the Payment transaction; all the values are correct. And this is the aggregate report. As per the TPC-C standards, it only takes into consideration the number of New-Order transactions per minute; the other transactions are
just there to create traffic, so the number of New-Order transactions is the actual metric: tpmC, transactions per minute as defined by TPC-C. This is the transactions-per-second graph; we ran this test for an interval of 10 minutes, so it shows which transactions were fired at each particular moment. This is the aggregate graph, showing the average and the 90th-percentile time; by 90th-percentile time I mean the time within which 90% of the samples of that transaction completed. And this is the response-over-time graph, showing the response times for the different transactions carried out.

Apart from this, there were a lot of challenges we faced in this JMeter work. First of all, JMeter has a very complex structure, with more than 5,000 classes, and reading the source code and implementing even a small change was very difficult. Then there were challenges specific to each element. For automatic CSV generation, getting the values directly from the database was a problem. For dynamic bandwidth throttling, changing JMeter at runtime is normally not possible, because JMeter has to be restarted; the entire source code was examined and changes were made to incorporate this. For IP spoofing, given just an IP, we had to check which IPs get generated and which can be used for testing, and that was a challenge. Finally, for TPC-C testing, TPC-C specifies a lot of standards, and incorporating everything, apart from creating the tests for separate databases, was a big challenge. Now I would like to call Sushmita to present the conclusion and future work.

Good afternoon everyone, I am Sushmita from Kyve. Future work on this project could include incorporating other benchmarks, such as TPC-E and TPC-H, into JMeter, and automating test scripting, since it is not possible for every user to write test scripts; JMeter could do it for the tester using techniques like
web crawling. The instability of JMeter could also be worked on with some solutions, along with bringing efficient large downloads into JMeter and producing better automatic test results for complex scenarios. Coming to the conclusion: the basic aim of this project was the enhancement of JMeter with some additional features, and the task has been completed by the team successfully. We have made JMeter capable of performing more practical tests with the introduction of IP spoofing and bandwidth throttling; the user-friendliness of JMeter has been improved with the introduction of automatic CSV generation and results filtration; and JMeter has been extended with the introduction of the TPC-C benchmark testing tool. Apart from working on JMeter, we also worked on some educational applications: as we have been sponsored by the MHRD, we thought of doing some work for it, such as these teaching-aid tools. The first of them is a simple application for small kids, with randomly generated shapes of different colours.

So, JMeter, or rather your work: you have now told it to everybody, so everybody knows, and yet none of these people know anything about JMeter itself. True, sir. That is very true, including me; I do not know JMeter. Why has this work not been published in the JMeter community? And secondly, of these things you talked about, at least four of them, are they in the form of a plugin? Right now they are just merged with the source code; they are not separate plugins. Any time you do something, and you have done a lot of work, I can see that, there is nothing against it, but the point has always been that India has to contribute to the open-source community, and there you have failed. We can address that, sir, because we have been studying the process of developing it in the form of a plugin. So I understand. Now remember, I will tell you: we can develop it
as a plugin. There was an earlier project for doing some tools, I do not even remember what, with the same promise: yes, we will put it up as a plugin, and nothing has happened so far. And that was being run by one of the M.Tech. people, who is graduating now. So what is your plan? You are people who are new to this, and the mentor is not going to be able to do anything. The reason I am saying this is: I have no idea, no idea at all, whether your work is of a quality accepted by the JMeter gurus. They are the people who can judge you, nobody else. So be proud and put it up; it does not matter if you get hit. Why are you hiding? Sir, we do not intend to hide, but the problem was that the time was very short: we took more than a month just to explore, and by the time we finished developing these enhancements, we could not make a plugin.