
A Performance Testing and Tuning Example

In this section, we will discuss performance testing and tuning techniques in the context of a real-world application. Our example will be focused on an application called The Global Auction Management System (GAMS). GAMS is a WebLogic application representing an online auction where

  • Guests can browse for items by category, keyword, and whether or not it is a featured item. They can also view the existing bids for an item.

  • Guests can register so that they can place items for sale and bid on items as well.

  • Sellers can list items for sale. They are charged a listing fee and a selling fee.

  • Buyers can bid on items until an auction ends. At auction's end, the buyer with the highest bid is awarded the item.

  • The GAMS accounting department can run reports on listing and selling fees.

In the following sections, we will demonstrate the performance testing of the GAMS application with a tool called Grinder.

The Grinder Load-Testing Tool

Grinder is an open source load-testing tool written in Java that is easy to use and provides extremely useful feedback for benchmarking WebLogic applications. Grinder has many useful features; refer to http://sourceforge.net/projects/grinder for the latest release and the most up-to-date information. The book J2EE Performance Testing with BEA WebLogic Server by Peter Zadrozny, Philip Aston, and Ted Osborne also contains useful information about Grinder.

The Grinder Architecture

Grinder consists of worker processes that execute test scripts simulating user behavior. The worker processes can run alone, or they can report their performance back to the Grinder console. As shown in Figure 25.14, the console is in charge of stopping and starting one or more worker processes.

Figure 25.14. The Grinder console controls Grinder worker processes using IP multicast. The workers report their performance to the console.


The Grinder console dialog is shown in Figure 25.15. It is used to display performance statistics collected by one or more Grinder clients. The collected performance information can then be saved to disk and compared with other runs.

Figure 25.15. The Grinder console provides graphical and numeric displays of performance statistics.


Grinder clients execute Grinder properties files. These files contain parameters for communicating with the console, the number of users to simulate, the client requests to issue, and the number of times to cycle through those requests. Client requests can take many forms, and Grinder plug-ins are used to execute them. The most common plug-in is the HTTP plug-in, and the most common request is a connection to a URL in the application under test. The HTTP plug-in has many features, including cookie support, HTTP authentication, and the ability to post data files to a URL. It can also check URL responses and determine whether they were successful; to enable this, you define a string that must appear within a successful response.
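As a quick preview, the following sketch shows the HTTP plug-in parameters this chapter relies on; the URL and data file name here are placeholders, but every key shown reappears in the real GAMS scripts later in the chapter:

grinder.plugin=net.grinder.plugin.http.HttpPlugin
grinder.plugin.parameter.useCookies=true

# Post the contents of a data file to the URL and treat the response
# as successful only if it contains the string Welcome
grinder.test0.parameter.url=http://localhost:7001/someurl
grinder.test0.parameter.post=somedata.dat
grinder.test0.parameter.ok=Welcome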

The sample Grinder properties file shown in Listing 25.3 instructs Grinder to performance-test connecting with www.yahoo.com and www.monster.com. It will create three Grinder worker processes, each one running in its own JVM. Each JVM will then spawn 10 threads. This will, in effect, simulate 30 simultaneous clients. In this case, it will cycle through the two URLs a total of 10 times and then it will stop. If we want the Grinder worker process to run until it is explicitly stopped by the console, we can set the number of cycles to 0.

Listing 25.3 A Sample Grinder Properties File
grinder.processes=3
grinder.threads=10
grinder.cycles=10
grinder.jvm.arguments=-Xint

grinder.consoleAddress=localhost
grinder.consolePort=6372
grinder.grinderAddress=228.1.1.1
grinder.grinderPort=1234

grinder.plugin=net.grinder.plugin.http.HttpPlugin
grinder.test0.parameter.url=http://www.yahoo.com
grinder.test1.parameter.url=http://www.monster.com

Grinder worker processes create log files containing valuable information such as connection times and error details. Each worker process creates a data log file that records the response time of every request made by each of its threads, along with the number of connection errors that occurred. A worker process also creates an error log file containing error details if one or more errors occurred.

Another possible entry in a Grinder properties file sets options for the worker JVM. When we use Grinder to load-test our applications, we want the performance of Grinder itself to remain constant so that we can detect performance changes within our WebLogic application. As shown in Listing 25.3, we pass the -Xint argument to the JVM. This argument forces the worker JVM to run in interpreted mode, turning off runtime code optimization and stabilizing the worker's performance.

GAMS Benchmarking Environment

As shown in Figure 25.16, the environment used to benchmark GAMS consists of three machines connected by a 100 Mbps Ethernet switch. We have separate machines for WebLogic Server and the Oracle 8.1.7 database. Additionally, we use a separate machine to run the Grinder console and the Grinder worker processes.

Figure 25.16. The Grinder console and its client(s) should be on separate machines from WebLogic Server and the RDBMS so that performance statistics are not skewed.


Specifying Performance Goals for GAMS

To begin testing, we first must define requirements for the performance of the system. The performance goals for GAMS are as follows:

  • Guests should not have to wait more than 3 seconds for an item search.

  • Guests that register should not have to wait more than 5 seconds when registering.

  • Sellers should not have to wait more than 5 seconds to list an item without a picture. If they are listing an item with a picture less than or equal to 50KB in size, they should not have to wait more than 10 seconds.

  • Buyers should get immediate response after bidding and should see their bid posted asynchronously within 15 seconds.

  • The GAMS accounting department should be able to run a report within 15 seconds.

The preceding response times should hold with up to 100 simultaneous users. Because our benchmarking environment is a 100Mbps LAN, we are not taking into account the transmission time that an Internet client would experience, nor are we measuring the time a browser needs to render the HTML responses.

Understanding User Behavior

After speaking with our business people, we are told that at any given time, 75% of our user population will be guests, 5% will be registering, 10% will be sellers, and 10% will be buyers. Of course, a single person will probably play several of these roles at different times, but these percentages are a ballpark estimate for simulation purposes. We are also told that a member of the GAMS accounting department could run a report at any time.

Creating User Scripts

For each user type, we generate a test script: a set of Web pages that each type of user will typically visit.

  • Guest User— Visit home page, do a category search, display an item, do a keyword search, display an item

  • Registering User— Register

  • Seller— Log in, list an item, list another item, log out

  • Buyer— Log in, bid on item, log out

  • GAMS accounting— Log in, run report, log out

Because we are using Grinder, each of these user scripts is actually a Grinder properties file. Grinder properties files can be created by hand, but doing so is a tiresome and error-prone task. The TCPSniffer tool that comes with Grinder can create properties files automatically. TCPSniffer tracks a user's HTTP requests through a Web browser and records them to a Grinder properties file, along with the amount of time between requests. We used TCPSniffer to create the user scripts for the five types of users described earlier.
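As a side note on the recording itself: in the Grinder 2.x releases, TCPSniffer runs from the command line as a local proxy. The invocation below is a sketch only; the option name and output handling are assumptions, so check the documentation for your release:

java net.grinder.TCPSniffer -httpPluginFilter > guest.properties

With the sniffer running, you point the browser's proxy settings at the sniffer's local port and click through the pages that the user type would visit; the recorded tests, including sleep times, end up in the redirected properties file. Listing 25.4 is an excerpt from the resulting guest user script.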

Listing 25.4 An Excerpt from guest.properties
grinder.test100.parameter.url=http://localhost:7001/header.jsp
grinder.test100.sleepTime=2684
grinder.test100.description=header.jsp

grinder.test101.parameter.url=http://localhost:7001/menu.jsp
grinder.test101.sleepTime=0
grinder.test101.description=menu.jsp

grinder.test102.parameter.url=http://localhost:7001/FindItemsServlet?featured=true
grinder.test102.sleepTime=40
grinder.test102.description=FindItemsServlet Featured

grinder.test103.parameter.url=http://localhost:7001/FindItemsServlet?catid=5
grinder.test103.sleepTime=2564
grinder.test103.description=FindItemsServlet Category

grinder.test104.parameter.url=http://localhost:7001/DisplayItemServlet?itemid=65
grinder.test104.sleepTime=3385
grinder.test104.description=DisplayItemServlet

grinder.test105.parameter.url=http://localhost:7001/FindItemsServlet?keyword=e
grinder.test105.sleepTime=4006
grinder.test105.description=FindItemsServlet Keyword

grinder.test106.parameter.url=http://localhost:7001/DisplayItemServlet?itemid=67
grinder.test106.sleepTime=3284
grinder.test106.description=DisplayItemServlet

The first three URLs represent the GAMS home page, which uses frames. They are followed by a search for all items in category number 5 (Computers), a request to display item 65, a search for all items with the letter e in the title, and a request to display item 67. Notice the sleep times (measured in milliseconds). TCPSniffer added these lines; each one represents the amount of time the Grinder worker process will wait before requesting the next URL. This simulates the time a user needs to read the current Web page before moving to the next one.

Initializing Our Environment

To better simulate real-life usage of GAMS, we should have some actual data already in the database. This involves having some registered users, items, and bids.

We start with three users. Every item and every bid will belong to one of these three users.

We decided to pre-fill the ITEM table with 100 items having item IDs of 1 to 100, spread evenly over six categories, with half the items being featured items. The title of each item consists of its category name followed by a number from 1 to 100. The description is the title plus the word description.

We made the end date of these items 7 days away so that their auctions would not end while we were testing. We also created five random bids per item to more closely simulate actual data. Finally, we disabled all outgoing email messages so that we would not overwhelm the email server.
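A minimal sketch of such a seeding utility is shown below. The ITEM column names, their types, the category names, and the JDBC URL are all assumptions for illustration; the five random bids per item would be seeded with a similar loop against the bid table.

package com.gams.util;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class SeedItems {
    // Category names are assumed; the chapter only names "Computers" (category 5)
    private static final String[] CATEGORIES =
        { "Books", "Music", "Movies", "Toys", "Computers", "Sports" };

    public static void main(String[] args) throws Exception {
        Class.forName("oracle.jdbc.driver.OracleDriver");
        Connection con = DriverManager.getConnection(
            "jdbc:oracle:thin:@dbhost:1521:GAMS", "gams", "gams"); // assumed URL
        PreparedStatement ps = con.prepareStatement(
            "INSERT INTO ITEM (ITEMID, CATEGORYID, TITLE, DESCRIPTION, ISFEATURED, ENDDATE) " +
            "VALUES (?, ?, ?, ?, ?, ?)");
        // End the auctions 7 days from now so that they do not close mid-test
        Timestamp end = new Timestamp(
            System.currentTimeMillis() + 7L * 24 * 60 * 60 * 1000);
        for (int i = 1; i <= 100; i++) {
            int catid = ((i - 1) % 6) + 1;             // spread evenly over 6 categories
            String title = CATEGORIES[catid - 1] + i;  // category name plus item number
            ps.setInt(1, i);
            ps.setInt(2, catid);
            ps.setString(3, title);
            ps.setString(4, title + " description");   // title plus the word "description"
            ps.setInt(5, i % 2);                       // half the items are featured
            ps.setTimestamp(6, end);
            ps.executeUpdate();
        }
        ps.close();
        con.close();
    }
}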

We also set the GAMS JDBC connection pool to an initial capacity of five connections and a maximum of five connections. The JVM heap is initialized at 128MB with a maximum of 128MB.

Selecting a JVM

One of the first steps in benchmarking is selecting the JVM that gives you the best performance. Because we are running WebLogic Server 8.1 on Windows NT, we can use Sun's JDK 1.4.1 or BEA's JRockit, both of which are included with WebLogic. You should load-test your WebLogic application against each certified JVM and select the one that offers optimal performance. For the rest of this example, we will assume the use of Sun's JDK.

Testing Strategy

We have five types of users represented by five Grinder properties files. We ultimately want to test performance with all types of simulated users running at the same time, but doing so right away will make it more difficult to solve performance issues. Instead, we will run each script by itself, solve performance issues, and then run the scripts simultaneously.

When running each script by itself, we would like to uncover performance issues quickly. Therefore, for this part of our performance tuning, when we run the individual Grinder properties files, we will set all the users' think times to zero. This represents a worst-case situation in which users inundate our application with requests. Running the scripts in this manner causes performance issues to show up sooner so that we can resolve them more quickly.
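Because TCPSniffer writes an explicit sleepTime key for every recorded test, zeroing the think times is just an edit to each properties file. For example, in guest.properties:

grinder.test100.sleepTime=0
grinder.test101.sleepTime=0
grinder.test102.sleepTime=0

and so on for the remaining tests in the script.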

Initial Benchmarking

Our initial benchmarking of GAMS will load-test with 10 simulated guests to get an idea of performance with a small number of users as shown in Listing 25.5.

Listing 25.5 Simulating 10 Users
grinder.processes=1
grinder.threads=10
grinder.cycles=0

Using Random Client Parameters

Looking back at Listing 25.4, you can see that every time this user script runs, it queries the same item category and the same item IDs. We need a way to randomize this information. The Grinder HTTP plug-in supports this through the use of string beans. A string bean is a user-defined Java class that works with the Grinder HTTP plug-in to create strings in any way you see fit.

In our guest.properties file, we change the URLs of the category search, the two item displays, and the keyword search, as shown in Listing 25.6. Where the hardcoded values were, we now have method names wrapped in < and >.

Listing 25.6 Hardcoded Values Have Been Replaced
grinder.test103.parameter.url=http://localhost:7001/FindItemsServlet?catid=<getCatID>
grinder.test104.parameter.url=http://localhost:7001/DisplayItemServlet?itemid=<getItemID>
grinder.test105.parameter.url=http://localhost:7001/FindItemsServlet?keyword=<getKeyword>
grinder.test106.parameter.url=http://localhost:7001/DisplayItemServlet?itemid=<getItemID>

As shown in Listing 25.7, we created a simple Java class called GuestStringBean with the public methods getCatID, getItemID, and getKeyword. The Grinder client calls these methods when creating the request URLs.

Listing 25.7 The GuestStringBean Class
package com.gams.performance;

public class GuestStringBean {
    private static final int NUMCATEGORIES = 6;
    private static final int NUMITEMS = 100;
    private String[] vowels = { "a", "e", "i", "o", "u" };

    // return a random category ID between 1 and NUMCATEGORIES
    public String getCatID() {
        int catid = (int) (Math.random() * NUMCATEGORIES) + 1;
        return Integer.toString(catid);
    }

    // return a random item ID between 1 and NUMITEMS
    public String getItemID() {
        int itemid = (int) (Math.random() * NUMITEMS) + 1;
        return Integer.toString(itemid);
    }

    // return a random vowel
    public String getKeyword() {
        int letternum = (int) (Math.random() * vowels.length);
        return vowels[letternum];
    }
}

The GuestStringBean class needs to be compiled and added to the classpath. It can then be added to guest.properties like this:


grinder.plugin.parameter.stringBean=com.gams.performance.GuestStringBean

Installing and Configuring Grinder and the Test Application

To set up Grinder on your machine, follow these steps:

  1. Unzip the distribution .zip file to a location on your hard drive. We'll refer to this as GRINDER_DIR.

  2. Add GRINDER_DIR\lib\grinder.jar to your classpath.

  3. Compile the GuestStringBean and RegisteringUserStringBean classes.

  4. Place the path to the compiled classes in your classpath.

Starting the Console

Start the Grinder console by opening a command prompt and entering java net.grinder.Console. Then open another command prompt and enter java net.grinder.Grinder guest.properties. By default, the Grinder client executes the grinder.properties file unless we specify a different properties file, as we do here. The Grinder client will output its command-line parameters and then state that it is waiting for the console to start it, as shown in Figure 25.17.

Figure 25.17. The Grinder client is waiting to be started from the console.


Go back to the Grinder console and click the Graphs tab. You will see seven test areas, one for each of the seven URL requests in the guest.properties test script, as shown in Figure 25.18. If you look at the Results and Samples tabs, you will see seven rows of information. Click the Processes tab and you will see that one process with 10 threads is in a ready state.

Figure 25.18. The Grinder console is ready to collect performance data from the Grinder clients.


Before starting our Grinder client, make sure the console's Sample Interval setting is set to 1000ms so that it samples performance statistics once a second. Also, the line grinder.reportToConsole.interval=1000 has been added to guest.properties so that the worker process reports to the console only once a second, reducing its CPU usage.

Starting Our Grinder Client Threads

We now go to the console's Action menu and choose Start Processes, or click the leftmost button in the toolbar. Each of the client threads will now execute the guest.properties test script, and you can monitor their performance. Click the Results tab to see performance information for each URL. We noticed that our initial mean times were very high and tended to go down over time.

We waited until we had collected 30 samples of performance information and then stopped collecting by going to the Action menu and choosing Stop Collection. Next, we stopped our worker processes by choosing Stop Processes from the Action menu. The Results tab reveals the performance statistics in Table 25.5.

Table 25.5. 10 Users, 0 Sleep Times, 30 Samples

Description        Rounded Mean Time
header.jsp         234
menu.jsp           151
Featured Search    318
Category Search    150
Display Item       225
Keyword Search     165
Display Item       148
Total              199 (Avg)

The Grinder worker process has created two log files, out_nameofmachine-0.log and data_nameofmachine-0.log. The latter file contains the actual response times per URL, per thread, and per cycle, as shown in Listing 25.8.

Listing 25.8 Excerpt from data_XXX-0.log
Thread, Cycle, Test, Transaction time, Errors
1, 0, 100, 2994, 0
9, 0, 100, 2994, 0
7, 0, 100, 2994, 0

Because this file is comma delimited, we can import it into spreadsheet software such as Microsoft Excel. After importing the data, we can sort it by cycle number, thread number, and test number and then create a pivot table, as shown in Figure 25.19. From it we can see that the average response time for all tests in cycle 0 is 1465ms. This number drops to 168ms in cycle 1 and 142ms in cycle 2. The first few cycles take longer than later ones because WebLogic Server needs a little time to reach optimal performance. One reason is that WebLogic Server must convert JSP files to servlet Java files and compile them the first time they are accessed.

Figure 25.19. Excel's pivot table enables us to view our average response times per cycle per test.

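If you would rather not fire up a spreadsheet, the same per-cycle averages can be computed directly from the data log. This is a minimal sketch, assuming the comma-delimited format shown in Listing 25.8 (a header line followed by Thread, Cycle, Test, Transaction time, Errors):

package com.gams.performance;

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Iterator;
import java.util.TreeMap;

public class CycleAverages {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        TreeMap totals = new TreeMap();  // cycle number -> long[] {sum, count}
        in.readLine();                   // skip the header line
        String line;
        while ((line = in.readLine()) != null) {
            String[] fields = line.split(",");
            Integer cycle = Integer.valueOf(fields[1].trim());
            long time = Long.parseLong(fields[3].trim());
            long[] entry = (long[]) totals.get(cycle);
            if (entry == null) {
                entry = new long[2];
                totals.put(cycle, entry);
            }
            entry[0] += time;            // accumulate transaction time
            entry[1]++;                  // count the observation
        }
        in.close();
        for (Iterator i = totals.keySet().iterator(); i.hasNext();) {
            Integer cycle = (Integer) i.next();
            long[] entry = (long[]) totals.get(cycle);
            System.out.println("Cycle " + cycle + ": mean " +
                (entry[0] / entry[1]) + "ms");
        }
    }
}

Running java com.gams.performance.CycleAverages data_mymachine-0.log prints one mean per cycle, which makes the warmup cycles easy to spot.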

We would like the Grinder console to disregard response times from the first few cycles. On the console GUI, we can set the Ignore X Samples parameter. In our last benchmark, we took 30 samples and ran through around 30 cycles of our URLs, so there is almost a one-to-one correspondence between samples and cycles; ignoring the first three samples should therefore ignore roughly the first three cycles of performance data. Because we are taking 30 samples, we can afford to discard a few more than that. To be very sure that we are not counting our application's startup costs, we set the ignore parameter in the Grinder console to 5, restart WebLogic Server, and rerun our load-test.

As you can see in Table 25.6, with the initial performance numbers factored out, our mean times have gone down. They are also more intuitive: the requests that involve the database now clearly take considerably longer than the ones that do not (header.jsp and menu.jsp).

Table 25.6. 10 Users, 0 Sleep Times, 30 Samples, Ignore First 5 Samples

Description        Rounded Mean Time
header.jsp         76
menu.jsp           65
Featured Search    302
Category Search    144
Display Item       208
Keyword Search     174
Display Item       126
Total              157 (Avg)

Increasing the Number of Users

Now let's double the number of users and see how the application responds. The results, shown in Table 25.7, display the mean times with 10 and 20 users side by side. The AART (average of the average response times) has more than doubled, from 157 to 324, and most of the individual mean times have roughly doubled as well.

Table 25.7. 10 and 20 Users, 0 Sleep Times, 30 Samples, Ignore First 5 Samples

Description        Mean Time 10 Users    Mean Time 20 Users
header.jsp         76                    133
menu.jsp           65                    244
Featured Search    302                   604
Category Search    144                   301
Display Item       208                   392
Keyword Search     174                   311
Display Item       126                   268
Total              157 (Avg)             324 (Avg)

When we increased the number of users to 30, we were refused a connection to header.jsp three times. To solve this problem, we changed the AcceptBacklog parameter from its default value of 50 to 200 and reran the load-test at 30 users, producing the performance data in Table 25.8. At this point we also started tracking total transactions per second (TPS), which represents the throughput of our application and WebLogic Server. At 30 users, we have a total TPS of 55.1.

Table 25.8. 30 Users, 0 Sleep Times, 30 Samples, Ignore First 5 Samples

Description        Mean Time 30 Users    Mean Time 30 Users (AcceptBacklog="200")
header.jsp         296 (3 errors)        243
menu.jsp           336                   344
Featured Search    912                   793
Category Search    480                   452
Display Item       558                   547
Keyword Search     465                   436
Display Item       355                   398
Total              490 (Avg)             461 (Avg)
Total TPS          52.8                  55.1
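For reference, AcceptBacklog can be changed from the Administration Console or by editing the Server element in config.xml; a minimal sketch, with the server name as a placeholder:

<Server Name="myserver" ListenPort="7001" AcceptBacklog="200"/>

The backlog controls how many incoming TCP connections the operating system will queue for WebLogic before refusing new ones, which is why raising it eliminated our refused connections.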

We then ran tests at 40, 80, 120, and 140 users, as shown in Table 25.9. At 120 users, Featured Search was almost at our maximum acceptable response time (MART) of 3 seconds, and as we predicted, at 140 users it went over 3 seconds. As we progressed from 40 to 140 users, our total throughput fell from 53.0 to 48.2 TPS: we were serving more clients, but getting less actual work done per second.

Table 25.9. 40, 80, 120, 140 Users; 0 Sleep Times; 30 Samples; Ignore First 5 Samples

Description        Mean Time 40 Users    Mean Time 80 Users    Mean Time 120 Users    Mean Time 140 Users
header.jsp         369                   478                   930                    1740
menu.jsp           233                   605                   761                    1730
Featured Search    981                   1690                  2880                   3410
Category Search    688                   1340                  2520                   2770
Display Item       815                   1740                  2750                   3110
Keyword Search     597                   1190                  2150                   2590
Display Item       492                   1090                  1980                   2580
Total (Avg)        606                   1170                  1940                   2590
Total TPS          53.0                  53.2                  49.8                   48.2

Tuning the Database

We noticed that the Category Search and the Keyword Search were taking long amounts of time. Both searches rely on finder methods within the Item Entity EJB. Oracle 8.1.7 includes Oracle DBA Studio, which can be used to monitor database connections and to study Oracle's execution plans for SQL statements. The SQL associated with the Featured Search, the Category Search, and the Keyword Search did not use an index and required full table scans of the ITEM table. Even with only 100 items, we were seeing a drop in performance, and because more items would be added to the table once the application went live, we knew these problems would only get worse. It was time to create indexes, as shown in Listing 25.9.

Listing 25.9 Three Indexes Added to the ITEM Table
CREATE INDEX IDX_ITEM_CATEGORY ON ITEM(CATEGORYID);
CREATE INDEX IDX_ITEM_TITLE ON ITEM(TITLE);
CREATE INDEX IDX_ITEM_ISFEATURED ON ITEM(ISFEATURED);

We then reran the load-test at 140 users. Unfortunately, we did not see any performance gain in the three searches. Then we noticed that all three finder methods create SQL prepared statements. JDBC connection pools, by default, do not cache these statements, but caching can be explicitly turned on. We created a cache that could hold up to 20 prepared statements. When a prepared statement is already in the cache, the SQL does not have to be re-parsed and performance is improved. As you can see in the third column of Table 25.10, adding the cache reduced the time of our searches considerably.

Table 25.10. 140 Users, 0 Sleep Times, 30 Samples, Ignore First 5 Samples, Indexes Created

Description        Mean Time 140 Users (Added Indexes)    Mean Time 140 Users (Added PreparedStatementCacheSize="20")
header.jsp         1950                                   810
menu.jsp           1340                                   664
Featured Search    3500                                   3160
Category Search    2870                                   2800
Display Item       3040                                   2580
Keyword Search     2570                                   2220
Display Item       2790                                   2060
Total              2620 (Avg)                             1990 (Avg)
Total TPS          48.0                                   55.6
CPU Usage          > 90%                                  > 90%
RDBMS CPU Usage    ~ 18%                                  ~ 13%
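Like AcceptBacklog, the statement cache is set in the Administration Console or in config.xml, this time on the JDBCConnectionPool element. A sketch, with the pool name, driver, and URL as placeholders (the capacities match the five-connection pool described earlier):

<JDBCConnectionPool Name="gamsPool"
    DriverName="oracle.jdbc.driver.OracleDriver"
    URL="jdbc:oracle:thin:@dbhost:1521:GAMS"
    InitialCapacity="5" MaxCapacity="5"
    PreparedStatementCacheSize="20"/>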

At this point, we tried many options to improve performance. We tried both decreasing and increasing the number of execute threads, but performance went down in both cases. The same was true for the maximum number of connections in the JDBC connection pool. Using the WebLogic Server Administration Console, we also confirmed that there was plenty of memory available in the Java heap.

Looking at Architecture

Having exhausted the possibilities for tuning WebLogic Server, we investigated how to fine-tune our application. We were already taking advantage of the performance gains of the façade pattern and coarse-grained value objects.

Using Straight JDBC Code

We were curious to see whether using straight JDBC code for our Category Search could outperform the Item Entity EJB finder method. We created a method called getItemsInCategory2() in GAMSFacadeBean that mimicked the functionality of getItemsInCategory() but used direct JDBC calls instead of the ItemBean EJB. We then changed FindItemsServlet to call getItemsInCategory2() and reran our load-test at 140 users, observing the mean time of the Category Search. Unfortunately, this had virtually no effect on performance because the JDBC code was not able to take advantage of the entity cache.
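For the curious, here is a sketch of what getItemsInCategory2() might have looked like inside GAMSFacadeBean. The DataSource JNDI name, the ITEM column list, and the ItemValue constructor are assumptions for illustration:

// Imports assumed: javax.naming.*, javax.sql.DataSource, java.sql.*, java.util.*
private Collection getItemsInCategory2(int catid) throws Exception {
    Context ctx = new InitialContext();
    DataSource ds = (DataSource) ctx.lookup("gamsDataSource"); // assumed JNDI name
    Connection con = ds.getConnection();   // borrowed from the WebLogic pool
    try {
        PreparedStatement ps = con.prepareStatement(
            "SELECT ITEMID, TITLE, CURRENTBID FROM ITEM WHERE CATEGORYID = ?");
        ps.setInt(1, catid);
        ResultSet rs = ps.executeQuery();
        ArrayList items = new ArrayList();
        while (rs.next()) {
            // ItemValue is the coarse-grained value object; constructor assumed
            items.add(new ItemValue(rs.getInt(1), rs.getString(2), rs.getDouble(3)));
        }
        rs.close();
        ps.close();
        return items;
    } finally {
        con.close();                       // returns the connection to the pool
    }
}

As noted above, this path bypasses the entity cache, which is why it showed no improvement over the finder method.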

Using Optimistic Locking

We noticed that the only attribute that can change within an Item EJB is its current bid. We could change ProcessBidBean to make a direct JDBC update and then make ItemBean a read-only EJB, and we would then explore the longest read-timeout value that was still satisfactory for our users. In this way, we could avoid unnecessary ejbLoad() calls from our ItemBean. We decided to pursue this option only if extra tuning became necessary; because we were already below the required response times, we did not pursue the experiment.
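Had we pursued it, the read-only strategy and its staleness window are declared per bean in weblogic-ejb-jar.xml. A sketch, assuming a 15-second read-timeout to match our bid-posting requirement:

<weblogic-enterprise-bean>
  <ejb-name>ItemBean</ejb-name>
  <entity-descriptor>
    <entity-cache>
      <read-timeout-seconds>15</read-timeout-seconds>
      <concurrency-strategy>ReadOnly</concurrency-strategy>
    </entity-cache>
  </entity-descriptor>
</weblogic-enterprise-bean>

With this in place, WebLogic calls ejbLoad() for a cached Item at most once every 15 seconds rather than once per transaction.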

Registering Users

Now it was time to simulate guests that are trying to register so that they can buy and sell items on GAMS. We created a registeringuser.properties file as shown in Listing 25.10.

Listing 25.10 Excerpt from the registeringuser.properties File
grinder.plugin=net.grinder.plugin.http.HttpPlugin
grinder.plugin.parameter.stringBean=com.gams.performance.RegisteringUserStringBean

grinder.test200.parameter.url=http://localhost:7001/registertemp.jsp
grinder.test200.description=RegisterUser
grinder.test200.parameter.ok=Welcome
grinder.test200.parameter.post=registeruser.dat
grinder.test200.parameter.header.Content-Type=application/x-www-form-urlencoded

When we call registertemp.jsp to create a new customer, we post the contents of the registeruser.dat file shown in Listing 25.11. This file was created by TCPSniffer and then modified by us. Notice the <getGamsAlias> and <getEmailAddress> parameters. The createNewCustomer() method in GAMSFacadeBean checks for duplicate aliases and email addresses; therefore, we need to create unique values for these fields.

Listing 25.11 registeruser.dat File
firstname=joe&lastname=shmoe&gamsalias=<getGamsAlias>&password=aaaaaaaa&password2=aaaaaaaa&emailaddress=<getEmailAddress>&address=123+main+st&city=mycity&state=ny&zipcode=12345&cctype=Amex&ccnum=3623723623&ccexpdate=12%2F02

Also notice the line grinder.test200.parameter.ok=Welcome. So far, we have been testing for success only by whether we were able to connect with a URL. When called successfully, registertemp.jsp responds with the message Welcome new user. The line grinder.test200.parameter.ok=Welcome instructs Grinder to look for the word Welcome in the response. If Grinder doesn't see Welcome, it assumes that the test ran with an error.

For this to work correctly, we must be able to generate unique GAMS aliases and email addresses. We could have getGamsAlias and getEmailAddress generate random characters, and most of the time we would probably get lucky, but we would like a more reliable method. Grinder allows for advanced string beans that implement the StringBean interface, whose initialize, doTest, endCycle, and beginCycle methods are called by Grinder in addition to our getGamsAlias and getEmailAddress methods. The initialize method is passed a PluginThreadContext; from this object, we can find out our process and thread numbers and use them to generate unique aliases and email addresses. We create a class called RegisteringUserStringBean that implements the StringBean interface, as shown in Listing 25.12.

Listing 25.12 RegisteringUserStringBean.java File
package com.gams.performance;

import net.grinder.plugin.http.StringBean;
import net.grinder.plugininterface.PluginThreadContext;
import net.grinder.common.Test;

public class RegisteringUserStringBean implements StringBean {
    private PluginThreadContext context;
    private String processIDString;
    private int count = 0;

    public void initialize(PluginThreadContext context) {
        this.context = context;
        // The Grinder ID ends with the worker process number (for example, "myhost-0")
        String idString = context.getGrinderID();
        int split = idString.lastIndexOf("-");
        processIDString = idString.substring(split + 1);
    }

    // return an alias unique across processes, threads, and calls
    public String getGamsAlias() {
        count++;
        return "sampleuser" + processIDString + "A" +
               context.getThreadID() + "A" + count;
    }

    // return an email address unique across processes, threads, and calls
    public String getEmailAddress() {
        count++;
        return processIDString + "A" + context.getThreadID() +
               "A" + count + "@abc.com";
    }

    public boolean doTest(Test test) { return false; }
    public void endCycle() { }
    public void beginCycle() { }
}

First, we executed registeringuser.properties with 10 simultaneous users for 30 samples (30 seconds), ignoring the first 5 samples. This created 358 new users with a mean time of 979ms. We then erased these test users from WebLogic Server security and from the CUSTOMER table in the database; we created a class called com.gams.util.RemoveTestUsers to do just that. We then executed with 50 simultaneous users and got a mean time of 2790ms with a throughput of 17.8 TPS. Because this type of user represents a small percentage of the total users, we were happy with this performance. At this rate, we can register almost 18 new customers a second, which equates to over 1.5 million new customers a day.
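A sketch of the database half of such a cleanup utility follows. The CUSTOMER column name and the JDBC URL are assumptions, and the matching accounts must also be removed from the WebLogic security realm, a step not shown here:

package com.gams.util;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class RemoveTestUsers {
    public static void main(String[] args) throws Exception {
        Class.forName("oracle.jdbc.driver.OracleDriver");
        Connection con = DriverManager.getConnection(
            "jdbc:oracle:thin:@dbhost:1521:GAMS", "gams", "gams"); // assumed URL
        // Test aliases all start with "sampleuser" (see RegisteringUserStringBean)
        PreparedStatement ps = con.prepareStatement(
            "DELETE FROM CUSTOMER WHERE GAMSALIAS LIKE ?");
        ps.setString(1, "sampleuser%");
        int removed = ps.executeUpdate();
        System.out.println("Removed " + removed + " test users");
        ps.close();
        con.close();
    }
}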

Simulating Sellers

To simulate sellers, we created three data files: login.dat, listitem1.dat, and listitem2.dat. These files were created by TCPSniffer and then modified by us, and they are used with seller.properties as shown in Listing 25.13. The file login.dat holds the username and password of one of our pre-registered customers. We post this file to j_security_check to use J2EE Web application security. We then call ListItemServlet twice, once with listitem1.dat and once with listitem2.dat, and finally call logout.jsp. listitem2.dat includes 19KB of picture information.

Listing 25.13 seller.properties File
grinder.processes=1
grinder.threads=1
grinder.cycles=1

grinder.jvm.arguments=-Xint

grinder.receiveConsoleSignals=true
grinder.reportToConsole=true
grinder.logDirectory=c:\\grinderlogs

grinder.plugin=net.grinder.plugin.http.HttpPlugin
grinder.plugin.parameter.useCookies=true

grinder.test300.parameter.url=http://localhost:7001/j_security_check
grinder.test300.description=Login
grinder.test300.parameter.post=login.dat

grinder.test301.parameter.url=http://localhost:7001/ListItemServlet
grinder.test301.description=ListItem1
grinder.test301.parameter.header.Content-Type=multipart/form-data; boundary=---------------------------7d2d1165296
grinder.test301.parameter.post=listitem1.dat
grinder.test301.parameter.ok=listed

grinder.test302.parameter.url=http://localhost:7001/ListItemServlet
grinder.test302.description=ListItem2
grinder.test302.parameter.header.Content-Type=multipart/form-data; boundary=---------------------------7d2d1180122
grinder.test302.parameter.post=listitem2.dat
grinder.test302.parameter.ok=listed

grinder.test303.parameter.url=http://localhost:7001/logout.jsp
grinder.test303.description=Logout

Executing seller.properties with 10 and 100 users for 30 samples, ignoring the first 5 samples, produces the mean times shown in Table 25.11. These times are well within our performance requirements.

Table 25.11. Mean Times for seller.properties

Description    10 Users     100 Users
Login          222          1960
ListItem1      339          2500
ListItem2      639          4890
Logout         124          2530
Total          328 (Avg)    3000 (Avg)
Total TPS      26.1         29.6

Buyers

As shown in Listing 25.14, we take a similar approach to simulating buyers as we did for sellers. The login and logout process is the same. In this case, we call PlaceBidServlet, posting the contents of placebid1.dat. This file was created by TCPSniffer.

Listing 25.14 buyer.properties File
grinder.test401.parameter.url=http://localhost:7001/j_security_check
grinder.test401.description=Login
grinder.test401.parameter.header.Content-Type=application/x-www-form-urlencoded
grinder.test401.parameter.post=login.dat

grinder.test402.parameter.url=http://localhost:7001/PlaceBidServlet
grinder.test402.parameter.header.Content-Type=application/x-www-form-urlencoded
grinder.test402.parameter.post=placebid1.dat
grinder.test402.description=Place Bid

grinder.test403.parameter.url=http://localhost:7001/logout.jsp
grinder.test403.description=Logout

Executing buyer.properties with 10 and 100 users for 30 samples, ignoring the first 5 samples, produces the mean times shown in Table 25.12. These times are well within our performance requirements.

Table 25.12. Mean Times for buyer.properties

Description    10 Users     100 Users
Login          218          1480
Place Bid      196          1130
Logout         27.4         1100
Total          147 (Avg)    1240 (Avg)
Total TPS      57.8         73.0

GAMS Accounting Fee Report

At any point in time, a member of the GAMS accounting department may log in and run the Display Fee Report, which lists the 50 most recent selling and listing fees charged by GAMS. To simulate these users, we created the test script shown in Listing 25.15. We post the file acctlogin.dat, containing the username and password of the GAMS accounting person, to j_security_check.

Listing 25.15 accounting.properties File
grinder.test500.sleepTime=1000
grinder.test500.parameter.url=http://localhost:7001/j_security_check
grinder.test500.description=AccountingLogin
grinder.test500.parameter.post=acctlogin.dat

grinder.test501.sleepTime=1000
grinder.test501.parameter.url=http://localhost:7001/accounting/displayFeeReport.jsp
grinder.test501.description=DisplayFeeReport
grinder.test501.parameter.ok=Display

grinder.test502.sleepTime=1000
grinder.test502.parameter.url=http://localhost:7001/logout.jsp
grinder.test502.description=AccountingLogout

Running accounting.properties with 10 and 100 users for 30 samples, ignoring the first 5 samples, produces the mean times shown in Table 25.13. Because there would be only one accounting person on GAMS at a time, this was largely an academic exercise.

Table 25.13. Mean Times for accounting.properties

Description           10 Users      100 Users
Accounting Login      220           3980
Display Fee Report    9650          15400
Accounting Logout     11            Did not run
Total                 2510 (Avg)    5560 (Avg)
Total TPS             2.8           2.76

Running All Tests Together

Finally, we need to run all of these test scripts at the same time to simulate all user types accessing the application simultaneously. To simulate 100 users with our initial user type percentages, we create 75 guests, 5 registering users, 10 sellers, and 10 buyers. With 100 users, we are well within the maximum acceptable response times (MART) discussed earlier, as shown in Table 25.14. With 200 users, most of the tests are still within the MART; the exceptions are header.jsp, which exceeds it, and the second Display Item, which never got called.
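Each script contributes its share of the mix through its own grinder.threads setting; our reading of the percentages is grinder.threads=75 in guest.properties, 5 in registeringuser.properties, 10 in seller.properties, 10 in buyer.properties, and 1 in accounting.properties (the exact counts are an assumption). Each script is then started in its own command prompt, all reporting to the same console:

java net.grinder.Grinder guest.properties
java net.grinder.Grinder registeringuser.properties
java net.grinder.Grinder seller.properties
java net.grinder.Grinder buyer.properties
java net.grinder.Grinder accounting.properties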

Table 25.14. Mean Times for All Scripts

Description           100 Users    200 Users
header.jsp            445          7480
menu.jsp              491          248
Featured Search       1210         2960
Category Search       640          1650
Display Item          145          756
Keyword Search        627          561
Display Item          302          ?
Register User         821          1010
Login                 420          1760
ListItem1             487          1010
ListItem2             1520         1580
Logout                434          604
Login                 881          1110
Place Bid             340          511
Logout                175          381
Accounting Login      1260         2460
Display Fee Report    860          1080
Accounting Logout     356          383
Total                 602 (Avg)    1780 (Avg)
Total TPS             40.2         43.6
