
How to Performance Test

It is very important to have a well-defined methodology when performance testing your applications. That methodology should include precise goals that must be met. Without goals and a methodology for achieving them, performance testing could potentially never end. For example, if your goal is to make the application run as fast as possible, when will you know that you have achieved it? Your team might spend months making the application perform 1% better, and in most situations the outcome would not justify that investment of time and resources. In this section, we discuss how to quantitatively measure performance, establish reasonable performance goals, and identify performance issues within the system.

Measuring Performance

It is important for everyone involved in performance testing to share the same definition of how performance is measured. This includes business people, quality assurance people, project managers, and software developers. Performance is usually tied to the type of application. Most applications fall under one of the following types:

  • Interactive applications— These applications are used by people and involve a user interface. Users enter data, submit requests, and wait for a response. The performance of the application is based on how long users must wait for a response. This is referred to as response time and is measured in seconds or smaller units, such as milliseconds. Average response time (ART) is calculated by adding up the response times for an operation and dividing the total by the number of times the operation is executed (a short calculation sketch follows this list).

  • Backend applications— These applications perform batch processing of a large number of tasks or transactions. It is very important that everyone agrees on what a transaction is in the context of the application. The performance of these applications is based on how many transactions can be performed in a specific amount of time. This is referred to as transactions per unit of time. The unit of time is typically a second, hence the acronym TPS, or transactions per second.
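
To make these two measurements concrete, here is a minimal sketch (our own illustration, not taken from any particular load testing tool) that computes average response time from recorded timings and transactions per second from a batch run:

  public class PerformanceMetrics {

      // Average response time (ART): total of the recorded response times
      // divided by the number of times the operation was executed.
      public static double averageResponseTimeMillis(long[] responseTimesMillis) {
          long total = 0;
          for (int i = 0; i < responseTimesMillis.length; i++) {
              total += responseTimesMillis[i];
          }
          return (double) total / responseTimesMillis.length;
      }

      // Transactions per second (TPS): number of completed transactions
      // divided by the elapsed time of the run, expressed in seconds.
      public static double transactionsPerSecond(int completedTransactions,
                                                 long elapsedMillis) {
          return completedTransactions / (elapsedMillis / 1000.0);
      }
  }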

Defining Goals

The performance goals for an application are directly tied to the type of application and the user load. For example, a performance goal could be that with 500 simultaneous users, the maximum response time of our WebLogic application should be no more than 5 seconds, except for user registration, which can take up to 9 seconds.

Benchmarking

After we have defined our goals, we execute our WebLogic application under a simulated user load and measure its performance. Load testing tools come in many shapes, sizes, and prices, ranging from open source packages to commercial products that cost thousands of dollars. LoadRunner for BEA WebLogic from Mercury Interactive and OptiBench Enterprise from Performant are load testing tools that work with WebLogic Server. More information about LoadRunner is available at http://www-heva.mercuryinteractive.com/products/loadrunner/. Visit http://www.performant.com/solutions/ for more information about OptiBench.

Using a load testing tool, we exercise functionality within the application multiple times and average the response times or transaction rates to make our readings more accurate.
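
As a rough illustration of what such a tool automates, the following sketch (a simplified stand-in, not LoadRunner or OptiBench; the URL, user count, and request count are placeholders) drives a number of simulated users against a single HTTP URL and reports the average response time:

  import java.io.InputStream;
  import java.net.HttpURLConnection;
  import java.net.URL;

  public class SimpleLoadTest {
      public static void main(String[] args) throws Exception {
          final String url = "http://localhost:7001/myapp/somePage.jsp"; // placeholder
          final int users = 50;            // simulated simultaneous users
          final int requestsPerUser = 20;  // requests sent by each user
          final long[] totalMillis = new long[users];

          Thread[] threads = new Thread[users];
          for (int i = 0; i < users; i++) {
              final int user = i;
              threads[i] = new Thread(new Runnable() {
                  public void run() {
                      try {
                          for (int r = 0; r < requestsPerUser; r++) {
                              long start = System.currentTimeMillis();
                              HttpURLConnection conn =
                                  (HttpURLConnection) new URL(url).openConnection();
                              InputStream in = conn.getInputStream();
                              while (in.read() != -1) { /* drain the response */ }
                              in.close();
                              totalMillis[user] += System.currentTimeMillis() - start;
                          }
                      } catch (Exception e) {
                          e.printStackTrace();
                      }
                  }
              });
              threads[i].start();
          }
          for (int i = 0; i < users; i++) {
              threads[i].join();
          }

          long grandTotal = 0;
          for (int i = 0; i < users; i++) {
              grandTotal += totalMillis[i];
          }
          System.out.println("Average response time: "
              + ((double) grandTotal / (users * requestsPerUser)) + " ms");
      }
  }

A real load testing tool adds what this sketch lacks, such as realistic ramp-up, think times, scripted multistep scenarios, and reporting, which is why we rely on one for serious benchmarking rather than a hand-rolled driver.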

Profiling

After benchmarking your WebLogic application and discovering performance issues, the next step is determining what is causing the problems. Although the sources of a problem can vary, we have a variety of tools at our disposal to narrow down the possibilities. To find our bottlenecks, we use tools that monitor network usage, CPU utilization, memory allocation, and database execution plans, as well as profilers for the application code itself. We now take a closer look at using these tools to profile the various system components.

Profiling WebLogic Server

The Administration Console can be used to monitor memory usage and garbage collection activity at the server level. Garbage collection activity shows up as sudden, large drops in the current memory usage. The Administration Console can also be used to monitor the throughput and the activity of the default execute queue. JDBC connection pools can be monitored from the Administration Console as well.
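
The memory picture the console graphs can also be approximated in code. The following sketch (our own illustration, using only standard JDK calls rather than any WebLogic API) samples the JVM heap periodically; a sudden, large drop in used memory between samples usually means a garbage collection has just run:

  public class HeapSampler implements Runnable {
      public void run() {
          Runtime rt = Runtime.getRuntime();
          long previousUsed = rt.totalMemory() - rt.freeMemory();
          while (true) {
              try {
                  Thread.sleep(5000); // sample every five seconds
              } catch (InterruptedException e) {
                  return;
              }
              long used = rt.totalMemory() - rt.freeMemory();
              long change = used - previousUsed;
              System.out.println("Heap used: " + (used / 1024) + " KB"
                  + " (change: " + (change / 1024) + " KB)");
              // A large negative change suggests a collection just occurred.
              previousUsed = used;
          }
      }

      public static void main(String[] args) {
          new Thread(new HeapSampler()).start();
      }
  }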

Profiling the Database

Every major RDBMS includes monitoring software that displays the number of connections to the database and the SQL statements currently executing. Developers can ask the database for an execution plan showing how a SQL statement will be run, and can then tune the SQL and the database to optimize performance.
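
The exact command is vendor specific. As one hedged example, against an Oracle database (assuming a release that provides the DBMS_XPLAN package; the connection details, table, and query below are placeholders) a developer can request and print a plan through plain JDBC:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  public class ShowExecutionPlan {
      public static void main(String[] args) throws Exception {
          // Classic Oracle thin driver; substitute your own driver and URL.
          Class.forName("oracle.jdbc.driver.OracleDriver");
          Connection conn = DriverManager.getConnection(
              "jdbc:oracle:thin:@dbhost:1521:mydb", "scott", "tiger");
          Statement stmt = conn.createStatement();

          // Ask the database to record a plan for the statement of interest.
          stmt.execute(
              "EXPLAIN PLAN FOR SELECT * FROM orders WHERE customer_id = 42");

          // Read the plan back and print it.
          ResultSet rs = stmt.executeQuery(
              "SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY())");
          while (rs.next()) {
              System.out.println(rs.getString(1));
          }

          rs.close();
          stmt.close();
          conn.close();
      }
  }

Other databases use their own syntax for the same idea, such as EXPLAIN in MySQL or SET SHOWPLAN_TEXT ON in Microsoft SQL Server.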

Profiling Network Traffic

With the abundance of cheap, fast, reliable networking hardware available these days, the network itself is rarely a source of resource contention in enterprise applications. Still, it is important to have some quantitative understanding of the network bandwidth an application requires so that the appropriate hardware can be acquired. To this end, there are several network sniffing applications available that can inspect all the traffic on a TCP network. One such tool, Ethereal, is available at http://www.ethereal.com.

Ethereal enables you to see all the network traffic coming and going through a network interface. You can filter this data by port, destination, and protocol to give you a better picture of your network utilization. For example, capturing all the data moving across port 7001 of your WebLogic Server machine will enable you to see how many bytes per request your application is receiving and sending.
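
As a concrete starting point, a capture filter such as the following (standard tcpdump/Ethereal capture filter syntax; 7001 is WebLogic Server's default listen port) limits the capture to exactly that traffic:

  tcp port 7001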

Profiling Application Code

To see how our WebLogic application is behaving, we use a Java profiler. Although the HotSpot JVM itself offers several options for monitoring heap allocation and CPU usage, they are not very helpful when trying to profile only your WebLogic application and not WebLogic Server itself. However, there are many commercial profiling tools, such as JProbe and OptimizeIt, that track the resources (such as memory and CPU usage) your WebLogic application is using, as well as the specific Java code that is potentially using those resources inefficiently. Both of these products help reveal hotspots in the application caused by using excessive CPU cycles or by allocating too much memory. Information about JProbe is available at http://www.sitraka.com/software/jprobe/ and information about OptimizeIt is available from http://www.borland.com/optimizeit/.
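
If a commercial profiler is not available, the JVM's built-in hprof agent gives a rough starting point. As a hedged example (the exact option syntax varies by JDK version, and the output file name is only a suggestion), CPU sampling and heap-allocation-site profiling can be enabled by adding an option like the following to the Java command line that starts the server:

  java -Xrunhprof:cpu=samples,depth=8,heap=sites,file=weblogic.hprof.txt ...

Keep in mind the limitation noted above: hprof profiles the entire JVM, so its report mixes WebLogic Server's own activity with that of your application.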

Interpreting Results

After the WebLogic application has been analyzed from several viewpoints, the results must be interpreted and solutions implemented.

Although the process of optimizing WebLogic applications can seem daunting at times, you will typically find that performance problems are associated with a lack of resources. The problem can sometimes be solved by re-architecting the application or by performance tuning. Other times, there is no alternative to adding more resources. The following are common places to look for a performance problem due to lack of resources:

  • CPU— While conducting load tests, there will be times when the application is not meeting performance requirements and the CPU of the WebLogic Server machine is at or near 100%. The cause can be either the WebLogic application itself or garbage collection. Later in this chapter, we will discuss how to monitor garbage collection.

  • I/O— There are times when the performance of a WebLogic application suffers due to its inability to interact quickly with the database or the network.

  • Database— In many cases, WebLogic applications will be bottlenecked by the RDBMS. Sometimes this is due to bad application architecture in which too many round trips to the database are occurring. In other cases, the database itself should be tuned, such as by adding indexes.

  • Network— Although this is rare because networks are quite fast these days, a network can become saturated with traffic to the point where client requests take too long to reach WebLogic Server and responses take too long to return to the client.
