How to Performance Test
It is very important to have a well-defined methodology when performance testing your applications, and that methodology should include precise goals that must be met. Without goals and a methodology for achieving them, performance testing could go on indefinitely. For example, if your goal is simply to make the application run as fast as possible, how will you know when you have achieved it? Your team might spend months making the application perform 1% better; in most situations, the result would not justify that investment of time and resources. In this section, we discuss how to quantitatively measure performance, establish reasonable performance goals, and identify performance issues within the system.
It is important for everyone involved in performance testing, including business people, quality assurance staff, project managers, and software developers, to share the same definition of how performance is measured. Performance is usually tied to the type of application. Most applications fall into one of the following types:
The performance goals for an application are directly tied to its type and expected user load. For example, a performance goal could be that with 500 simultaneous users, the maximum response time of our WebLogic application should be no more than 5 seconds, except for user registration, which can take up to 9 seconds.
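A goal stated this way can be made concrete as a pass/fail check over recorded response times. The following sketch is illustrative only (the class, method, and sample timings are not from any particular load testing tool); it simply encodes the two thresholds from the goal above:

```java
public class ResponseTimeGoal {
    // Thresholds from the stated goal: 5 seconds for general requests,
    // 9 seconds for user registration (both in milliseconds).
    static final long GENERAL_LIMIT_MS = 5000;
    static final long REGISTRATION_LIMIT_MS = 9000;

    /** Returns true only if every measured response time is within its limit. */
    static boolean meetsGoal(long[] generalTimesMs, long[] registrationTimesMs) {
        for (long t : generalTimesMs) {
            if (t > GENERAL_LIMIT_MS) return false;
        }
        for (long t : registrationTimesMs) {
            if (t > REGISTRATION_LIMIT_MS) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        long[] general = {1200, 850, 4700, 2300};  // sample measurements, ms
        long[] registration = {6100, 8800};
        System.out.println(meetsGoal(general, registration)); // prints true
    }
}
```

Expressing the goal as an executable check like this keeps the whole team working against the same numbers, and makes "done" unambiguous when the load test runs.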
After we have defined our goals, we execute our WebLogic application using a simulated number of users and we measure its performance. Load testing tools come in many shapes, sizes, and prices from open source to commercial versions that run in the thousands of dollars. LoadRunner for BEA WebLogic from Mercury Interactive and OptiBench Enterprise from Performant are load testing tools that work with WebLogic Server. More information about LoadRunner is available at http://www-heva.mercuryinteractive.com/products/loadrunner/. Visit http://www.performant.com/solutions/ for more information about OptiBench.
Using a load testing tool, we exercise functionality within the application many times and report an average response time or a transactions-per-second rate; averaging over many runs makes the readings more reliable.
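The two metrics mentioned here are simple arithmetic over the raw measurements. As a minimal sketch (the class and method names are our own, not part of any load testing tool's API):

```java
public class LoadMetrics {
    /** Mean response time in milliseconds over all samples. */
    static double averageMs(long[] samplesMs) {
        long total = 0;
        for (long t : samplesMs) {
            total += t;
        }
        return (double) total / samplesMs.length;
    }

    /** Throughput: completed transactions divided by elapsed wall-clock seconds. */
    static double transactionsPerSecond(int completed, double elapsedSeconds) {
        return completed / elapsedSeconds;
    }

    public static void main(String[] args) {
        long[] samples = {900, 1100, 1000};  // three simulated request timings, ms
        System.out.println(averageMs(samples));              // prints 1000.0
        System.out.println(transactionsPerSecond(300, 60));  // prints 5.0
    }
}
```

Commercial load testing tools report these same figures, usually alongside percentiles, which are often more informative than the mean when response times are skewed.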
After benchmarking your WebLogic application and discovering performance issues, the next step is determining what is causing the problems. Although the sources of the problem can vary, we have a variety of tools at our disposal to narrow the possibilities. To find our bottlenecks, we use tools that monitor network usage, CPU utilization, memory allocation, and database execution plans, along with code profilers. We now take a closer look at using these tools to profile the various system components.
Profiling WebLogic Server
The Administration Console can be used to monitor memory usage and garbage collection activity at the server level; garbage collection shows up as large drops in the current memory usage. The Administration Console can also be used to monitor the throughput and activity of the default execute queue, as well as the JDBC connection pools.
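The Administration Console is the tool described here, but the same heap and garbage collection numbers can also be read programmatically through the standard java.lang.management API (part of the JDK since J2SE 5.0). The sketch below uses only standard platform MXBeans, with no WebLogic-specific classes:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSnapshot {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // A large drop in "used" between successive snapshots indicates that a
        // garbage collection ran, just as on the console's memory graph.
        System.out.println("heap used (bytes): " + heap.getUsed());
        System.out.println("heap max  (bytes): " + heap.getMax());

        // Cumulative collection counts per garbage collector in this JVM.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + " collections so far: " + gc.getCollectionCount());
        }
    }
}
```

Polling these values on a schedule gives you a crude memory graph for any JVM, which is handy when you want to log trends over a long soak test rather than watch a console.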
Profiling the Database
Every major RDBMS includes monitoring software that displays the number of connections to the database and the SQL statements currently executing. Developers can also ask the database for an execution plan showing how a SQL statement will be run, and then tune the SQL and the database to optimize performance.
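When tuning a statement, it helps to measure its elapsed time before and after each change. A minimal timing harness is sketched below; the Runnable here is a stand-in for real work, since in practice you would wrap a JDBC call (for example, a Statement.executeQuery invocation) and compare timings across tuning iterations:

```java
public class QueryTimer {
    /** Runs an operation and returns its elapsed wall-clock time in milliseconds. */
    static long timeMs(Runnable operation) {
        long start = System.currentTimeMillis();
        operation.run();
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        // Stand-in for the real work; in practice, wrap the JDBC call whose
        // SQL you are tuning and rerun this after each change to the query.
        Runnable simulatedQuery = () -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { }
        };
        System.out.println("elapsed ms: " + timeMs(simulatedQuery));
    }
}
```

As with load testing, run the timed statement several times and compare averages; a single reading is easily skewed by caching and other activity on the database.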
Profiling Network Traffic
With the abundance of cheap, fast, reliable networking hardware available these days, the network itself is rarely a source of resource contention in enterprise applications. Still, it is important to have some quantitative understanding of the network bandwidth an application requires so that the appropriate hardware can be acquired. To this end, there are several network sniffing applications available that can inspect all the traffic on a TCP network. One such tool, Ethereal, is available at http://www.ethereal.com.
Ethereal enables you to see all the network traffic coming and going through a network interface. You can filter this data by port, destination, and protocol to give you a better picture of your network utilization. For example, capturing all the data moving across port 7001 of your WebLogic server machine will enable you to see how many bytes per request your application is receiving and sending.
To see how our WebLogic application itself is behaving, we use a Java profiler. Although the HotSpot JVM offers several built-in options for monitoring heap allocation and CPU usage, they are not very helpful when you want to profile only your WebLogic application rather than WebLogic Server as a whole. However, there are many commercial profiling tools, such as JProbe and OptimizeIt, that track the resources (such as memory and CPU) your WebLogic application is using, as well as the specific Java code that may be using those resources inefficiently. Both of these products help reveal hotspots in the application caused by excessive CPU cycles or by allocating too much memory. Information about JProbe is available at http://www.sitraka.com/software/jprobe/, and information about OptimizeIt is available at http://www.borland.com/optimizeit/.
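The core idea behind many CPU profilers is stack sampling: snapshot every thread's stack at intervals, and methods that keep appearing at the top are likely hotspots. As a rough, free approximation of what the commercial tools do (this sketch is our own illustration, not how JProbe or OptimizeIt are implemented), you can sample stacks with the standard Thread.getAllStackTraces call:

```java
import java.util.Map;

public class StackSampler {
    static volatile long sink; // prevents the JIT from removing the hot loop

    public static void main(String[] args) throws InterruptedException {
        // Burn some CPU in a named worker thread so there is something to sample.
        Thread worker = new Thread(() -> {
            for (long i = 0; i < 50_000_000L; i++) sink += i;
        }, "hot-loop");
        worker.start();

        // Take a few stack snapshots; a method that repeatedly shows up at the
        // top of a thread's stack is a likely CPU hotspot.
        for (int sample = 0; sample < 3; sample++) {
            for (Map.Entry<Thread, StackTraceElement[]> e
                     : Thread.getAllStackTraces().entrySet()) {
                StackTraceElement[] stack = e.getValue();
                if (stack.length > 0) {
                    System.out.println(e.getKey().getName() + " @ " + stack[0]);
                }
            }
            Thread.sleep(20);
        }
        worker.join();
    }
}
```

A real profiler samples far more often, aggregates the results into call trees, and also tracks allocations, which is why the commercial tools remain worth their price for serious tuning work.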
Although the process of optimizing WebLogic applications can seem daunting at times, you will typically find that performance problems stem from a lack of resources. The problem can sometimes be solved by re-architecting the application or by performance tuning; at other times, there is no alternative to adding more resources. The following are common places to look for a performance problem due to lack of resources: