10 Common Myths about Performance Testing

Posted on Jun 14 2012 - 9:22am by Raj

Common goals for performance testing initiatives are to validate that an application performs properly under a defined set of load conditions, to confirm that it meets the performance needs of the business, to identify and resolve performance problems early, and to validate the hardware configurations required to meet the stated performance requirements as well as the anticipated future demands of the application.
Performance testing initiatives are not always “silver bullets” that can quickly and dramatically mitigate the risks of late detection of a poorly performing application or infrastructure. Goals should therefore be set so that the group can achieve “quick wins.”
Performance testing gives us detail on the speed, scalability, and robustness of the application under given conditions of system configuration, hardware, and number of users.

Myth # 1: The application must pass a functional test before Performance testing can be conducted.

Performance testing should be carried out as soon as the software application under test is stable, and it can occur in each test phase. It can be done for a specific code set in the form of white-box testing. Developers also sometimes use query analyzer tools to validate query performance and fine-tune the database aspects of the software. This helps fix performance bugs while the application is under development rather than towards the release of the software.

The advantage of early performance testing is a reduced cost of quality, because performance issues are fixed early in the cycle. During each testing cycle we can perform incremental performance testing of the integrated modules, and each incremental test should be compared with the benchmark to determine whether performance is improving or degrading, as in the sketch below.
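As a minimal illustration of such a comparison, the following sketch (Python, with a hypothetical endpoint, sample size, and benchmark threshold) measures the response time of one transaction in the current build and compares it against the agreed benchmark:

# Minimal sketch: compare a build's measured response time against a stored
# benchmark. The URL, sample size, and threshold are hypothetical values.
import time
import urllib.request

ENDPOINT = "http://test-server.example.com/search"   # hypothetical endpoint
BENCHMARK_P95_SECONDS = 0.8                           # agreed benchmark (assumed)
SAMPLES = 50

def measure_once(url):
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

timings = sorted(measure_once(ENDPOINT) for _ in range(SAMPLES))
p95 = timings[int(0.95 * (len(timings) - 1))]  # approximate 95th percentile

print(f"p95 response time: {p95:.3f}s (benchmark: {BENCHMARK_P95_SECONDS}s)")
print("improving" if p95 <= BENCHMARK_P95_SECONDS else "degrading")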

The scalability and stability aspects can be tested once the entire functionality has been implemented and tested.

Myth # 2: Performance testing is an extension of functional testing and is performed to cover all the features.

The main purpose of performance testing is system tuning. As stated above, it is performed to test the response time, stability, and scalability of the software application in conjunction with the hardware used.

The primary objective of performance testing is to identify application and infrastructure issues that surface under extreme conditions such as high concurrency, heavy loads, or inadequate infrastructure. Specifically, performance testing will identify problems with memory, processor cycles, network bandwidth, and disk capacity that result from unexpected usage patterns. Additionally, performance testing may identify inefficient or sub-optimized application code, intrusive database code, and poor database or server configuration. With early identification of these risks and proper mitigation planning, development and testing teams have an opportunity to address an issue before it becomes a production incident. In the simplest terms, the primary objective is to stress a system to the point of breaking as early in the project as possible, in order to reveal harmful conditions that may result in loss of data and to provide the fastest possible early detection.

The objective of performance testing is not to validate the functionality of the application. It is done to validate the application against predefined benchmarks related to user experience: response time, application behavior under load for a given number of users, the number of concurrent users (users performing the same action at the same time), and the capacity of the system, such as data capacity and file server capacity.

To define the framework and methodology of performance testing, these are the basic steps to consider:
• Identify the business, system, and user requirements that define the benchmark.
• Identify system usage and key metrics, such as response time, that can measure the benchmark.
• Develop a test plan and performance test that use real business transactions and data.
• Install and prepare the testing environments, tools, and resource monitors.
• Script the performance tests as designed (see the sketch after this list).
• Execute, monitor and validate tests, test data, and execution results.
• Analyze the test results. 
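
As an example of the scripting step, here is a minimal load-generation sketch (Python, with a hypothetical URL, user count, iteration count, and think-time range) in which concurrent virtual users repeatedly exercise one business transaction and record response times:

# Minimal load-generation sketch: N concurrent virtual users repeatedly hit one
# business transaction and record response times. URL, user count, iterations,
# and think time are hypothetical values, not recommendations.
import random
import threading
import time
import urllib.request

URL = "http://test-server.example.com/checkout"  # hypothetical transaction
VIRTUAL_USERS = 20
ITERATIONS_PER_USER = 10
THINK_TIME_RANGE = (1.0, 3.0)  # seconds a user "thinks" between requests

results = []
lock = threading.Lock()

def virtual_user():
    for _ in range(ITERATIONS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=30) as resp:
            resp.read()
        elapsed = time.perf_counter() - start
        with lock:
            results.append(elapsed)
        time.sleep(random.uniform(*THINK_TIME_RANGE))  # think time between pages

threads = [threading.Thread(target=virtual_user) for _ in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"requests: {len(results)}, avg: {sum(results)/len(results):.3f}s, max: {max(results):.3f}s")

In a real engagement, a commercial or open-source load tool would typically replace this hand-rolled script, but the structure (virtual users, think times, recorded timings) stays the same.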

Myth # 3: System throughput increases linearly with the increase in concurrent users.

System throughput and the number of concurrent users have a roughly linear relationship at the initial levels of load. However, as concurrent users increase further, throughput no longer grows linearly. When the number of concurrent users reaches a certain value, system capacity approaches saturation (and hardware resources may reach critical levels), which slows the response time of the application under test. With a further increase in the number of concurrent users, the system's processing capacity decreases and the system can eventually crash. Thus, if we plot system throughput against concurrent users, the curve rises, flattens at saturation, and then falls off rather than continuing to climb.
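To make the shape of that curve concrete, here is a toy model (Python, with invented capacity numbers) in which throughput grows roughly linearly at first, flattens near system capacity, and then degrades as contention sets in:

# Toy model of throughput vs. concurrent users: linear growth, saturation at
# system capacity, then degradation under overload. All numbers are invented
# for illustration only.
PER_USER_RATE = 5.0     # requests/sec one user generates when unconstrained
CAPACITY = 500.0        # requests/sec the system can actually process
OVERLOAD_POINT = 150    # users beyond which contention starts eroding throughput

def throughput(users):
    offered = users * PER_USER_RATE
    served = min(offered, CAPACITY)
    if users > OVERLOAD_POINT:
        # past the overload point, queuing and contention erode effective throughput
        served *= max(0.0, 1.0 - 0.004 * (users - OVERLOAD_POINT))
    return served

for users in (10, 50, 100, 150, 200, 300, 400):
    print(f"{users:4d} users -> {throughput(users):7.1f} req/s")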

Myth # 4: We must find ways to achieve 100% of the performance indicators defined by the customer.

Indicators provided by the customer are intended more for a feasibility analysis; they can be achieved only in an ideal state. For example, a server specification may state that the machine is able to carry 10,000 users per second at a transmission rate of 200 KB. However, maintaining the same response time under varying circumstances can be challenging with the same infrastructure, and achieving it can require heavy investment in the hardware side of the system.
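To see why the ideal-state figure implies a heavy infrastructure commitment, a quick back-of-the-envelope calculation helps. Reading the specification as 10,000 requests per second, each transferring 200 KB, is only one possible interpretation, but it gives a sense of scale:

# Back-of-the-envelope bandwidth check for the quoted specification.
# Interpreting "10,000 users per second at 200 KB" as 10,000 requests/sec,
# each transferring 200 KB, is an assumption made for illustration.
requests_per_second = 10_000
kilobytes_per_request = 200

megabytes_per_second = requests_per_second * kilobytes_per_request / 1024
gigabits_per_second = megabytes_per_second * 8 / 1024

print(f"~{megabytes_per_second:,.0f} MB/s, or ~{gigabits_per_second:.1f} Gbit/s of sustained bandwidth")

Sustaining on the order of 15 Gbit/s of traffic is not something a single modest server and network link can be assumed to do, which is exactly why chasing 100% of an ideal-state indicator can drive up hardware cost.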

Successful performance testing engagements rely on a proven approach that yields accurate metrics for analysis and recommendations. In general, the approach must be able to reproduce workload conditions for both user load and volume of data, reproduce key scenarios, and support interpretation of key performance metrics. Listed below are the key activities required in a well-defined approach; a sketch of a workload model that captures them follows the list:
• Identification of key scenarios
• Workload patterns
• Key scenarios and the associated user-specific data
• Target load levels
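
These activities can be captured in a simple workload model before any scripting begins. The sketch below (Python, with hypothetical scenarios, percentages, and load levels, none of which come from the article) is one way to record key scenarios, their mix, the user-specific data they need, and the target load:

# Hypothetical workload model: key scenarios, their share of the workload,
# the user-specific data each needs, and the target load levels. All values
# are invented placeholders that show the structure, not real requirements.
workload_model = {
    "scenarios": [
        {"name": "browse_catalog", "mix_percent": 60, "user_data": ["session_id"]},
        {"name": "search",         "mix_percent": 25, "user_data": ["session_id", "search_terms"]},
        {"name": "checkout",       "mix_percent": 15, "user_data": ["account", "payment_token"]},
    ],
    "target_load": {
        "normal_concurrent_users": 500,
        "peak_concurrent_users": 1500,
        "ramp_up_minutes": 15,
        "steady_state_minutes": 60,
    },
}

# Sanity check: the scenario mix should account for the whole workload.
assert sum(s["mix_percent"] for s in workload_model["scenarios"]) == 100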

Myth # 5: Different types of testing, such as stress testing, load testing, and capacity testing, should be performed separately, one after the other.

Based on the objectives of the software application under test, different types of performance testing are performed: load testing, stress testing, and endurance testing. The objective of performance testing is to ensure that the system behaves as expected in a realistic production environment; hence, the overall set of performance test scenarios should include the various types of tests.
A typical report covers details for scenarios related to all types of performance testing:
• # of users running (total)
• # of tests or requests (tests per second)
• Response times (seconds or milliseconds)
• Throughput (KB/sec, total and per virtual user)
• Processor resource utilization (%, kernel, user)
• Memory resource utilization (% of total, bytes in use)
• Network resource utilization (% bandwidth, bytes)
• Disk resource utilization (bytes written, read)
• Request metrics (requests/sec, # of connections, queuing)
• Database metrics (memory, locks, deadlocks, blocking transactions, timeouts)
The results report should provide an overview of how the testing was conducted, the conditions of the test, the load or stress placed on the system, the number of users, think times (the delays that occur while users view or enter content in the application), other user delays, the scenarios executed, and a summary of pass or fail conditions. It should also include an in-depth analysis of each bottleneck and its possible causes or solutions.
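Many of these report figures can be derived from the raw request log captured during a run. A minimal sketch (Python, assuming a list of per-request timings collected over a known test duration; the sample data is invented) of computing throughput and response-time percentiles:

# Minimal sketch: derive report metrics (throughput and response-time
# percentiles) from raw per-request timings. The sample data and the test
# duration below are invented for illustration.
response_times = [0.21, 0.35, 0.18, 0.42, 1.10, 0.27, 0.33, 0.95, 0.22, 0.48]  # seconds
test_duration_seconds = 5.0

def percentile(sorted_values, p):
    # nearest-rank style percentile, good enough for a sketch
    index = int(p / 100 * (len(sorted_values) - 1))
    return sorted_values[index]

ordered = sorted(response_times)
print(f"requests/sec : {len(response_times) / test_duration_seconds:.1f}")
print(f"average (s)  : {sum(response_times) / len(response_times):.3f}")
print(f"p90 (s)      : {percentile(ordered, 90):.3f}")
print(f"p95 (s)      : {percentile(ordered, 95):.3f}")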

Myth # 6: Relying on a performance testing tool is enough to accurately locate system bottlenecks.

Although performance test tools reduce the effort and simplify the creation of scripts, they can still produce poorly designed scripts. It is critical that the performance engineers have sufficient training in the use of the tool, methodologies, and best practices for performance testing initiatives. For organizations executing their first performance testing project, the project team should be required to have several years of actual hands-on experience. Modern performance tools can provide a wealth of information; however, in the hands of an inexperienced engineer they can produce erroneous performance results that waste time. There is no substitute for real-life experience. It is extraordinarily important to understand the current maturity level of your processes and the capabilities of your team members. Successful performance testing projects that meet the stated goals require mature people and processes, along with experience in the use of the tools.

Myth # 7: The number of online users is the number of concurrent users, and a higher number of concurrent users means a large number of PV (page views).

In reality, only a fraction of online users are active at any given moment, so the number of concurrent users is usually much smaller than the number of online users. Page views depend on both concurrency and activity: number of concurrent users × number of pages each user accesses = PV.
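As a quick illustration of those relationships (all numbers invented, not benchmarks):

# Invented numbers to illustrate the relationships between online users,
# concurrent users, and page views; none of these values are benchmarks.
online_users = 10_000          # users with an open session
active_fraction = 0.10         # share actually doing something at a given moment
pages_per_active_user = 8      # pages each active user views in the period

concurrent_users = int(online_users * active_fraction)
page_views = concurrent_users * pages_per_active_user

print(f"concurrent users: {concurrent_users}")  # 1,000 -- far fewer than 10,000 online
print(f"page views (PV) : {page_views}")        # 8,000 over the period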

Myth # 8: We can improve the performance of the system by improving the hardware configuration.

Adding hardware will not necessarily improve the performance of the system, nor can it guarantee that the expected benchmark will be achieved. Issues rooted in bad programming, such as memory leaks, can consume memory over a period of time and eventually cause the system to crash. Similarly, badly managed database connection configuration, database deadlocks, and flawed algorithm logic can lead to slow processing. All of these issues can be uncovered during performance testing.
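As a simple illustration of the kind of defect that extra hardware cannot fix, the sketch below (Python, with a hypothetical cache and request handler) grows memory without bound; a bigger server only delays the crash:

# Hypothetical unbounded cache: every request adds an entry and nothing is ever
# evicted, so memory use grows for as long as the process runs. Adding RAM only
# postpones the failure; the fix is in the code (eviction, TTL, or a bounded cache).
results_cache = {}

def expensive_computation(payload):
    return payload.upper()  # stand-in for real work

def handle_request(request_id, payload):
    # BUG: the cache key is unique per request, so entries are never reused
    # and never removed.
    results_cache[request_id] = expensive_computation(payload)
    return results_cache[request_id]

for i in range(3):
    handle_request(f"req-{i}", "some payload")
print(f"cache entries so far: {len(results_cache)}")  # keeps growing in production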

Myth # 9: The performance test is independent of the functional test.

When to start a performance testing initiative is always a challenging decision. If the application has not been functionally tested and its functionality is not implemented correctly, that volatility results in performance testing scripts that are continuously under rework or maintenance. If the application's features are incomplete, or the code being exercised is inefficient, performance problems often follow. Hence, functional testing should be performed beforehand to find these problems.

Myth # 10: It is easy to set up the environment for performance testing.

During the planning stage, the test lab hardware and software architecture for the application must be defined, procured, and implemented. The planning document should detail the hardware, software, and network architecture required to support the application and the performance test monitoring requirements.

To ensure accurate test results, the test lab should be engineered and configured as close to the actual production environment as possible. Prior to the execution of each performance test, a review of the configuration and the readiness of the lab should be performed; this validation ensures that the diagnostic tools are ready for collection and monitoring. Activities to perform prior to test execution include the following (a small readiness-check sketch follows the list):

• Install and configure the test hardware, including servers and storage
• Install and configure the test network with the proper routing and simulators
• Install the testing tool components, including load agents and controllers
• Enable and configure the performance monitoring software and diagnostic tools
• Install the supporting software required for the application
• Install and configure the application you are testing
• Import the application test data used for the test execution and scripts

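Part of that readiness review can be automated. A small sketch (Python, with hypothetical host names and ports) that confirms the application server, a load agent, and the monitoring endpoint are reachable before a run starts:

# Minimal pre-run readiness check: confirm that the hosts involved in the test
# are reachable. Host names and ports are hypothetical placeholders.
import socket

CHECKS = {
    "application server": ("app.test-lab.example.com", 443),
    "load agent 1":       ("agent1.test-lab.example.com", 5001),
    "monitoring server":  ("monitor.test-lab.example.com", 9090),
}

def reachable(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in CHECKS.items():
    status = "ready" if reachable(host, port) else "NOT REACHABLE"
    print(f"{name:20s} {host}:{port} -> {status}")
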
After execution, the configuration should be validated against the reported results to ensure that graphs and other reporting metrics are in alignment with the actual hardware configuration. For example, if the server was configured with 16 GB of RAM but the analysis report shows a graph with a maximum of only 4 GB, the configuration should be re-checked to confirm that the system was set up properly and that the report is not misrepresenting the results.
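A post-run sanity check of this kind can also be scripted. The sketch below (Python, using the third-party psutil library and a hypothetical expected value) compares the memory actually visible on the server with what the lab configuration document says should be there:

# Post-run sanity check: compare the memory visible to the OS with the amount
# the test lab configuration claims. The expected value is a hypothetical
# placeholder; psutil must be installed (pip install psutil).
import psutil

EXPECTED_RAM_GB = 16  # from the lab configuration document (assumed)

actual_ram_gb = psutil.virtual_memory().total / (1024 ** 3)
print(f"configured: {EXPECTED_RAM_GB} GB, detected: {actual_ram_gb:.1f} GB")

if actual_ram_gb < EXPECTED_RAM_GB * 0.9:
    print("WARNING: detected memory is well below the configured value;"
          " re-check the server build and the report before trusting the graphs")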
