How do you get the most from performance testing? Performance testing can be a laborious, fragile, and complex process that requires special skills to get up and running. It requires even deeper skills to interpret the results, especially when analyzing the root causes of performance and scalability failures. How do you get the most value from it with the available resources, on a timely basis?
Start with the customer in mind. What would the customer perceive as acceptable, or even delightful, performance? Consider the following as a starting point.
Perceived Performance Key Ingredients
- User stories. While it is not practical to test every possible end-user story, it is imperative to identify those the end user cares about most; they may well be quite different from the ones engineering or marketing expects. Consider these prime suspects: the most frequently used stories, stories with specific timing (for example, a morning checkup), stories that involve complex processing but must be very reliable (for example, pulling data from heterogeneous sources), and high-visibility stories (for example, an executive report). Testing stories that are not top of mind for the end user means investing massive resources for nothing. Not a good place to be.
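As an illustrative sketch only, the selection criteria above can be captured as data and used to rank candidate stories for test coverage. The story names, usage numbers, and weights below are hypothetical:

```python
# Hypothetical sketch: candidate user stories tagged with the criteria
# discussed above (usage frequency, specific timing, high visibility).
candidate_stories = [
    {"name": "morning checkup", "daily_uses": 5000,
     "time_sensitive": True, "high_visibility": False},
    {"name": "executive report", "daily_uses": 50,
     "time_sensitive": False, "high_visibility": True},
    {"name": "multi-source data pull", "daily_uses": 800,
     "time_sensitive": False, "high_visibility": False},
]

def priority(story):
    # Start from raw usage, then boost stories with timing or visibility
    # constraints; the multipliers are arbitrary placeholders.
    score = story["daily_uses"]
    if story["time_sensitive"]:
        score *= 2
    if story["high_visibility"]:
        score *= 10
    return score

# Highest-priority stories first; these get performance coverage first.
ranked = sorted(candidate_stories, key=priority, reverse=True)
```

Even a crude ranking like this forces the conversation with the end user about which stories actually matter, which is the real point of the exercise.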
- Responsive UI. What matters is how fast the end user can interact with the application. While getting the data itself to the end user quickly is harder, there is no reason the user should stare at a blank screen while the data is retrieved. Even if the data is not yet available for consumption, the UI can start rendering right after the user's action. Having the UI rendered is useful: the user gets ready to consume the data that will, in turn, drive their decisions about how to interact with the application. Getting the data is the next step.
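The render-first, fill-in-later pattern can be sketched as follows. This is a minimal illustration, not a UI framework; `fetch_data` stands in for any slow back-end call:

```python
import asyncio

async def fetch_data():
    # Placeholder for a slow back-end call (database, service, etc.).
    await asyncio.sleep(0.1)
    return {"rows": [1, 2, 3]}

async def handle_user_action(events):
    # Render the UI shell immediately so the user never stares
    # at a blank screen while the data is on its way.
    events.append("render skeleton")
    data = await fetch_data()                           # data arrives later...
    events.append(f"render {len(data['rows'])} rows")   # ...and fills the shell

events = []
asyncio.run(handle_user_action(events))
# events == ["render skeleton", "render 3 rows"]
```

The key property to test for is exactly this ordering: the skeleton render must not wait on data retrieval.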
- Data rendering. Presenting the actual data to the end user in an easy-to-consume way is the ultimate goal. One of the key exercises here is to identify what kind of data would be most representative and what volumes of data are anticipated. A few key risks can render the performance test effort worthless: the data is retrieved inefficiently; the data retrieved is irrelevant to the user stories; the data volumes are not representative. To mitigate these risks, it is useful to conduct performance modeling that clearly states, for each user story in question, what type of data is most representative, what the anticipated target data volumes are, and what methods are implemented to retrieve the data and bring it to the UI.
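A performance model in this sense can be as simple as a per-story record with the three ingredients named above. The stories, volumes, and retrieval methods here are hypothetical placeholders:

```python
# Hedged sketch of a performance model: for each user story, record the
# representative data, the target data volume, and the retrieval method.
performance_model = {
    "morning checkup": {
        "representative_data": "last 24h of account activity",
        "target_volume_rows": 10_000,
        "retrieval_method": "single indexed query, paged to the UI",
    },
    "executive report": {
        "representative_data": "quarterly aggregates across regions",
        "target_volume_rows": 500_000,
        "retrieval_method": "pre-aggregated materialized view",
    },
}

def validate(model):
    # Every story must state all three ingredients before test design starts.
    required = {"representative_data", "target_volume_rows", "retrieval_method"}
    return all(required <= set(entry) for entry in model.values())
```

Validating the model for completeness before designing tests directly mitigates the three risks listed above: any story missing its data type, volume, or retrieval method is caught before effort is spent on it.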
- Resilience to high load. Assuming the application is designed for multiple users, it is important to consider the impact of a high volume of active users on perceived performance. The performance model should take this goal into account, and the test design and implementation should follow it. Performance test success criteria should be validated with response time measured as a function of user load. Normally, response time grows with user load and at some point crosses the threshold considered a broken goal or a broken SLA. The accuracy of the performance test depends on how well the simulated active user is designed: the design should mimic the key user stories, account for the relevant data and data volumes sent and received, and reflect the end user's pace of navigation.
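Treating response time as a function of user load can be sketched like this. The load curve and the 2.0-second SLA below are made-up numbers for illustration; real values would come from successive test runs at increasing load:

```python
# Hypothetical SLA: p95 response time must stay at or below 2.0 seconds.
SLA_SECONDS = 2.0

# (active users, observed p95 response time in seconds) per test run.
load_curve = [(10, 0.4), (50, 0.7), (100, 1.1), (200, 1.8), (400, 3.5)]

def max_users_within_sla(curve, sla):
    # Highest tested load at which response time still meets the SLA;
    # beyond this point the goal is considered broken.
    within = [users for users, rt in curve if rt <= sla]
    return max(within) if within else 0

# max_users_within_sla(load_curve, SLA_SECONDS) -> 200
```

Plotting or tabulating the curve this way makes the break point explicit: here the hypothetical application holds its SLA at 200 users but breaks it by 400.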
- Resource utilization. Consider a situation where the performance test takes into account all the aspects above: relevant user stories, realistic active-user design, relevant data, and user volumes. Furthermore, running the test produces the desired results, such as response time under load that satisfies the original goal. To complete the picture, the last ingredient is resource utilization, at a minimum CPU and memory utilization. The risk to take into account is resource starvation. For example, if CPU is utilized at peak levels in a sustained manner, chances are the underlying OS will decide that the process consuming the resource, the application, is not healthy and will kill it, resulting in failures and dropped users. To mitigate this risk, clear resource utilization goals must be set, and if they are crossed, the test should be marked as failed.
Questions to consider
Ask the following questions to assess whether your performance efforts are driven by the customer's perceived performance.
- Are the key user stories identified?
- Is UI responsiveness considered?
- Is a performance model available?
- Does the performance model call out for each user story what request/response data is representative, what target data volumes are, and what are the methods to retrieve the data and bring it to the UI?
- Are clear goals set for response time, number of active users, and acceptable levels of resource utilization?
- Is the simulated active user designed to mimic a real user?
- Are resource utilization goals set?