Wouldn’t you want to know about problems earlier rather than later? How often should one perform performance testing? Is it once a successful build is available? Is the availability of a good build a trigger to run a performance test? What other constraints and limitations should be taken into account when committing to a timeline? Let’s break it down into key parts.
- Good Build. Normally a build is considered good when it compiles without errors and passes basic functional tests. That may be sufficient for a monolithic application, but it is far from sufficient for a highly distributed system. From a performance perspective, a good build is one that has been tested for successful deployment on the target environment and smoke tested for basic functionality on that environment. Failure to meet these criteria forces the performance team to spend time hitting setup failures and functional bugs instead of running actual performance tests. As a result, it severely affects performance test cadence, let alone the ability to plan it.
- Performance Engineers. End-to-end performance testing normally requires a skilled performance engineer to build the automation in the first place. In highly distributed, complex systems, the performance engineer also has to maintain the test harness from build to build, due to changing UIs and APIs, data refreshes, and even prerequisite and infrastructure configuration changes; it is sometimes impractical to automate all of this. The number of available performance engineers who can make these adjustments directly affects the cadence of performance testing, multiplied by the number of variations in the test matrix: different hardware, different application scenarios, different data sets, and so on.
- Lab resources. Lab resources are another constraint that limits the cadence of performance testing. Having enough engineers but lacking the required hardware or virtual machines poses a challenge. Maintain an inventory of all machines against the footprint of the target environment. For example, if the target environment needs 5 machines of type X and 2 of type Y, know how many sets of 5 X’s and 2 Y’s you can accommodate at once.
- Deadlines. Performance testing is part of the overall delivery process. Have a clear view of the overall cadence and its key milestones: shiproom meeting cadence, sprint cadence, code complete, RTM, etc. On the other hand, have a clear understanding of what it takes to deploy, test, and analyze the failures of each performance test run. Mapping one to the other should give you a clear picture of your ability to deliver performance results within the overall timeframe.
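To make the lab-resource and deadline math above concrete, here is a minimal sketch that estimates how many complete target environments the lab can host at once, and how many full performance runs fit in a sprint. All inventory counts and durations are hypothetical examples, not prescriptions:

```python
# Sketch: estimate concurrent environments and runs per sprint.
# All numbers below are hypothetical illustrations.

# Machines one target environment needs (the 5X + 2Y example above).
env_needs = {"X": 5, "Y": 2}

# Machines currently available in the lab (hypothetical counts).
lab_inventory = {"X": 17, "Y": 6}

# How many complete environments can the lab host at once?
# The scarcest machine type is the limiting factor.
concurrent_envs = min(lab_inventory[t] // n for t, n in env_needs.items())

# Days to deploy, run, and analyze one performance test (hypothetical).
days_per_run = 3
sprint_length_days = 14

# Full runs that fit in one sprint across all concurrent environments.
runs_per_sprint = (sprint_length_days // days_per_run) * concurrent_envs

print(f"Concurrent environments: {concurrent_envs}")
print(f"Runs per sprint: {runs_per_sprint}")
```

Comparing `runs_per_sprint` against the size of the test matrix (hardware configurations × scenarios × data sets) shows quickly whether the committed cadence is realistic or whether you need more lab capacity, more engineers, or a smaller matrix.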
Questions to consider
Consider the following questions to validate your ability to commit to a predictable performance test cadence:
- Is the good build tested for successful deployment on the target environment?
- Is the good build functionally smoke tested on the target environment?
- Is the performance test matrix clear to all involved parties and approved by the stakeholders (hardware configurations, functional scenarios, data sets, etc.)?
- Are there enough skilled performance engineers available to cover the performance test matrix?
- Does the lab inventory allow multiple target environment runs at once?
- What does it take to run a performance test (internal deliverables and timeline breakdown)?
- What’s the overall development cadence and its milestones?
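The first two questions lend themselves to automation: before triggering a performance run, gate on deployment and smoke-test status for the candidate build. A minimal sketch follows; the two check functions are hypothetical placeholders for whatever status queries your CI/CD system actually exposes:

```python
# Sketch of a "good build" gate for performance runs.
# deployed_ok and smoke_tests_passed are hypothetical stand-ins;
# replace their bodies with real queries against your CI/CD system.

def deployed_ok(build_id: str) -> bool:
    """Did this build deploy successfully to the target environment?"""
    return True  # placeholder for a real deployment-status check

def smoke_tests_passed(build_id: str) -> bool:
    """Did the functional smoke tests pass on the target environment?"""
    return True  # placeholder for a real test-results check

def good_build_for_perf(build_id: str) -> bool:
    # A build qualifies for performance testing only if it deployed
    # cleanly AND passed smoke tests on the target environment.
    return deployed_ok(build_id) and smoke_tests_passed(build_id)

if good_build_for_perf("build-1234"):
    print("Trigger performance test run")
else:
    print("Skip: build is not performance-ready")
```

Wiring a gate like this into the pipeline keeps the performance team from burning cycles on builds that were never going to produce usable results.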