April 11, 2014

Performance Test Cadence

Wouldn't you want to know about problems earlier rather than later? How often should one perform performance testing? Is it once a successful build is available? Is the availability of a good build the trigger to run a performance test? What other constraints and limitations should be taken into account when committing to a timeline? Let's break it down into the key parts.

Moving parts

  • Good Build. Normally a build is considered good when it compiles without errors and passes basic functional tests. That may be sufficient for a monolithic application, but it is far from sufficient for a highly distributed system. From a performance perspective, a good build is one that has been tested for successful deployment on the target environment and smoke tested for basic functionality on that environment. Failure to meet these criteria forces the performance team to spend time hitting setup failures and functional bugs instead of running actual performance tests, which severely affects performance test cadence, let alone the ability to plan it.
  • Performance Engineers. End-to-end performance testing normally requires a skilled performance engineer to build the automation in the first place. In highly distributed, complex systems it also requires the performance engineer to maintain the test harness from build to build, due to changing UI and APIs, data refreshes, and even prerequisite and infrastructure configuration refreshes – things that are sometimes impractical to automate entirely. The number of available performance engineers who can make all these adjustments, set against the number of variations in the test matrix – say, different hardware, different application scenarios, different data sets, etc. – directly determines the cadence of performance testing.
  • Lab resources. Lab resources are another constraint that limits the cadence of performance testing. Having enough engineers but lacking the required hardware or virtual machines poses a challenge. Maintain an inventory of all machines against the backdrop of the target environment. For example, if the target environment needs 5 machines of type X and 2 of type Y, have a handle on how many such environments you can accommodate at once (see the sketch after this list).
  • Deadlines. Performance testing is part of the overall delivery process. Have a clear view of the overall cadence and its key milestones – shiproom meeting cadence, sprint cadence, code complete dates, RTM, etc. On the other hand, have a clear understanding of what it takes to deploy, test, and analyze the failures of each performance test run. Map one to the other; that should give you a clear picture of your ability to deliver performance results within the overall timeframe.
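
Back-of-the-envelope math on the last two points helps a lot. Below is a minimal sketch in Python; the machine types, counts, durations, and matrix size are made-up numbers purely for illustration, so plug in your own inventory, test matrix, and milestone dates:

    # Hypothetical numbers for illustration only.

    # Machines the target environment needs per run, and machines available in the lab.
    env_requirements = {"X": 5, "Y": 2}
    lab_inventory = {"X": 23, "Y": 7}

    # How many complete target environments can run at the same time?
    # Limited by the scarcest machine type.
    concurrent_envs = min(lab_inventory[t] // need for t, need in env_requirements.items())

    # Rough cost of one end-to-end performance run, in days:
    # deploy + smoke test, run the test, analyze failures and results.
    days_per_run = 0.5 + 1 + 1.5

    # Test matrix: hardware configurations x scenarios x data sets.
    matrix_size = 2 * 3 * 2  # 12 distinct runs

    # Runs that can land before the next milestone (say, a 15-working-day sprint),
    # assuming runs on separate environments proceed in parallel.
    working_days_to_milestone = 15
    runs_possible = int(working_days_to_milestone / days_per_run) * concurrent_envs

    full_matrix_passes = runs_possible // matrix_size
    print(f"Concurrent environments: {concurrent_envs}")
    print(f"Runs possible before the milestone: {runs_possible}")
    print(f"Full test-matrix passes before the milestone: {full_matrix_passes}")

Whether the answer is encouraging or not, writing it down turns the cadence commitment into a calculation instead of a guess.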

Questions to consider

Consider the following questions to validate your ability to commit to a predictable performance test cadence:

  • Is the good build tested for successful deployment on the target environment?
  • Is the good build functionally smoke tested on the target environment?
  • Is the performance test matrix clear to all involved parties and approved by the stakeholders (hardware configurations, functional scenarios, data sets, etc.)?
  • Are there enough skilled performance engineers available to cover the performance test matrix?
  • Does the lab inventory allow multiple target environment runs at once?
  • What does it take to run a performance test – internal deliverables and the timeline breakdown?
  • What's the overall development cadence and its milestones?

About the Author:

This blog is dedicated to sharing simple practices that get me results.

2 Comments on "Performance Test Cadence"


  1. Lew Sauder says:

    Good article Alik. Too often, organizations consider performance in their initial deployment (at best) and then never address it again until it’s a problem. That is too late. A proactive approach as you highlight here is money and time well invested.

  2. alik levin says:

    Lew, thank you.
    For the last decade I had the privilege of a front-row seat, watching how perf hurts a business directly when it is not addressed early on. I watched the agony of throwing more hardware and/or more consultants at perf issues only to see more money spent for nothing.
    There is way more to perf than this tiny piece, of course, and I have a backlog of more posts on the way.
