Saturday, January 11, 2014

Regression: A Test By Any Other Name ...

I recently posted a blog on how context influenced my analysis of a book I had just read: http://www.softwaretestingclub.com/profiles/blogs/context-and-nothing . In the post, I referred to a discussion of context I had read a long time ago in a book on web services. (I think it was titled "Web Services", but weren't they all?) At that time, the state of web services was immature to say the least: many implementations were essentially the same old stove-piping repackaged in a new wrapper. The author referred to these as "the Legacy" and went on to identify the other contexts he would discuss in the book:

  • The Legacy refers to the way things were done in the past. ("We used to ...")
  • The Now refers to the way things are currently done. ("We are ...")
  • The Future refers to the way things may be done in the future. ("We (eventually) plan to ...")
  • The Ideal refers to the ultimate concept of how it should be (ideally) done. ("We should ...")
The author made the point that mixing these contexts in the same discussion can lead to misunderstanding and confusion. I have found this distinction to be a good way to analyze arguments both online and offline. When someone starts switching contexts in the middle of an argument, it is time to call them out on it before proceeding.

In that post, I associated software regression testing with the Legacy and test planning with the Future. After thinking about it, I decided that test planning actually belongs to the Now, with the reasoning that anything that can be conceived of Now belongs to the Now. I added a comment to the post that the Future belongs in that area of mind maps and test plans that should be labeled "Beyond Here Lies Dragons", to borrow a 16th century map legend.

Then I looked at regression testing. Does regression testing belong in the Now? It is certainly part of test planning, but what exactly differentiates it from "normal" testing? Why even use the term if it is simply "testing"?

There are probably twice as many definitions of regression testing as there are testers in the world. As a result, I will be talking about what I have referred to as regression testing on previous software efforts. Specifically, I define regression testing as having the following characteristics:

  1. It consists of test procedures that have previously been identified as useful for automated checks. These were once tests that were later converted to checks for specific procedural paths, what I like to call "trip wires".
  2. These automated checks have been incorporated into one or more suites of automated checks that are run on a regular basis during feature development for the application under test.
  3. The automated checks are run "as is" with no metrics associated with them. In other words, a successful run of the automated checks is not used to determine the relative maturity of the current feature development. (Thus, the association as "trip wires".)
  4. An unsuccessful run of the automated checks is investigated to determine the reason for the failure. Once the cause is determined, one of the following actions is performed (see the sketch below the list):
    • The check is re-run to determine if the failure is intermittent.
    • The check is modified to make it more robust or to change the procedure to reflect recent (approved) changes in the software.
    • The check is retired as no longer useful in the context of the current software business rules. This is done when the changes would essentially create a new check. Instead, the new check is created and the old check is dumped.
  5. The population of automated checks is controlled to retain only the most useful checks, and it is culled sufficiently to support maintainability and run-time requirements (in the case of GUI automation checks).
Note that even though each individual script or automated procedure is technically a "check", the interactive investigation of check failures described in characteristic #4 makes the overall process a "test". As a result, I refer to them as "checks" or as "tests" depending on the context of the discussion I am having, sometimes using both terms in the same discussion.
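To make the "trip wire" idea a bit more concrete, here is a minimal sketch in Python of how such a suite might be run and triaged. Everything in it is hypothetical and only for illustration: the check names, the Triage enum, and the triage stub are inventions of this example, not artifacts from any real project. The sketch follows characteristics #1 through #4: checks are run "as is", a passing run yields no metrics beyond "nothing tripped", and a failed check is investigated and either re-run, flagged for modification, or flagged for retirement.

```python
from enum import Enum, auto


class Triage(Enum):
    """Possible outcomes of investigating a tripped wire (characteristic #4)."""
    RERUN = auto()     # suspect an intermittent failure; run it again
    MODIFY = auto()    # procedure changed legitimately; update the check
    RETIRE = auto()    # business rules changed; drop the check, write a new one


def check_login_page_loads():
    """Hypothetical trip wire: the legacy login path still behaves as before."""
    return True  # stand-in for a real GUI or API interaction


def check_report_totals_match():
    """Hypothetical trip wire: legacy report totals are unchanged."""
    return True


REGRESSION_SUITE = [check_login_page_loads, check_report_totals_match]


def run_suite(suite):
    """Run every check 'as is'. No maturity metrics -- only tripped wires."""
    tripped = []
    for check in suite:
        try:
            passed = check()
        except Exception:
            passed = False
        if not passed:
            tripped.append(check.__name__)
    return tripped


def triage(check_name):
    """Placeholder for the human investigation of a tripped wire."""
    # In practice this is a tester deciding among the three actions above.
    print(f"Investigating {check_name} ...")
    return Triage.RERUN


if __name__ == "__main__":
    failures = run_suite(REGRESSION_SUITE)
    if not failures:
        print("No trip wires tripped.")
    for name in failures:
        action = triage(name)
        print(f"{name}: {action.name}")
```

The point of the sketch is characteristic #3: a clean run tells you only that nothing tripped, not how mature the current build is; the interesting work happens in the human triage step, which is what makes the overall activity a test rather than a bare check.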

Finally, the nature of the automated checks as a retention of legacy procedures in the current environment firmly places them in the context of the Legacy. Essentially, they answer the question "Does the current application build act the same as the legacy version in the context of the legacy test procedures?" Their use in the test planning of the Now makes it necessary to separate them from other tests based on this Legacy context.