Saturday, October 8, 2011

Spinning Down

Ok, so we just got our latest release out the door - it's time to relax, right? Working as the single tester on a small product test team has some advantages, but at the end of the day the buck stops here. Stress leading up to a major release is manageable, but what do you do after the release is out the door?

That adrenaline keeps pumping for a while and it takes time to spin down. Here are some things I do to help manage the days after a release:

• Don't arrange a family commitment for the weekend after a scheduled release. No matter how well you prepare, the schedule may slip. That is one pressure you can do without.

• If your spouse asks how the release is going, the answer is "Fine" - no matter what. You aren't lying, just reassuring. Being asked about stress at work is also one pressure you can do without.

• Plan on spinning down gradually. It is out the door, but you could still have last-minute install problems crop up.

• Keep the schedule sane in the week leading up to and following a release. Not getting enough sleep means making mistakes.

• Don't make formal arrangements for a "relaxing" activity. Step back into your normal routine. Plan nothing and let it happen.

• Don't try to force yourself back into a "normal" sleep pattern. Starting a week or so before a release, my body tends to wake up at 2:00 AM no matter what. That continues for a while after a release and I just don't worry about it too much.

• Mental and physical go hand in hand during stressful periods. With me, it's stress, coffee, and spicy food - pick any two out of three. Around a week or so before and after a release, it's usually time to go cold turkey on the caffeine.

Oh, last one ... write a blog. Sharing with others online is a good way to cut the stress. Hey, it really does help!

Tuesday, August 23, 2011

Using Automated Scripts for Test Workflow Automation

Test automation covers a broad spectrum, from large, traditional test management suites down to simple text editors. This discussion focuses on using scripted automation tools to support and improve the overall test workflow. That support can be provided at various interfaces: the database, development-environment module interfaces, or the GUI, to give a few examples.

Traditionally, scripted automation has been used to run checks to verify established application functionality (e.g. for regression checks). An alternative usage for automated scripts is to assist in executing the overall test workflow. Two methods of accomplishing this are presented here:
  • Performing smoke tests
  • Creating complex configurations to support test sessions

Smoke Tests
Smoke tests are a special type of test that does not belong in the category of regression testing. Regression tests are intended to thoroughly verify functions at a broad spectrum of interface points. The smoke test is intended to perform a specific function: to provide a minimum gateway for allowing development builds into the QA environment.

The smoke test performs a quick check of the overall application "happy paths" to identify major functional failures. Here a "major" failure is defined as one that prevents the testing of a significant portion of the application. Unlike regression checks, the smoke test is not intended to thoroughly verify any particular function. In fact, if properly designed, it should not be susceptible to minor failures at all. Instead, it should interact with a minimum of interface objects to limit the likelihood of a minor failure.

In addition, the time box limitation of a smoke test puts an emphasis on getting "the biggest bang for the buck". The smoke test should be continually tweaked to include as many major functions as possible and still complete within the designed time limit (typically 30 minutes to 1 hour), especially for GUI test automation.
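To make this concrete, here is a minimal sketch of what a time-boxed smoke script could look like. It is in Python purely for illustration (my own tooling is different), and the individual checks are hypothetical placeholders for your application's actual happy paths:

    # A time-boxed smoke test: a handful of happy-path checks, where any
    # failure is by definition "major" - the build stays out of QA.
    import sys
    import time

    TIME_LIMIT_SECONDS = 30 * 60  # the designed time box: 30 minutes

    def check_login():
        # Placeholder: drive the real login happy path here.
        return True

    def check_open_record():
        # Placeholder: open one representative record, nothing more.
        return True

    def check_save_and_logout():
        # Placeholder: save and log out cleanly.
        return True

    HAPPY_PATH_CHECKS = [check_login, check_open_record, check_save_and_logout]

    def run_smoke():
        start = time.monotonic()
        for check in HAPPY_PATH_CHECKS:
            if time.monotonic() - start > TIME_LIMIT_SECONDS:
                print("Smoke test exceeded its time box; trim the check list.")
                return False
            if not check():
                print("Smoke check failed: " + check.__name__)
                return False
        return True

    if __name__ == "__main__":
        sys.exit(0 if run_smoke() else 1)

Note that the script interacts with as few interface objects as possible; the tweaking work is deciding which major functions earn a place in HAPPY_PATH_CHECKS while staying inside the time box.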

Creating Complex Configurations to Support Test Sessions
When considered as a workflow framework, scripted automation takes on the role of performing a set of tasks rather than verifying the functionality of the application. The size and complexity of the automated scripts can be critical, because the chance of a critical stoppage compounds with every step a script performs. For example, a critical stoppage early in a single large script stalls the entire flow, while dividing the overall test workflow into ten separate scripts may limit the impact of a stoppage in any one of them to 10% of the overall testing.
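That isolation is easy to get with separate processes. Here is a sketch of the idea, with made-up script names: each setup script runs as its own subprocess, so a critical stoppage in one script leaves the other nine areas testable:

    # Run each setup script in isolation; a stoppage blocks only its own area.
    import subprocess

    SETUP_SCRIPTS = ["setup_area_%02d.py" % n for n in range(1, 11)]

    def run_all(scripts):
        blocked = []
        for script in scripts:
            result = subprocess.run(["python", script])
            if result.returncode != 0:
                # Only the test area served by this script is blocked.
                blocked.append(script)
        return blocked

    if __name__ == "__main__":
        for script in run_all(SETUP_SCRIPTS):
            print("Area blocked, needs manual setup: " + script)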

The size and placement of the scripts in the overall test process should balance usability, run time, maintenance, and the frequency of script stoppages. In general, scripted automation should be targeted at tests that have very complicated setup procedures or involve a large number of redundant setup steps that would be a heavy burden on the tester if performed manually.
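One of the best candidates I know of is database seeding: creating the dozens or hundreds of near-identical records a test session needs. A sketch, using SQLite with made-up table and field names:

    # Seed a session database with the redundant records a tester would
    # otherwise have to create by hand, one at a time.
    import sqlite3

    def seed_orders(db_path, count=200):
        conn = sqlite3.connect(db_path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders "
            "(id INTEGER PRIMARY KEY, customer TEXT, status TEXT)"
        )
        conn.executemany(
            "INSERT INTO orders (customer, status) VALUES (?, ?)",
            [("customer_%03d" % n, "pending") for n in range(count)],
        )
        conn.commit()
        conn.close()

    seed_orders("test_session.db")

Two hundred setup steps done in seconds instead of hours, and the session starts from a known configuration every time.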

These are just two examples of using scripted automation to support the test workflow. If implemented properly, the injection of small, well-designed scripts into the test process can provide a significant improvement in overall test quality.

Sunday, August 21, 2011

Scripted Automation as a Magic Eight Ball

First, a little background. In American billiards, "Eight Ball" is a game where the main goal is to sink the number 8 ball after you have sunk all of your others (either stripes or solids). However, if you "scratch" (sink the cue ball) while trying to sink the eight ball, you lose the game. As a result, the game's outcome is always in doubt.

A brilliant person once had the idea of creating a "Magic Eight Ball". This is a toy that looks like an eight ball but has a flat side with a window in it. Inside is a fluid containing a multi-sided object with a phrase written on each side. You ask the Magic Eight Ball a question, shake it, then turn it over and read the displayed answer, which is always vague, like "It is possible" or "Who knows?".

So, how does this relate to scripted automation? A major problem with the larger test tool suites is that they are set up to relate script outcomes directly to requirements on a pass/fail basis. This leads to automated reports that state coverage in terms of requirements passed, and so on. That is not only misleading, but dangerously so. It inevitably leads to a false sense of security or panic that degrades the credibility of the test team.

Scripted test tools do not test or verify functional requirements. Instead, they check specific parameters at specific interface points. When I report scripted results, I only report a "negative" result and always state what was expected and what was measured. That allows me to quickly validate the test failure before entering a bug.
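Here is roughly what that reporting style looks like in practice. A sketch in Python with a made-up check name; the point is that only the negative result is reported, and always with the expected and measured values side by side:

    # Report only negative results, with expected vs. measured values,
    # so the failure can be validated quickly before entering a bug.
    def report_check(name, expected, measured):
        if measured != expected:
            print("NEGATIVE: %s: expected %r, measured %r"
                  % (name, expected, measured))
            return False
        # A quiet pass; no claim that a requirement "passed".
        return True

    report_check("voltage_at_pin_7", expected=5.0, measured=4.2)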

If you have to associate tests directly to functional requirements, you can't say a test "passed", and you definitely can't say it "failed" without verification. Instead, it would be better to phrase the result in less absolute terms. For passing tests, you could report that "The outcome is unclear", and for failing tests, "The signs are troubling". That way, no one jumps to conclusions, and you end up performing exploratory test sessions to provide more specific outcomes. Add a little randomization and ... voila! The Magic Eight Ball test automation suite: the simplicity of direct association to requirements combined with just enough uncertainty to ensure thorough testing.
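In the spirit of the joke, the whole product fits in a few lines of Python (purely for fun; the verdicts are straight from the toy):

    # The Magic Eight Ball verdict engine: never a "pass", never a "fail".
    import random

    UNCLEAR = ["The outcome is unclear", "It is possible", "Ask again later"]
    TROUBLING = ["The signs are troubling", "Who knows?", "Outlook not so good"]

    def eight_ball_verdict(check_passed):
        return random.choice(UNCLEAR if check_passed else TROUBLING)

    print(eight_ball_verdict(check_passed=False))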

I wonder if I could add a tarot card interface for projected test completion dates? Hmmm ...

What this Blog is About

I am a tester at heart and have switched careers in the past to whatever field was willing to pay me to do what I love doing.

As a software tester, I am learning the software testing profession by scouring the web for techniques and applying them to my job on a daily basis. One of my firm beliefs is that to learn, you have to teach. Web forums provide an excellent way to do that.

Currently, I am a hardware tester learning the intricacies of the National Instruments TestStand test management framework interfacing with LabWindows/CVI.

This blog is intended to give me a forum for airing ideas without overrunning or dominating the posts on those community web sites.

Enjoy!