Tuesday 24 April 2012

Mechanics of writing test cases


i. Analyzing requirements
To write a good test case, a tester first needs to understand the requirement: the context in which it is described, what needs to be tested and how, and what the expected result should be.

ii. Writing test cases (test designing)
A test case is developed from high-level scenarios, which are in turn derived from the requirements. So every requirement must have at least one test case, and that test case needs to concentrate wholly on its requirement.
For example, take yahoomail.com. Suppose the requirement says that the username can accept alphanumeric characters. Test cases must then be written to check the different combinations: only alphabets, only numerics, and alphanumeric characters. The test data you give differs for each combination. We could write any number of such test cases, but optimizing them is important: decide exactly which test cases are needed and which are not. A small automated sketch of this example follows.
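Here is a minimal sketch of how those combinations might be automated, assuming a hypothetical is_valid_username function that implements the alphanumeric rule (the function and the data are illustrative, not Yahoo's actual validation):

import pytest

# Hypothetical validator for illustration: accepts only alphanumeric usernames.
def is_valid_username(username):
    return username.isalnum()

# Each tuple is one test case: (test data, expected result).
@pytest.mark.parametrize("username, expected", [
    ("tester", True),        # only alphabets
    ("12345", True),         # only numerics
    ("tester123", True),     # alphanumeric
    ("tester 123", False),   # space is not alphanumeric
    ("tester@123", False),   # special character
    ("", False),             # empty input
])
def test_username_accepts_alphanumeric(username, expected):
    assert is_valid_username(username) == expected

Parametrizing like this keeps the test cases optimized: one test function covers many data combinations, and adding or dropping a combination is a one-line change.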
iii. Executing test cases (test execution)
Once all the test cases are written, they need to be executed. Execution starts only after the testing team receives the build from development. The build is simply the new code that has been developed as per the project requirements, and it is tested thoroughly by executing all combinations of these test cases. Don't fall into the myth that we write test cases only after development is ready with the build; development and testing have to go on in parallel. Remember, test design is done purely on the available, valid documentation. While executing test cases, there is always a possibility that the actual result differs from the expected result. In that case, it is a defect/bug. A defect is raised against the development team and needs to be resolved as soon as possible, based on the schedule of the project.
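The heart of execution is the comparison between expected and actual results. A minimal sketch, assuming a hypothetical execute_test_case helper and made-up test IDs, could look like this:

def execute_test_case(test_id, steps, expected):
    """Run the steps and compare the actual result with the expected result."""
    actual = steps()                      # perform the logical sequence of steps
    if actual == expected:
        return {"id": test_id, "status": "Pass", "actual": actual}
    # Actual result differs from expected result: this is a defect to be raised.
    return {"id": test_id, "status": "Failed", "actual": actual,
            "note": f"Raise a defect against development for {test_id}"}

# Illustrative run: the step is a stand-in for real UI or API actions.
result = execute_test_case(
    test_id="TC_LOGIN_01",
    steps=lambda: "Welcome, tester",
    expected="Welcome, tester",
)
print(result["status"])                   # Pass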

A test case is identified by an ID number and prioritized. Each test case has the following criteria (a minimal code sketch of such a record follows the list):
·         Purpose - Reason for the test case
·         Steps - A logical sequence of steps the tester must follow to execute the test case
·         Expected Results - The expected result of the test case
·         Actual Result - What actually happened when the test case was executed
·         Status - Identifies whether the test case passed, failed, was blocked, or was skipped:
    ·         Pass - Actual result matched the expected result
    ·         Failed - A bug was discovered that represents a failure of the feature
    ·         Blocked - The tester could not execute the test case because of a bug
    ·         Skipped - The test case was not executed this round
·         Bug ID - If the test case failed, the ID number of the resulting bug
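
To make these criteria concrete, here is a minimal sketch of a test case record in Python; the field names simply mirror the list above, and the types are an assumption, not a standard:

from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Status(Enum):
    PASS = "Pass"        # actual result matched the expected result
    FAILED = "Failed"    # a bug was discovered
    BLOCKED = "Blocked"  # could not be executed because of a bug
    SKIPPED = "Skipped"  # not executed this round

@dataclass
class TestCase:
    test_id: str                      # ID number used to identify the test case
    priority: int                     # used to order execution
    purpose: str                      # reason for the test case
    steps: List[str]                  # logical sequence of steps to execute
    expected_result: str
    actual_result: Optional[str] = None   # filled in during execution
    status: Optional[Status] = None
    bug_id: Optional[str] = None           # only when the test case failed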
