Unit III: Test Design and Execution
The objectives of the testing are the reasons or purpose of the testing and the
object of the testing is the work product to be tested.
Testing objectives can differ depending on several factors:
Evaluate work products: The objectives are used to assess work products such as the
requirements document, design, and user stories. These should be reviewed and confirmed
before a developer picks them up for development. Identifying any ambiguous or
contradictory requirements at this stage saves a significant amount of development and
testing time.
Verify requirements: This objective demonstrates that one of the most important
aspects of testing is meeting the needs of the client. Testers examine the
product and ensure that all of the stipulated requirements are met. Designing test
cases for every requirement, independent of the testing technique, ensures that
functionality is confirmed for every executed test case.
Validate test objects: Testing ensures that requirements are implemented as well as
that they function as expected by users. This type of testing is known as validation. It
is the process of testing a product after it has been developed. Validation can be done
manually or automatically.
Build confidence: One of the most important goals of software testing is to improve
software quality. A lower number of flaws is associated with high-quality software.
Reduce risk: The probability of loss is sometimes referred to as risk. The goal of
software testing is to lower the likelihood of the risk occurring. Each software project
is unique and has a substantial number of unknowns from several viewpoints. If we
do not control these uncertainties, they will pose potential hazards not only during the
development phases but also during the product’s whole life cycle. As a result, the
major goal of software testing is to incorporate the risk management process as early
as possible in the development phase in order to identify any risks.
Find failures and defects: Another critical goal of software testing is to uncover all
flaws in a product. The basic goal of testing is to uncover as many flaws as possible
in a software product while confirming whether or not the application meets the
user’s needs. Defects should be found as early in the testing cycle as feasible.
3.2. TEST OBJECTIVE IDENTIFICATION
1. Analyze project requirements
2. Identify stakeholders and quality standards
3. Define test scope and focus
4. Formulate test criteria and metrics
5. Specify test outcomes and benefits
6. Document test objectives
1. Analyze project requirements
The first step in identifying test objectives is to analyze the project requirements and
understand what the software is supposed to do, how it will be used, and what the
functional and non-functional requirements are. You can use various sources of information,
such as user stories, use cases, specifications, design documents, and customer feedback, to
gather and document the requirements. You should also prioritize the requirements based on
their importance, complexity, and risk.
2. Identify stakeholders and quality standards
The next step is to identify the stakeholders and quality standards that are relevant for your
software testing process. Stakeholders are the people or groups who have an interest or
influence in the software, such as customers, users, developers, managers, regulators, and
testers. Quality standards are the guidelines and criteria that define the expected level of
quality and performance of the software, such as usability, reliability, security, compatibility,
and compliance. You should communicate with the stakeholders and review the quality
standards to understand their expectations, needs, and preferences.
Negative: With this factor, we check what the product is not supposed to do.
User Interface: In UI testing we check the user interfaces. For example, on a web page we
may check a button, verifying its size and shape. We can also check the
navigation links.
Usability : Usability testing measures the suitability of the software for its users, and is
directed at measuring the following factors with which specified users can achieve specified
goals in particular environments.
1. Effectiveness: The capability of the software product to enable users to achieve
specified goals with accuracy and completeness in a specified context of use.
2. Efficiency : The capability of the product to enable users to expend appropriate
amounts of resources in relation to the effectiveness achieved in a specified context of use.
Performance testing can serve various purposes. For example, it can demonstrate that the
system meets its performance criteria.
1. Load Testing: This is the simplest form of performance testing. A load test is usually
conducted to understand the behavior of the application under a specific expected load.
2. Stress Testing: Stress testing focuses on the ability of a system to handle loads beyond
maximum capacity. System performance should degrade slowly and predictably without
failure as stress levels are increased.
3. Volume Testing: Volume testing belongs to the group of non-functional tests.
Volume testing refers to testing a software application with a certain data volume. This
volume can, in generic terms, be the database size, or it could be the size of an interface
file that is the subject of volume testing.
Security : Process to determine that an Information System protects data and maintains
functionality as intended. The basic security concepts that need to be covered by security
testing are the following:
1. Confidentiality: A security measure which protects against the disclosure of
information to parties other than the intended recipient; this is by no means the only
way of ensuring confidentiality.
Integration : Integration testing is a logical extension of unit testing. In its simplest form, two
units that have already been tested are combined into a component and the interface between
them is tested.
When gathering requirements, the project team must communicate with stakeholders,
including the owner and end users. This interaction serves to determine their
expectations for specific functionality.
Starting these exchanges early, and repeating them often, guards against
any vagueness. It ensures the final product aligns with the end user's or
client's requirements and avoids forcing users to recalibrate their expectations.
For example, assume that you are planning to test a web shopping application. You are
presented with the following requirement: “Easy-to-use search for available inventory.”
Testing this requirement as written requires assumptions about what is meant by ambiguous
terms such as “easy-to-use” and “available inventory.” To make requirements more testable,
clarify ambiguous wording such as “fast,” “intuitive” or “user-friendly.” Requirements
shouldn’t contain implementation details such as “the search box will be located in the top
right corner of the screen,” but otherwise should be measurable and complete. Consider the
following example for a web shopping platform:
“When at least one matching item is found, display up to 20 matching inventory items, in a
grid or list and using the sort order according to the user preference settings.”
This requirement provides details that lead to the creation of tests for boundary cases, such as
no matching items, 1 or 2 matching items, and 19, 20 and 21 matching items. However, this
requirement describes more than one function. It would be better practice to separate it into
three separate requirements, as shown below:
When at least one matching item is found, display up to 20 matching inventory items
Display search results in a grid or list according to the user preference settings
Display search results in the sort order according to the user preference settings
The principle of one function per requirement increases agility. In theory, it would be
possible to release the search function itself in one sprint, with the addition of the ability to
choose a grid/list display or a sort order in subsequent sprints.
Testable requirements should not include the following:
Text that is irrelevant. Just as you can’t judge a book by the number of words, length
by itself is not a sign of a testable requirement. Remove anything that doesn’t add to
your understanding of the requirement.
A description of the problem rather than the function that solves it.
Implementation details. For implementation details such as font size, color, and
placement, consider creating a set of standards that apply to the entire project rather
than repeating the standards in each individual requirement.
Ambiguity. Specifications should be specific. Avoid subjective terms that can’t be
measured, such as “usually.” Replace these with objective, measurable terms such as
“80%.”
3.5.1. Five Techniques for Creating Testable Requirements
Documenting user requirements is always a challenging phase in software development, as
there are no standard processes or notations. However, communication and facilitation skills
can make this activity easier.
Here are five techniques that can be used for converting user stories into testable
requirements.
1. Mind maps
Mind mapping is a graphical technique of taking notes and visualizing thoughts using a
radiant structure. One of the core values of agile is interaction, so when the team is talking
about requirements, using mind maps for documentation can help capture the context of the
conversation.
The shapes, colors, and other properties of the map help participants in the conversation
remember the situation. A mind map can be a context-embedded memo, just like handwritten
story cards.
2. Process workflow
If user stories involve a workflow of some kind, the item can usually be broken into
individual steps. By dividing up a large user story, you can improve your understanding of
the functionality and your ability to estimate. It will also be easier for a product owner to
make decisions about priority.
Some workflow steps may not be important right now and can be moved to future sprints.
This will certainly limit the functionality of the application, but it does allow a team to review
the completed functionality at the end of the sprint, test it, and use the feedback to make
changes.
3. Brainstorming
Brainstorming, one of the most powerful techniques, is a team or individual creative activity
to find a solution to a problem. For example, teams can brainstorm about the various options
for platforms available to host the application under test.
4. Alternate flows
This technique is useful when there are many flows and it is hard to break down large user
stories based on functionality alone. In that case, it helps to ask how a piece of functionality
is going to be tested. Which scenarios have to be checked in order to know if the functionality
works?
Sometimes, test scenarios are complicated because of the work involved to set up the tests
and work through them. If a test scenario is not very common to begin with or does not
present a high enough risk, a product owner can decide to skip the functionality for the time
being and focus on test scenarios that deliver more value. In other cases, test scenarios can be
simplified to cover the most urgent issues.
5. Decision tables
User stories often involve a number of roles that perform parts of certain functionalities.
These groups, in turn, operate on certain sets of test data to determine the expected output of
a particular functionality. By breaking up that functionality into the roles that have to perform
specific requirements, we more clearly understand what functionality is needed and can more
accurately estimate the work involved.
The ways in which requirements are captured have a direct bearing on a project’s cost, time,
and quality. Implement these five approaches to ensure more effective requirements gathering
in your testing.
Example
Consider an example of an application that accepts a number as input, with a value
between 10 and 100, and finds its square. Now, using equivalence class testing, we can create
the following equivalence classes:
Equivalence Class: Explanation
Numbers 10 to 100: This class will include test data for a positive scenario.
Numbers 0 to 9: This class will include test data that is restricted by the application, since it
is designed to work with numbers 10 to 100 only.
Greater than 100: This class will again include test data that is restricted by the application,
but this time to test the upper limit.
Negative numbers: Since negative numbers can be treated in a different way, we create a
separate class for negative numbers in order to check the robustness of the application.
Alphabets: This class will be used to test the robustness of the application with non-numeric
characters.
Special characters: Just like the equivalence class for alphabets, we can have a separate
equivalence class for special characters.
Identification of Equivalence Classes
Cover all test data types for positive and negative test scenarios. We have to create
test data classes in such a way that covers all sets of test scenarios but at the same
time, there should not be any kind of redundancy.
If there is a possibility that the test data in a particular class can be treated differently
then it is better to split that equivalence class.
For example, in the above example, the application doesn’t work with numbers less
than 10. So, instead of creating one class for numbers less than 10, we created two
classes – numbers 0-9 and negative numbers. This is because there is a possibility that
the application may handle negative numbers differently.
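As a concrete illustration, here is a minimal Python sketch. The function square_app and the chosen representative values are illustrative assumptions, not from the source; the sketch simply picks one representative per equivalence class above and checks the expected outcome:

def square_app(value):
    """Hypothetical implementation: accepts 10-100 and returns the square,
    otherwise rejects the input."""
    try:
        number = int(value)
    except (TypeError, ValueError):
        return "error"          # alphabets / special characters
    if 10 <= number <= 100:
        return number ** 2      # valid range
    return "error"              # 0-9, greater than 100, negatives

# One representative value per equivalence class
representatives = {
    "numbers 10 to 100":  (50, 2500),
    "numbers 0 to 9":     (5, "error"),
    "greater than 100":   (150, "error"),
    "negative numbers":   (-7, "error"),
    "alphabets":          ("a", "error"),
    "special characters": ("@", "error"),
}

for cls, (test_input, expected) in representatives.items():
    actual = square_app(test_input)
    print(f"{cls:20} input={test_input!r:6} expected={expected!r:8} "
          f"{'PASS' if actual == expected else 'FAIL'}")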
3.9. Equivalence Partitioning (EP)
In the Equivalence Partitioning technique for testing, the entire range of input data is split
into separate partitions. All imaginable test cases are assessed and divided into logical sets of
data named classes. One random test value is selected from each class during test execution.
The notion behind this design technique is that a test case using a representative value from a
class is equivalent to a test using any other value from the same class. It allows us to
identify invalid as well as valid equivalence classes.
Let’s understand this technique for designing test cases with an example. Here, we will cover
the same example of validating the user age in the input form before registering. The test
conditions and expected behavior of the testing will remain the same as in the last example.
But now we will design our test cases based on the Equivalence Partitioning.
Test cases design Equivalence Partitioning:
To test the functionality of the user age from the input form (i.e., it must accept the age
between 18 to 59, both inclusive; otherwise, produce an error alert), we will first find all the
possible similar types of inputs to test and then place them into separate classes. In this case,
we can divide our test cases into three groups or classes:
Age < 18 – Invalid – (e.g. 1, 2, 3, 4, …, up to 17).
18 <= Age <= 59 – Valid – (e.g. 18, 19, 20, …, up to 59).
Age > 59 – Invalid – (e.g. 60, 61, 62, 63, …)
(Figure: user age validation input form)
These designed test cases are too many to run exhaustively, aren’t they? But here lies the
beauty of equivalence testing. We have infinite test cases to pick from, but we only need to
test one value from each class. This reduces the number of tests we need to perform while
keeping our test coverage high. So, we perform the tests for a definite number of values only;
the test value is picked randomly from each class, and we track the expected behavior for
each input.
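A minimal sketch of this idea in Python (the validator validate_age is a hypothetical implementation of the rule above, and the upper bound of the "Age > 59" partition is an assumption, since that partition is unbounded) picks one random value from each partition, which is all the equivalence-partitioning hypothesis requires:

import random

def validate_age(age):
    """Hypothetical form validator: ages 18-59 (inclusive) are accepted."""
    if 18 <= age <= 59:
        return "accepted"
    return "error alert"

partitions = {
    "Age < 18 (invalid)":      range(1, 18),
    "18 <= Age <= 59 (valid)": range(18, 60),
    "Age > 59 (invalid)":      range(60, 120),   # upper bound chosen arbitrarily
}
expected = ["error alert", "accepted", "error alert"]

for (name, values), outcome in zip(partitions.items(), expected):
    picked = random.choice(list(values))   # one value per class is enough
    result = validate_age(picked)
    print(f"{name:25} picked={picked:3} -> {result:12} "
          f"{'PASS' if result == outcome else 'FAIL'}")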
3.10. State Transition
State Transition Testing is a type of software testing which is performed to check the
change in the state of the application under varying input. The condition of input
passed is changed and the change in state is observed.
State Transition Testing is basically a black box testing technique that is carried out to
observe the behavior of the system or application for different input conditions passed
in a sequence. In this type of testing, both positive and negative input values are
provided and the behavior of the system is observed.
State Transition Testing is basically used where different system transitions are
needed to be tested.
In the diagram, whenever the user enters the correct PIN he is moved to the Access
Granted state; if he enters the wrong PIN he is moved to the next try; and if he
does the same for the 3rd time, the Account Blocked state is reached.
State Transition Table
State | Correct PIN | Incorrect PIN
S1) Start | S5 | S2
S2) 1st attempt | S5 | S3
S3) 2nd attempt | S5 | S4
S4) 3rd attempt | S5 | S6
S5) Access Granted | – | –
S6) Account Blocked | – | –
In the table, when the user enters the correct PIN, the state transitions to S5, which is
Access Granted. If the user enters a wrong PIN, he is moved to the next state. If he
does the same a 3rd time, he reaches the Account Blocked state.
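The table can be exercised directly in code. Here is a minimal Python sketch of the PIN state machine; the state names and event labels follow the table above, while the dictionary encoding and the run helper are our own:

# States mirror the table: S1 start, S2-S4 attempts, S5 granted, S6 blocked.
TRANSITIONS = {
    "S1": {"correct": "S5", "incorrect": "S2"},
    "S2": {"correct": "S5", "incorrect": "S3"},
    "S3": {"correct": "S5", "incorrect": "S4"},
    "S4": {"correct": "S5", "incorrect": "S6"},
}

def run(events):
    state = "S1"
    for event in events:
        state = TRANSITIONS[state][event]
        if state in ("S5", "S6"):      # terminal states
            break
    return state

assert run(["correct"]) == "S5"                             # granted on first try
assert run(["incorrect", "incorrect", "correct"]) == "S5"   # granted after retries
# Blocked, per the table rows S1 -> S2 -> S3 -> S4 -> S6:
assert run(["incorrect"] * 4) == "S6"
print("all state-transition test cases passed")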
Example 2:
In the flight reservation login screen, consider you have to enter correct agent name and
password to access the flight reservation application.
It gives you the access to the application with correct password and login name, but what
if you entered the wrong password.
The application allows three attempts, and if users enter the wrong password at 4th
attempt, the system closes the application automatically.
The state graph helps you determine valid transitions to be tested. In this case, testing
with the correct password and with an incorrect password is compulsory. For the test
scenarios, any one of logging in on the 2nd, 3rd, or 4th attempt could be tested.
You can use State Table to determine invalid system transitions.
In a State Table, all the valid states are listed on the left side of the table, and the
events that cause them on the top.
Each cell represents the state system will move to when the corresponding event
occurs.
For example, while in S1 state you enter a correct password you are taken to state S6
(Access Granted). Suppose if you have entered the wrong password at first attempt
you will be taken to state S3 or 2nd Try.
Likewise, you can determine all other states.
Two invalid states are highlighted using this method. Suppose you are in state S6 that
is you are already logged into the application, and you open another instance of flight
reservation and enter valid or invalid passwords for the same agent. System response
for such a scenario needs to be tested.
3.11. Exploratory Testing
Exploratory Testing is a type of software testing in which the tester is free to select any
possible methodology to test the software. It is an unscripted approach to software testing. In
exploratory testing, testers use their personal learning, knowledge, skills, and
abilities to test the software. Exploratory testing checks the
functionality and operations of the software as well as identifies the functional and technical
faults in it. The aim of exploratory testing is to optimize and improve the software in every
possible way. The exploratory testing technique combines the experience of testers with a
structured approach to testing.
Why use Exploratory Testing?
Random and unstructured testing: Exploratory testing is unstructured in nature and
thus can help reveal bugs that would have gone undiscovered during structured phases of
testing.
Testers can play around with user stories: With exploratory testing, testers can
annotate defects, add assertions and voice memos, and in this way the user story is
converted to a test case.
Facilitate agile workflow: Exploratory testing helps formalize the findings and document
them automatically. Everyone can participate in exploratory testing with the help of
visual feedback thus enabling the team to adapt to changes quickly and facilitating agile
workflow.
Reinforce traditional testing process: Using tools for automated test case
documentation, testers can convert exploratory testing sequences into functional test
scripts.
Speeds up documentation: Exploratory testing speeds up documentation and creates an
instant feedback loop.
Export documentation to test cases: By integrating exploratory testing with tools like Jira,
recorded documentation can be directly exported to test cases.
When to use Exploratory Testing?
When you need to learn quickly about the application: Exploratory testing is beneficial for
scenarios where a new tester enters the team and needs to learn quickly about the
application and provide rapid feedback.
Review from a user perspective: It comes in handy when there is a need to review
products from a user perspective.
Early iteration required: Exploratory testing is helpful in scenarios when an early
iteration is required as the teams don’t have much time to structure the test cases.
Testing mission-critical applications: Exploratory testing ensures that the tester doesn’t
miss the edge cases that can lead to critical quality failures.
Aid unit test: Exploratory testing can be used to aid unit tests, document the test cases,
and use test cases to test extensively during the later sprints.
Types of Exploratory Testing
There are 3 types of exploratory testing:
Freestyle: In freestyle exploratory testing, the application is tested in an ad-hoc way,
there is no maximum coverage, and there are no rules to follow for testing. It is done
in the following cases:
o When there is a need to get friendly with the application.
o To check other test engineers’ work.
o To perform smoke tests quickly.
Strategy Based: Strategy-based testing can be performed with the help of multiple
testing techniques like decision-table testing, cause-effect graphing, boundary value
analysis, equivalence partitioning, and error guessing. It is done by an experienced
tester who has known the application for the longest time.
Scenario Based: Scenario-based exploratory testing is done on the basis of scenarios,
such as end-to-end test scenarios. The scenarios can
be provided by the user or can be prepared by the test team.
Exploratory Testing Process
The following 4 steps are involved in the exploratory testing process:
1. Learn: This is the first phase of exploratory testing in which the tester learns about the
faults or issues that occur in the software. The tester uses his/her knowledge, skill, and
experience to observe and find what kind of problem the software is suffering from. This is
the initial phase of exploratory testing. It also involves different new learning for the tester.
2. Test Case Creation: When the fault is identified, i.e., the tester comes to know what kind
of problem the software is suffering from, the tester creates test cases according to the defects
to test the software. Test cases are designed by keeping in mind the problems end users can
face.
3. Test Case Execution: After the creation of test cases according to end-user problems, the
tester executes the test cases. Execution of test cases is a prominent phase of any testing
process. This includes the computational and operational tasks performed by the software in
order to get the desired output.
4. Analysis: After the execution of the test cases, the result is analyzed and observed whether
the software is working properly or not. If the defects are found then they are fixed and the
above three steps are performed again. Hence this whole process goes on in a cycle and
software testing is performed.
Boundary Value Analysis
Boundary Value Analysis is based on testing the boundary values of valid and invalid
partitions. The behavior at the edge of the equivalence partition is more likely to be
incorrect than the behavior within the partition, so boundaries are an area where testing is
likely to yield defects.
It checks for the input values near the boundary that have a higher chance of error. Every
partition has its maximum and minimum values and these maximum and minimum values
are the boundary values of a partition.
Valid Test cases: Valid test cases for the above can be any value entered greater than 17
and less than 57.
Enter the value- 18.
Enter the value- 19.
Enter the value- 37.
Enter the value- 55.
Enter the value- 56.
Invalid Test cases: When any value less than 18 or greater than 56 is entered.
Enter the value- 17.
Enter the value- 57.
EXAMPLE 2:
This is a simple but popular functional testing technique. Here, we concentrate on input
values and design test cases with input values that are on or close to boundary values.
Experience has shown that such test cases have a higher probability of detecting a fault in
the software. Suppose there is a program ‘Square’ which takes ‘x’ as an input and prints
the square of ‘x’ as output. The range of ‘x’ is from 1 to 100. One possibility is to give all
values from 1 to 100 one by one to the program and see the observed behaviour. We have
to execute this program 100 times to check every input value. In boundary value analysis,
we select values on or close to boundaries and all input values may have one of the
following:
(i) Minimum value
(ii) Just above minimum value
(iii) Maximum value
(iv) Just below maximum value
(v) Nominal (Average) value
These values are shown in Figure for the program ‘Square’
These five values (1, 2, 50, 99 and 100) are selected on the basis of boundary value
analysis and give reasonable confidence about the correctness of the program. There is no
need to select all 100 inputs and execute the program one by one for all 100 inputs. The
number of inputs selected by this technique is 4n + 1, where ‘n’ is the number of input variables.
One nominal value is selected which may represent all values which are neither close to
boundary nor on the boundary. Test cases for ‘Square’ program are given in Table 2.1.
EXAMPLE 3:
Consider a program for the determination of division of a student based on the marks in
three subjects. Its input is a triple of positive integers (say mark1, mark2, and mark3) and
values are from interval [0, 100].
The division is calculated according to the following rules:
Marks Obtained (Average) | Division
75 – 100 | First Division with Distinction
60 – 74 | First Division
50 – 59 | Second Division
40 – 49 | Third Division
0 – 39 | Fail
Total marks obtained are computed as the average of the marks obtained in the three
subjects, i.e. Average = (mark1 + mark2 + mark3) / 3
The program output may have one of the following words: [Fail, Third Division, Second
Division, First Division, First Division with Distinction] Design the boundary value test
cases.
Solution: The boundary value test cases are given in Table 2.4
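Since Table 2.4 is not reproduced in these notes, the 13 boundary value test cases (4n + 1 with n = 3 inputs) can be generated programmatically. In the standard single-fault scheme, one mark at a time takes its min, just-above-min, just-below-max and max values while the other two stay at the nominal value; the grading function below is a sketch of the stated rules:

MIN, MAX = 0, 100
NOMINAL = 50

def division(avg):
    """Grading rules from the example, applied to the average."""
    if avg >= 75: return "First Division with Distinction"
    if avg >= 60: return "First Division"
    if avg >= 50: return "Second Division"
    if avg >= 40: return "Third Division"
    return "Fail"

# 4n + 1 = 13 test cases: one all-nominal case, plus each mark in turn
# taking min, min+1, max-1 and max while the others stay nominal.
cases = [(NOMINAL, NOMINAL, NOMINAL)]
for i in range(3):
    for v in (MIN, MIN + 1, MAX - 1, MAX):
        marks = [NOMINAL] * 3
        marks[i] = v
        cases.append(tuple(marks))

for mark1, mark2, mark3 in cases:
    avg = (mark1 + mark2 + mark3) / 3
    print(f"{(mark1, mark2, mark3)}  average={avg:6.2f}  -> {division(avg)}")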
Example 1: Equivalence and Boundary Value
• Let’s consider the behavior of the Order Pizza text box below.
• Pizza values 1 to 10 are considered valid. A success message is shown.
• Values 11 to 99 are considered invalid for an order, and an error message will
appear: “Only 10 Pizza can be ordered”
• Here is the test condition
1. Any number greater than 10 entered in the Order Pizza field (let's say 11) is considered
invalid.
2. Any Number less than 1 that is 0 or below, then it is considered invalid.
3. Numbers 1 to 10 are considered valid
4. Any 3 Digit Number say -100 is invalid.
We cannot test all the possible values because if we did, the number of test
cases would be more than 100. To address this problem, we use the equivalence
partitioning hypothesis, where we divide the possible input values into groups
or sets as shown below, where the system behaviour can be considered the same.
The divided sets are called Equivalence Partitions or Equivalence Classes. Then
we pick only one value from each partition for testing. The hypothesis behind this
technique is that if one condition/value in a partition passes all others will also
pass. Likewise, if one condition in a partition fails, all other conditions in that
partition will fail.
Boundary Value Analysis – In boundary value analysis, you test the boundaries
between equivalence partitions.
In our earlier equivalence partitioning example, instead of checking one value for
each partition, you will check the values at the partitions like 0, 1, 10, 11 and so
on. As you may observe, you test values at both valid and invalid boundaries.
Boundary Value Analysis is also called range checking.
Equivalence partitioning and boundary value analysis(BVA) are closely related
and can be used together at all levels of testing.
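Putting the two techniques together for the pizza example, here is a minimal Python sketch (the order_pizza function is a hypothetical implementation of the stated rule) that checks the values on both sides of each partition boundary:

def order_pizza(quantity):
    """Hypothetical Order Pizza field: 1-10 is valid."""
    if 1 <= quantity <= 10:
        return "success"
    return "Only 10 Pizza can be ordered"

# Boundary value analysis: test on both sides of each partition boundary.
boundary_cases = {
    0:    "Only 10 Pizza can be ordered",   # just below the lower boundary
    1:    "success",                        # lower boundary
    10:   "success",                        # upper boundary
    11:   "Only 10 Pizza can be ordered",   # just above the upper boundary
    -100: "Only 10 Pizza can be ordered",   # 3-digit negative, invalid
}
for value, expected in boundary_cases.items():
    assert order_pizza(value) == expected, value
print("all boundary cases passed")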
Decision table testing is a test case design technique that examines how the software
responds to different input combinations. In this technique, various input combinations or
test cases and the concurrent system behavior (or output) are tabulated to form a decision
table. That’s why it is also known as a Cause/Effect table, as it captures both the cause
and effect for enhanced test coverage.
Automation testers or developers mainly use this technique to make test cases for
complex tasks that involve lots of conditions to be checked. To understand the Decision
table testing technique better, let’s consider a real-world example to test an upload image
feature in the software system.
Test Name: Test upload image form.
Test Condition: Upload option must upload image(JPEG) only and that too of size less
than 1 Mb.
Expected Behaviour: If the input is not an image or not less than 1 Mb in size, then it
must pop an “invalid Image size” alert; otherwise, it must accept the upload.
Test Cases using Decision Table Testing:
Based upon our testing conditions, we should test our software system for the
combinations of conditions enumerated in the sketch below:
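Since the decision table itself is not reproduced in these notes, here is a minimal Python sketch; the handler upload and its two boolean conditions are hypothetical, inferred from the stated test condition (the file must be a JPEG and under 1 Mb). Each combination of the two conditions is one rule of the table:

from itertools import product

def upload(is_jpeg, under_1mb):
    """Hypothetical upload handler implementing the expected behaviour."""
    if is_jpeg and under_1mb:
        return "accept upload"
    return "invalid Image size alert"

# Decision table: every combination of the two conditions is one rule.
print(f"{'JPEG?':6} {'<1MB?':6} -> expected action")
for is_jpeg, under_1mb in product([True, False], repeat=2):
    print(f"{str(is_jpeg):6} {str(under_1mb):6} -> {upload(is_jpeg, under_1mb)}")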
CC = E – N + 2P
Where: E is the number of edges in the control flow graph, N is the number of nodes, and
P is the number of connected components (P = 1 for a single program or method).
Understanding the upper limit of the number of paths that must be tested to achieve complete
path coverage is made easier by considering cyclomatic complexity.
When determining paths, you’ll also take into account loops, nested conditions, and recursive
calls.
x = 0
print(x)
if x > 10:
    print('try again')
else:
    print('success')
The control flow graph could look something like this:
However, we know that we often have a more complicated scenario than that. Let’s show
what happens when we have a compound statement.
x = 0
while x < 10:
    if x > 2:
        print(x)
    else:
        print('x is less than 3')
    x += 1
For step 2, we will determine a baseline path using our control flow graph. Let’s say the
most likely path looks like this:
This would be our first test case. We can use a simple equation called cyclomatic
complexity to determine how many test cases we need for full branch coverage.
Path Coverage testing is a structured testing technique for designing test cases with the
intention to examine all possible paths of execution at least once.
Creating and executing tests for all possible paths results in 100% statement coverage and
100% branch coverage.
In this type of testing, every statement in the program is guaranteed to be executed at least
one time. The flow graph and cyclomatic complexity are used to arrive at the basis path set.
Cyclomatic Complexity
Cyclomatic Complexity is a software metric used to indicate the complexity of a
program.
Cyclomatic complexity gives the minimum number of test cases for white-box
code that will cover every execution path in the flow.
For example:
if a > b:
    # do something
else:
    # do something else
In the above code, the cyclomatic complexity is 2, as a minimum of 2 test cases are needed
to cover all possible execution paths in the code.
Cyclomatic Complexity is computed in one of three ways:
1. The numbers of regions of the flow graph correspond to the Cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G is defined as
V(G) = E – N + 2
where E = the number of flow graph edges and N = the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a graph flow G is also defined as
V(G) = P + 1
Where P = number of predicate nodes contained in the flow graph G.
Junction Node – a node with more than one arrow entering it.
Decision Node – a node with more than one arrow leaving it.
Region – an area bounded by edges and nodes (the area outside the graph is also
counted as a region).
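As a quick sanity check of the three computation methods, here is a small Python sketch; the edge list is an assumed control flow graph for the if/else example above, with node 1 as the decision, nodes 2 and 3 as the branches, and node 4 as the join:

# Assumed control flow graph of the if/else example:
# node 1 -> (2, 3); nodes 2 and 3 -> 4 (join).
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
nodes = {1, 2, 3, 4}
predicate_nodes = {1}      # node 1 has more than one arrow leaving it

E, N, P = len(edges), len(nodes), len(predicate_nodes)

print("V(G) = E - N + 2 =", E - N + 2)   # 4 - 4 + 2 = 2
print("V(G) = P + 1     =", P + 1)       # 1 + 1 = 2
# Region counting gives the same answer: one bounded region between the
# two branches, plus the outer region = 2.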
Below are the notations used while constructing a flow graph:
Sequential statements
If-Then-Else
Do-While
While-Do
Switch-Case
Example-1:Consider the following piece of pseudo-code:
1. input(x)
2. if(x>5)
3. z = x + 10
4. else
5. z = x - 5
6. print("Value of Z: ", z)
In the above piece of code, if the value of x entered by the user is greater than 5, then
the order of execution of statements would be:
1, 2, 3, 6
If the value entered by the user in line 1 is less than or equal to 5, the order of
execution of statements would be:
1, 2, 4, 5, 6
Hence, the control flow graph of the above piece of code will be:
Using the above control flow graph and code, we can deduce the table below. This
table mentions the node at which each variable was defined and the nodes at which it
was used:
Variable | Defined at node | Used at nodes
x | 1 | 2, 3, 5
z | 3, 5 | 6
We can use the above table to ensure that no anomaly occurs in the code by running
multiple checks, e.g., that each variable is defined before it is used.
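To connect the execution paths with this def/use information mechanically, here is a small Python sketch; the function run_and_trace is illustrative (not from the source) and executes the numbered pseudo-code while recording which statements run:

def run_and_trace(x):
    """Execute the numbered pseudo-code, recording statement numbers."""
    trace = [1]                 # 1. input(x)
    trace.append(2)             # 2. if (x > 5) -- the condition always runs
    if x > 5:
        z = x + 10; trace.append(3)          # 3. z = x + 10
    else:
        z = x - 5;  trace.extend([4, 5])     # 4. else / 5. z = x - 5
    trace.append(6)             # 6. print("Value of Z: ", z)
    return trace, z

print(run_and_trace(9))   # path through statement 3: ([1, 2, 3, 6], 19)
print(run_and_trace(3))   # path through 4 and 5:     ([1, 2, 4, 5, 6], -2)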
b. Making associations
In this technique, we make associations between two kinds of statements:
(1, (2, t), x), (1, (2, f), x)- This association is made with statement 1 (read x;) and
statement 2 (If(x>0)) where x is defined at line number 1, and it is used at line
number 2 so, x is the variable.
Statement 2 is logical, and it can be true or false that's why the association is
defined in two ways; one is (1, (2, t), x) for true and another is (1, (2, f), x) for
false.
(1, 3, x)- This association is made with statement 1 (read x;) and statement 3 (a=
x+1) where x is defined in statement 1 and used in statement 3. It is a computation
use.
(1, (4, t), x), (1, (4, f), x)- This association is made with statement 1 (read x;) and
statement 4 (If(x<=0)) where x is defined at line number 1 and it is used at line
number 4 so x is the variable. Statement 4 is logical, and it can be true or false
that's why the association is defined in two ways one is (1, (4, t), x) for true and
another is (1, (4, f), x) for false.
(1, (5, t), x), (1, (5, f), x)- This association is made with statement 1 (read x;) and
statement 5 (if (x<1)) where x is defined at line number 1, and it is used at line
number 5, so x is the variable.
Statement 5 is logical, and it can be true or false that's why the association is
defined in two ways; one is (1, (5, t), x) for true and another is (1, (5, f), x) for
false.
(1, 6, x)- This association is made with statement 1 (read x;) and statement 6
(x=x+1). x is defined in statement 1 and used in statement 6. It is a computation
use.
(1, 7, x)- This association is made with statement 1 (read x) and statement 7
(a=x+1). x is defined in statement 1 and used in statement 7 when statement 5 is
false. It is a computation use.
(6, (5, t), x), (6, (5, f), x)- This association is made with statement 6 (x=x+1;) and
statement 5 (if (x<1)) because x is defined in statement 6 and used in statement 5.
Statement 5 is logical, and it can be true or false; that's why the association is
defined in two ways: one is (6, (5, t), x) for true and another is (6, (5, f), x) for false.
It is a predicate use.
(6, 6, x)- This association is made with statement 6, which uses the value of
variable x and then defines a new value of x:
x = x + 1
x = (-1 + 1)
Statement 6 uses the value of variable x, that is -1, and then defines the new value
of x [x = (-1+1) = 0], that is 0.
(3, 8, a)- This association is made with statement 3(a= x+1) and statement 8 where
variable a is defined in statement 3 and used in statement 8.
(7, 8, a)- This association is made with statement 7(a=x+1) and statement 8 where
variable a is defined in statement 7 and used in statement 8.
Now, there are two types of uses of a variable:
Predicate use (p-use): the value of the variable is used to decide the flow of the
program, e.g., line 2.
Computational use (c-use): the value of the variable is used to compute another
variable or the output, e.g., line 3.
Data Flow Testing Coverage Metrics:
All-Definitions Coverage: covers “sub-paths” from each definition to at least one of
its respective uses, so that every variable definition in the code is exercised.
All-Definition–C-Use Coverage: covers “sub-paths” from each definition to all of its
respective c-uses, analyzing how each defined value is consumed in computations.
All-Definition–P-Use Coverage: covers “sub-paths” from each definition to all of its
respective p-uses, exercising every decision that a defined value can influence.
All-Uses Coverage: covers “sub-paths” from each definition to every respective use,
regardless of type, giving a holistic view of how data variables traverse the code.
All-Definition-Use-Paths Coverage: covers all “simple sub-paths” from each
definition to every respective use, streamlining the analysis of fundamental
definition-use interactions within the code.
The strongest of these criteria is all def-use paths. This includes all p- and c-uses.
Definition of a variable is the occurrence of a variable where a value is bound to the
variable. In the above code, the value gets bound in the first statement and then starts to flow.
(1, (2, f), x), (6, (5, f) x), (3, 8, a), (7, 8, a).
Statement 4 if (x<=0) is predicate use because it can be predicate as true or false. If it is true
then if (x<1),6x=x+1; execution path will be executed otherwise, else path will be executed.
These are computation uses because the value of x is used to compute another value and
the value of a is used for the output.
(1, 3, x), (1, 6, x), (1, 7, x), (6, 6, x), (6, 7, x), (3, 8, a), (7, 8, a).
(1, (2, f), x), (1, (2, t), x), (1, (4, t), x), (1, (4, f), x), (1, (5, t), x), (1,
(5, f), x), (6, (5, f), x), (6, (5, t), x), (3, 8, a), (7, 8, a).
3.17. Test Design Preparedness Metrics
The following metrics can be used to represent the level of preparedness of test design :
1. Preparation Status of Test Cases (PST):
A test case can go through a number of phases or states, such as draft and review, before
it is released as a valid and useful test case.
Thus it is useful to periodically monitor the progress of test design by counting the test
cases lying in different states of design – create, draft, review, released and deleted.
It is expected that all the planned test cases that are created for a particular project
eventually move to the released state before the start of test execution.
2. Average Time Spent (ATS) in Test Case Design:
It is useful to know the amount of time it takes for a test case to move from its initial
conception, that is, the create state, to when it is considered usable, that is, the
released state.
This metric is useful in allocating time to the test preparation activity in a subsequent test
project. Hence it is useful in test planning.
3. Number of Available Test (NAT) Cases:
This is the number of test cases in the released state from existing projects.
Some of these test cases are selected for regression testing in the current test project.
4. Number of Planned Test (NPT) Cases:
This is the number of test cases that are in the test suite and ready for execution at the
start of system testing.
This metric is useful in scheduling test execution. As testing continues, new, unplanned
test cases may be required to be designed.
A large number of new test cases compared to NPT suggests that the initial planning was
not accurate.
5. Coverage of a Test Suite (CTS):
This metric gives the fraction of all requirements covered by a selected number of test
cases or a complete test suite.
The CTS is a measure of the number of test cases needed to be selected or designed to
have good coverage of system requirements.
3.18. TEST CASE DESIGN EFFECTIVENESS
In the world of software testing, test case design is a crucial aspect that can significantly
impact the effectiveness of the testing process. An effective test case not only uncovers
defects but also helps ensure comprehensive test coverage and efficient testing efforts.
This article will explore the art of test case design, providing insights and best practices
for crafting effective test scenarios that contribute to the overall success of your software
testing strategy.
1. Understanding The Importance Of Test Case Design
Test case design is the process of defining the conditions, inputs, and expected results for
individual test scenarios. A well-designed test case serves several purposes:
• Ensures comprehensive test coverage: Test cases should cover all critical aspects of
the application, including functional, non-functional, and integration requirements.
• Reduces testing time and effort: By focusing on the most critical and relevant
scenarios, test case design helps streamline testing efforts and reduces the time needed
for execution.
• Improves communication and collaboration: Test cases serve as a common language
for testers, developers, and other stakeholders, enabling them to understand and discuss
application requirements and expected behavior.
2. Best Practices For Crafting Effective Test Cases
To create effective test cases, QA teams should follow these best practices:
• Focus on user requirements: Ensure that test cases are designed based on user
requirements and cover all critical aspects of the application, including functional, non-
functional, and integration requirements.
• Keep test cases simple and concise: Test cases should be easy to understand, with
clear and concise descriptions, steps, inputs, and expected results.
• Consider positive and negative scenarios: Design test cases that cover both positive
(expected behavior) and negative (unexpected behavior) scenarios to ensure
comprehensive test coverage.
• Incorporate boundary and edge cases: Boundary and edge cases often reveal defects in
the application, so be sure to include these scenarios in your test case design.
• Use a consistent format and structure: Adopt a consistent format and structure for test
cases, making them easier to read, understand, and maintain.
Test Design — This can be done either Criteria-Based, where test values are
designed to satisfy coverage criteria or other engineering goals, or Human-Based,
where test values are designed based on domain knowledge of the program and
human knowledge of testing, which is comparatively harder. This is the most
technical part of the MDTD process, so it is better to use experienced developers in
this phase.
Test Automation — This involves embedding test values into scripts. Test cases
are defined based on the test requirements. Test values are chosen such that
we can cover a larger part of the application with fewer test cases. We don’t
need that much domain knowledge in this phase, however, we need to use
technically skilled people.
Test Execution — The test engineer runs the tests and records the results in
this activity. Unlike the previous activities, test execution does not require a high
skill set in technical knowledge, logical thinking, or domain knowledge.
Since we consider this phase comparatively low risk, we can assign junior or
intern engineers to execute the process. But we should focus on monitoring and
log-collecting activities based on automation tools.
Test Evaluation — The process of evaluating the results and reporting to
developers. This phase is comparatively harder, and we are expected to have
knowledge of the domain, testing, user interfaces, and psychology.
The diagram below shows the steps and activities involved in MDTD.
The test automation process makes it easy to do regression testing in less time compared
to manual testing, and it also avoids the chance of missing previously passed test
cases in the current testing process. MDTD defines a simple framework
to automate the testing process in a structured manner.
The procedural aspect of a given test, usually a set of detailed instructions for the
setup and step-by-step execution of one or more given test cases. The test procedure
is captured in both test scenarios and test scripts.
Here's a general outline of the typical testing procedure:
Requirement Analysis:
Understand and analyze the software requirements to identify testable features and
criteria.
Test Planning:
Develop a comprehensive test plan that outlines the testing approach, scope,
resources, schedule, and deliverables.
Test Case Design:
Create detailed test cases based on the requirements and specifications. Test cases
should cover various scenarios, including normal and edge cases.
Test Environment Setup:
Prepare the necessary test infrastructure, including hardware, software, and network
configurations.
Test Data Preparation:
Identify and create the test data needed to execute the test cases effectively.
Test Execution:
Run the test cases on the prepared test environment. This involves executing the test
scripts and manual testing, if applicable.
Defect Reporting:
Record and report any defects or issues found during the testing process. Provide
detailed information to help developers understand and fix the problems.
Defect Retesting:
After developers address reported defects, re-run the relevant test cases to ensure that
the issues have been resolved.
Regression Testing:
Conduct regression testing to ensure that new changes or fixes do not negatively
impact existing functionality.
Performance Testing (if applicable):
Verify the performance of the software, including aspects such as speed, scalability,
and responsiveness.
Security Testing (if applicable):
Check for vulnerabilities and ensure that the software is secure against potential
threats.
User Acceptance Testing (UAT):
Have end users validate the software against their business requirements before release.
Test Closure:
Summarize the testing activities, evaluate the test process against the defined criteria,
and provide recommendations for future improvements.
Then, the developers will fix those defects, do one round of white box testing,
and send the build to the testing team.
Here, fixing the bugs means the defect is resolved and the particular feature is
working according to the given requirement.
The main objective of implementing black box testing is to verify the
business needs or the customer's requirements.
In other words, we can say that black box testing is a process of checking the
functionality of an application as per the customer's requirements. The source code
is not visible in this testing; that's why it is known as black-box testing.
Functional Testing
Non-functional Testing
Functional Testing
Checking all the components systematically against requirement specifications is
known as functional testing. Functional testing is also known as component testing.
In functional testing, all the components are tested by giving input values, defining
the output, and validating the actual output against the expected value.
Functional testing is a part of black-box testing as it emphasizes application
requirements rather than actual code. The test engineer has to test only the program,
not the system.
Unit Testing
Integration Testing
System Testing
1. Unit Testing
Unit testing is the first level of functional testing used to test any software. In unit
testing, the test engineer tests the modules of an application independently; testing
each module's functionality on its own is called unit testing.
The primary objective of executing unit testing is to confirm the unit components
against their specifications. Here, a unit is defined as a single testable function of a
software application, and it is verified throughout the specified application
development phase.
2. Integration Testing
Once we have successfully completed unit testing, we move on to integration testing.
It is the second level of functional testing; testing the data flow between
dependent modules or the interface between two features is called integration testing.
The purpose of executing integration testing is to test the accuracy of communication
between modules.
Incremental Testing
Non-Incremental Testing
Incremental Integration Testing
Whenever there is a clear relationship between modules, we go for incremental
integration testing. Suppose we take two modules and analyze whether the data flow
between them is working fine or not.
If these modules are working fine, then we can add one more module and test again.
And we can continue with the same process to get better results.
In other words, incrementally adding modules and testing the data flow between
them is known as incremental integration testing.
Types of Incremental Integration Testing
Incremental integration testing can further classify into two parts, which are as
follows:
Top-Down Approach: In this approach, we add the modules step by step, or
incrementally, and test the data flow between them. We have to ensure that the
modules we are adding are children of the earlier ones.
Bottom-Up Approach: In this approach, we add the modules incrementally and check
the data flow between them, ensuring that the module we are adding is the parent
of the earlier ones.
3. System Testing
Whenever we are done with the unit and integration testing, we can proceed
with the system testing.
In system testing, the test environment is parallel to the production
environment. It is also known as end-to-end testing.
In this type of testing, we go through each attribute of the software and test
whether the end feature works according to the business requirement, analyzing the
software product as a complete system.
Non-functional Testing
The next part of black-box testing is non-functional testing. It provides
detailed information on software product performance and the technologies used.
Non-functional testing helps us minimize the production risk and the related
costs of the software.
Performance Testing
Usability Testing
Compatibility Testing
1. Performance Testing
In performance testing, the test engineer will test the working of an application by
applying some load.
In this type of non-functional testing, the test engineer will only focus on several
aspects, such as Response time, Load, scalability, and Stability of the software or
an application.
Performance testing includes the various types of testing, which are as follows:
Load Testing
Stress Testing
Scalability Testing
Stability Testing
Load Testing
While executing performance testing, we apply some load on the particular
application to check its performance; this is known as load testing. Here, the load
could be less than or equal to the desired load.
It helps us detect the highest operating volume of the software and any
bottlenecks.
Stress Testing
It is used to analyze the robustness of the software beyond the common
functional limits.
Primarily, stress testing is used for critical software, but it can also be used for
all types of software applications.
Scalability Testing
Analyzing the application's performance by increasing or reducing the load in
particular proportions is known as scalability testing.
Stability Testing
Stability testing is a procedure in which we evaluate the application's
performance by applying load for a precise period of time.
It mainly checks for stability problems in the application and the efficiency of
the developed product. In this type of testing, we can rapidly find the system's
defects even in a stressful situation.
2. Usability Testing
Another type of non-functional testing is usability testing. In usability testing,
we will analyze the user-friendliness of an application and detect the bugs in
the software's end-user interface.
The application should be easy to understand, which means that all the
features must be visible to end-users.
The application's look and feel should be good, meaning the application
should be pleasant to look at and make end-users want to use it.
3. Compatibility Testing
In compatibility testing, we will check the functionality of an application in
specific hardware and software environments. Once the application is
functionally stable then only, we go for compatibility testing.
Here, software means we can test the application on different operating
systems and browsers, and hardware means we can test the application
on devices of different sizes.
Grey Box Testing
Another part of manual testing is grey box testing. It is a combination of
black box and white box testing.
Grey box testing includes access to internal coding for designing test cases,
and it is performed by a person who knows coding as well as testing.
Automation Testing
The most significant part of software testing is automation testing. It uses
specific tools to automate manually designed test cases without any human
interference.
Automation testing is the best way to enhance the efficiency, productivity, and
coverage of Software testing.
It is used to re-run, quickly and repeatedly, the test scenarios that were executed
manually.
In other words, we can say that whenever we test an application using tools, it is
known as automation testing.
We go for automation testing when various releases or several regression cycles
are run on the application or software. We cannot write test scripts or perform
automation testing without understanding a programming language.
Smoke Testing
Sanity Testing
Regression Testing
User Acceptance Testing
Exploratory Testing
Adhoc Testing
Security Testing
Globalization Testing
Let's understand those types of testing one by one:
Smoke testing
In smoke testing, we test an application's basic and critical features before doing
one round of deep and rigorous testing, and before checking all possible positive and
negative values. Analyzing the workflow of the application's core and main functions
is the main objective of performing smoke testing.
Sanity Testing
It is used to ensure that all the bugs have been fixed and no new issues have come into
existence due to these changes. Sanity testing is unscripted, which means it is not
documented. It checks the correctness of the newly added features and components.
Regression Testing
Regression testing is the most commonly used type of software testing. Here, the term
regression implies that we re-test those parts of the application that should be
unaffected, to confirm that they still work.
Regression testing is the most suitable testing for automation tools. As per the project
type and accessibility of resources, regression testing can be similar to retesting.
Whenever a bug is fixed by the developers, testing the other features of the
application that might be affected because of the bug fix is known as regression
testing.
In other words, we can say that whenever there is a new release for a project, we
perform regression testing, because a new feature may affect the old features of the
earlier releases.
User Acceptance Testing
In user acceptance testing, we analyze the business scenarios and real-time scenarios
in a distinct environment called the UAT environment. In this testing, we test the
application in the UAT environment for customer approval.
Exploratory Testing
We go for exploratory testing whenever the requirement is missing, early iteration is
required, the testing team has experienced testers, we have a critical application, or a
new test engineer has entered the team.
To execute the exploratory testing, we will first go through the application in all
possible ways, make a test document, understand the flow of the application, and then
test the application.
Adhoc Testing
Testing the application randomly, as soon as the build is ready, is known as Adhoc
testing.
It is also called monkey testing or gorilla testing. In Adhoc testing, we check the
application in contradiction to the client's requirements; that's why it is also known
as negative testing.
An end-user using the application casually may detect a bug, while a specialized test
engineer who uses the software systematically may not identify a similar defect.
Security Testing
The execution of security testing helps us avoid attacks from outsiders and ensures
the security of our software applications.
In other words, we can say that security testing is mainly used to ensure that data
remains safe throughout the software's working process.
Globalization Testing
Another type of software testing is globalization testing. Globalization testing is used
to check whether the developed software supports multiple languages. Here,
globalization means adapting the application or software for various languages.
Globalization testing is used to make sure that the application will support multiple
languages and multiple features.
The test procedure / script specification document defines and specifies the steps used for
executing test cases, as well as the measures taken to analyze the software item in order to
evaluate its set of features. It is prepared by software testers after the completion of the
software testing phase. The template/format of this document is universally acknowledged
and accepted, as it is defined by IEEE Standard 829-1998.
Therefore, the format for test procedure / script specification is:
Purpose: Once a distinctive identification is generated for the document, the purpose of the test procedure is defined. It consists of a detailed list of all the test cases covered during the testing procedure, as well as a description of each test procedure.
o It also includes details about the special skills and training required by the team for the test procedure.
Procedures/Script Steps: Finally, the actual steps used for the execution of the procedure, as well as the activities related to it, are defined by the team at the end of the specification document. The various procedure/script steps defined by IEEE are:
o Log.
o Set Up.
o Proceed.
o Measure.
o Shut down.
o Restart.
o Stop.
o Wrap-up.
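To make the template concrete, here is a rough sketch of the IEEE 829 step headings captured as a structured Python record; every field value is an invented placeholder, not taken from a real project.

    # procedure_spec.py - a test procedure specification captured as data,
    # mirroring the IEEE 829-1998 step headings. All values are invented.
    procedure_spec = {
        "procedure_id": "TP-001",
        "purpose": "Verify login behaviour for valid and invalid credentials.",
        "special_requirements": ["Tester trained on the admin console"],
        "steps": {
            "log": "Record tester name, build number, and start time.",
            "set_up": "Restore the test database and open the login page.",
            "proceed": "Execute test cases TC-010 through TC-015 in order.",
            "measure": "Capture response times for each login attempt.",
            "shut_down": "Log out and close the browser session.",
            "restart": "If a step fails, resume from the failed test case.",
            "stop": "Halt testing if the environment becomes unavailable.",
            "wrap_up": "File defect reports and archive the execution log.",
        },
    }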
3.21. TEST CASE MANAGEMENT
Test case management tools offer many features to streamline the process. These tools enable testers to create and organise test cases, assign priorities and dependencies, track progress, and generate reports while facilitating team collaboration.
3.21.1. Components of a test case
When creating test cases, you should include several essential components to ensure their
effectiveness and comprehensiveness:
1. Test case ID: A unique identifier or number for easy tracking and reference.
2. Test case title: A concise and descriptive title summarising the test case objective
or goal.
3. Test description: A detailed description of the tested functionality or features,
outlining the inputs, actions, and expected outcomes.
4. Test steps: Clear instructions on executing the test case, including the specific actions and the expected results at each step.
5. Test data: The input data to use during test case execution, including valid and invalid data sets to validate different scenarios.
6. Expected results: The anticipated outcomes of the test case, which need to be specific, measurable, and aligned with the test objective.
7. Actual results: The actual outcome of the test case execution, with any deviations or discrepancies from the expected results.
8. Pass/Fail status: A clear indication of whether the test case passed or failed.
9. Test environment: The specific environment, including hardware, software, OS, or browsers.
10. Test priority/severity: The priority of the test case based on its impact on the system; severity refers to the degree of impact a bug would have on the system's functionality.
11. Test case author: The name of the person responsible for creating the case.
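For illustration, the components above could also be captured programmatically. The following Python dataclass is a sketch of one possible record layout; the field names are chosen here for the example, not mandated by any standard.

    # test_case.py - the test case components expressed as a dataclass.
    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        case_id: str                 # unique identifier, e.g. "TC-042"
        title: str                   # concise objective of the test
        description: str             # functionality under test
        steps: list = field(default_factory=list)      # ordered instructions
        test_data: dict = field(default_factory=dict)  # valid/invalid inputs
        expected_result: str = ""    # specific, measurable outcome
        actual_result: str = ""      # filled in during execution
        status: str = "Not Run"      # Pass / Fail / Not Run
        environment: str = ""        # OS, browser, hardware
        priority: str = "Medium"     # impact-based priority
        author: str = ""             # person who created the case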
3.21.2. How to write effective test cases?
Writing effective test cases ensures thorough testing coverage and efficient software QA.
Here are some key guidelines for writing test cases:
Understand the project requirements, user stories, and functional specifications
to define the test cases’ scope and objectives.
Keep test cases clear and concise by using simple language and avoiding
ambiguity. Ensure everyone can understand the steps and expected results without
confusion.
Use a standardised template for test case documentation to maintain consistency
and make the test cases easier to read, understand, and hand over.
Test one functionality per test case and avoid combining multiple test scenarios
into one case, which may lead to confusion. Test scenarios are a better option to
validate an entire flow.
Define preconditions and test data for the test case, such as specific system
configurations or data setups.
Outline test steps sequentially and provide step-by-step instructions on executing the test case from the initial state, keeping each step concise, specific, and easy to follow.
Include expected results for each step to validate the system’s behaviour and easy
comparison with actual results.
Cover positive and negative scenarios to identify potential defects and ensure
comprehensive test coverage.
Prioritise test cases based on their criticality and impact on the system to manage
testing efforts and address high-priority test cases first.
Let peers review and validate the cases for accuracy, clarity, and coverage and
compare them against the requirements to ensure they align with the expected
behaviour.
Keep test cases maintainable and easy to update as the system evolves, avoiding
hard-coding values or dependencies irrelevant to the test.
Regularly update test cases as requirements change, defects are identified and
fixed, or new features are added.
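To make these guidelines concrete, here is one hypothetical example written to them: a single functionality under test, explicit preconditions and steps as comments, and an expected result to compare against. The login_attempt stub stands in for a real system.

    # test_login_lockout.py - one functionality per test, with steps and
    # expected results spelled out. login_attempt is an invented stub.
    def login_attempt(username, password, failures_so_far):
        # Stub: lock the account after three consecutive failures.
        if failures_so_far >= 3:
            return "locked"
        return "granted" if password == "correct-horse" else "denied"

    def test_account_locks_after_three_failures():
        # Precondition: the account exists and has already failed 3 times.
        # Step 1: attempt a login with the correct password.
        result = login_attempt("alice", "correct-horse", failures_so_far=3)
        # Expected result: access is refused because the account is locked.
        assert result == "locked"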
Common types of test cases include the following:
User Interface Test Cases: These are compiled to ensure the application's aesthetics appear as planned.
Functionality Test Cases: This ensures the application’s expected functionalities work
correctly.
Performance Test Cases: These cases help verify the response time and efficiency.
Security Test Cases: These are compiled to secure the application data and restrict the
application’s use to specific users.
Integration Test Cases: These help ensure that interfaces between multiple modules of an
application are working as expected.
Usability Test Cases: These cases help enhance the application’s user experience.
Database Test Cases: These help ensure the application can collect, store, process and
handle data appropriately.
3.21.3. What is the role of test case management?
Test case management is crucial in software testing and quality assurance. Here are some
key roles and benefits of effective test case management:
It gives a clear idea of the testing activities to a testing team. The team will know what tests to execute and what to expect if the test succeeds or fails.
It helps to keep track of test cases and group them into categories like resolved, deferred, ongoing, etc.
It helps manage automated and manual testing more efficiently.
It helps manage a range of test executions for various test cases.
It improves the collaboration efforts between project engineers even if they belong to different teams.
There are different approaches to test case management, including the following:
Agile test case management includes methodologies like Scrum or Kanban, focusing on iterative development and frequent software releases. It involves creating and managing test cases that align with user stories or features defined in the product backlog.
In this methodology, you continuously refine and update test cases based on evolving requirements and integrate test execution within the sprint or iteration cycles. This approach emphasises flexibility, adaptability, and collaboration between developers, testers, and stakeholders.
3.21.6. Levels of testing and test case management
There are mainly four levels of testing in software testing:
Unit testing:
A unit is the smallest testable portion of a system or application that can be compiled, linked, loaded, and executed. This kind of testing helps to test each module separately.
The aim is to test each part of the software in isolation and to check whether each component fulfils its functionality or not. This kind of testing is performed by developers.
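A brief sketch of the idea in Python: the unit is exercised on its own, with its collaborator replaced by a test double so nothing outside the unit can cause a failure. All names below are illustrative, not from a real codebase.

    # test_unit_isolation.py - unit testing a function in isolation.
    from unittest.mock import Mock

    def total_price(cart, tax_service):
        # Unit under test: sums item prices and applies the tax rate
        # supplied by the (external) tax_service collaborator.
        subtotal = sum(item["price"] for item in cart)
        return subtotal * (1 + tax_service.rate_for("DE"))

    def test_total_price_applies_tax():
        tax_service = Mock()
        tax_service.rate_for.return_value = 0.25  # isolate from the real service
        cart = [{"price": 10.0}, {"price": 5.0}]
        assert total_price(cart, tax_service) == 18.75
        tax_service.rate_for.assert_called_once_with("DE")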
Integration testing:
Integration means combining. In this testing phase, different software modules are combined and tested as a group to make sure that the integrated system is ready for system testing.
Integration testing checks the data flow from one module to other modules. This kind of testing is performed by testers.
System Testing
System testing is performed on a complete, integrated system. It checks the system's compliance with the requirements and tests the overall interaction of components. It involves load, performance, reliability, and security testing.
System testing is most often the final test to verify that the system meets the specification. It evaluates both functional and non-functional needs for the testing.
Acceptance testing:
Acceptance testing is conducted to determine whether the requirements of a specification or contract are met at delivery. Acceptance testing is basically done by the user or customer. However, other stakeholders can be involved in this process.
Test case management also brings some common challenges, along with ways to address them:
Test case maintenance: Keeping test cases updated and synchronised can be time-consuming and demands continuous effort. To overcome this, you need to implement a robust change management process to ensure timely updates to test cases when requirements, functionality, or user scenarios change.
Test case reusability: Identifying and organising reusable test cases across
projects or releases can be challenging. Keeping test cases easily adaptable and
relevant to various contexts without causing false positives or negatives can be a
struggle. To handle this, you need to establish a centralised repository or test case
management tool that allows easy categorisation and tagging of test cases based on
their reusability. Clearly documenting the context and prerequisites for each test
case ensures adaptability while avoiding false positives or negatives.
Test case traceability: Achieving and maintaining near 100% test coverage requires good traceability, so you can see which requirements have enough tests linked to them. However, this can be complex in large-scale projects with changing requirements and multiple stakeholders, requiring careful management and coordination. If it is not available in your test management software, you should develop a traceability solution that links test cases to the corresponding requirements, and regularly review the results to ensure coverage and track any changes in requirements (a minimal sketch of such a mapping follows after this list).
Test case version control: Managing different test case versions, especially in
collaborative environments, poses challenges. Maintaining a clear version history,
ensuring the latest versions, and avoiding conflicts can be demanding without
proper version control mechanisms not found in spreadsheet-style management
approaches. One solution might be using version control mechanisms provided by
test case management tools. You should maintain a clear version history, ensure
the latest versions are used, and establish guidelines for resolving conflicts in case
of overlapping modifications.
Test case prioritisation: With limited time and resources, prioritising test cases
becomes vital. Determining the priority of test cases based on risk assessment,
business impact, or critical functionalities can be subjective and challenging,
requiring careful analysis and decision-making. You must conduct a risk
assessment to identify critical functionalities and high-risk areas to deal with them.
Consider the impact on business goals and prioritise test cases accordingly.
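As referenced under test case traceability above, a requirements-to-test-cases mapping can be as simple as the following Python sketch; all IDs are invented examples.

    # traceability.py - a minimal requirements-to-test-cases matrix.
    coverage = {
        "REQ-001": ["TC-010", "TC-011"],   # login
        "REQ-002": ["TC-020"],             # password reset
        "REQ-003": [],                     # report export: no tests yet!
    }

    def uncovered_requirements(matrix):
        # Any requirement with no linked test case is a coverage gap.
        return [req for req, cases in matrix.items() if not cases]

    print(uncovered_requirements(coverage))  # -> ['REQ-003']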
3.22. BUG REPORTING
Bug reports must be clear, concise, and correct to assist developers in understanding and quickly resolving the issue. All bugs must be documented in a bug-reporting system so that they can be identified, prioritized, and fixed promptly. Failure to do so may lead to the developer not understanding or disregarding the issue, as well as management not recognizing its severity and leaving it in production until customers make them aware of it.
Reporting bugs is a fundamental process involving the documentation and
communication of software defects, commonly known as “bugs,” to relevant developers
responsible for correcting them. These bugs are essentially unintended errors or flaws in a
software system that can result in malfunctions, unexpected behaviours, or disruptions to its
normal operation. This practice holds immense significance within the realms of software
development and quality assurance. Whenever users or testers come across bugs while
utilizing a software application, they initiate the creation of bug reports.
Expert testers consider bug reporting nothing less than a skill. We have compiled some tips that will help testers master it better:
1. Provide all the relevant information with the bug report:
Simple sentences should be used to describe the bug.
2. Ensure the bug is reproducible:
While reporting a bug, the tester must ensure that the bug is reproducible. The steps to reproduce the bug must be mentioned. All the prerequisites for the execution of the steps, and any test data details, should be added to the bug.
3. Keep the summary brief but comprehensive:
Try to summarize the issue in a few words. Avoid writing lengthy descriptions of the problem.
4. Report bugs early:
It is important to report bugs as soon as you find them. Reporting a bug early helps the team to fix it early and to deliver the product early.
5. Proofread the report:
Proofread all the sentences and check the issue description for spelling and grammatical errors. If required, one can use third-party tools, e.g. Grammarly. This will help the developer understand the bug without ambiguity or misrepresentation.
6. Documenting intermittent issues:
Not all bugs are reproducible. You must have observed that sometimes a mobile app crashes and you have to restart the app to continue. These types of bugs are not reproducible every time.
In such scenarios, testers should try to make a video of the bug and attach it to the bug report. A video is often more helpful than a screenshot because it includes details of steps that are difficult to document.
7. Avoid duplicate bugs:
While raising a bug, one must ensure that it does not duplicate an already-reported bug. Also, check the list of known and open issues before you start raising bugs. Reporting duplicate bugs can cost developers duplicate effort, thus impacting the testing life cycle.
8. Report unrelated issues separately:
If multiple issues are reported in the same bug, it cannot be closed unless all the issues are resolved. So, separate bugs should be created if the issues are not related to each other.
9. Keep the tone professional:
While documenting the bug, avoid using a commanding tone, harsh words, or making fun of the developer.
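Pulling these tips together, a bug report might be captured as a structured record like the Python sketch below; every field value is an invented example, not a real defect.

    # bug_report.py - the tips above condensed into one record.
    bug_report = {
        "id": "BUG-101",
        "summary": "Login button unresponsive on the checkout page",
        "steps_to_reproduce": [
            "1. Open the checkout page as a logged-out user.",
            "2. Enter valid credentials in the login form.",
            "3. Click the Login button.",
        ],
        "expected": "User is logged in and returned to checkout.",
        "actual": "Nothing happens; no error is shown.",
        "environment": "Chrome 126 on Windows 11, build 2.4.1",
        "attachments": ["login-failure.mp4"],  # video for intermittent issues
        "severity": "High",
        "status": "New",
    }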
Bug reporting can be a complex and challenging process. Closely tied to it is the defect life cycle, which tracks a reported bug from discovery to closure.
Defect Life Cycle, or Bug Life Cycle, in software testing is the specific set of states that a defect or bug goes through in its entire life. The purpose of the defect life cycle is to easily coordinate and communicate the current status of a defect as it changes between various assignees, and to make the defect-fixing process systematic and efficient.
Defect Status - Defect status, or bug status, in the defect life cycle is the present state that the defect or bug is currently in. The goal of defect status is to precisely convey the current state or progress of a defect or bug, in order to better track and understand the actual progress of the defect life cycle.
Defect States Workflow - The number of states that a defect goes through varies from project to project. The lifecycle below covers all possible states:
New: When a new defect is logged and posted for the first time. It is assigned a status
as NEW.
Assigned: Once the bug is posted by the tester, the tester's lead approves the bug and assigns it to the developer team.
Open: The developer starts analyzing and works on the defect fix
Fixed: When a developer makes a necessary code change and verifies the change, he
or she can make bug status as “Fixed.”
Pending retest: Once the defect is fixed, the developer hands the code over to the tester for retesting. Since the retesting remains pending on the tester's end, the status assigned is “pending retest.”
Retest: The tester retests the code at this stage to check whether the defect has been fixed by the developer or not, and changes the status to “Retest.”
Verified: The tester re-tests the bug after it has been fixed by the developer. If no bug is detected in the software, then the bug is fixed and the status assigned is “verified.”
Reopen: If the bug persists even after the developer has fixed the bug, the tester
changes the status to “reopened”. Once again the bug goes through the life cycle.
Closed: If the bug no longer exists, then the tester assigns the status “Closed.”
Duplicate: If the defect is repeated twice or corresponds to the same concept as another bug, the status is changed to “duplicate.”
Rejected: If the developer feels the defect is not a genuine defect, then he or she changes its status to “rejected.”
Deferred: If the present bug is not of prime priority and is expected to get fixed in the next release, then the status “Deferred” is assigned to such bugs.
Not a bug: If it does not affect the functionality of the application, then the status assigned to the bug is “Not a bug.”
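As an illustration, these states can be modelled as a small state machine in Python. The transition table below is only a sketch of one plausible workflow, chosen for this example, since the text notes that the exact states vary from project to project.

    # defect_lifecycle.py - the defect states as a small state machine.
    from enum import Enum

    class DefectStatus(Enum):
        NEW = "New"
        ASSIGNED = "Assigned"
        OPEN = "Open"
        FIXED = "Fixed"
        PENDING_RETEST = "Pending retest"
        RETEST = "Retest"
        VERIFIED = "Verified"
        REOPEN = "Reopen"
        CLOSED = "Closed"
        DUPLICATE = "Duplicate"
        REJECTED = "Rejected"
        DEFERRED = "Deferred"
        NOT_A_BUG = "Not a bug"

    # One plausible set of allowed transitions (project-specific in reality).
    ALLOWED = {
        DefectStatus.NEW: {DefectStatus.ASSIGNED, DefectStatus.DUPLICATE,
                           DefectStatus.REJECTED, DefectStatus.DEFERRED,
                           DefectStatus.NOT_A_BUG},
        DefectStatus.ASSIGNED: {DefectStatus.OPEN},
        DefectStatus.OPEN: {DefectStatus.FIXED},
        DefectStatus.FIXED: {DefectStatus.PENDING_RETEST},
        DefectStatus.PENDING_RETEST: {DefectStatus.RETEST},
        DefectStatus.RETEST: {DefectStatus.VERIFIED, DefectStatus.REOPEN},
        DefectStatus.VERIFIED: {DefectStatus.CLOSED},
        DefectStatus.REOPEN: {DefectStatus.ASSIGNED},
    }

    def move(current, target):
        # Reject transitions the workflow does not allow.
        if target not in ALLOWED.get(current, set()):
            raise ValueError(f"Cannot move {current.value} -> {target.value}")
        return target

    status = move(DefectStatus.NEW, DefectStatus.ASSIGNED)  # OK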