
UNIT III TEST DESIGN AND EXECUTION

Test Objective Identification, Test Design Factors, Requirement identification, Testable


Requirements, Modelling a Test Design Process, Modelling Test Results, Boundary Value
Testing, Equivalence Class Testing, Path Testing, Data Flow Testing, Test Design
Preparedness Metrics, Test Case Design Effectiveness, Model-Driven Test Design, Test
Procedures, Test Case Organization and Tracking, Bug Reporting, Bug Life Cycle

3.1. Test Objective Identification


3.1.1. Objectives of Testing

The objectives of testing are the reasons or purposes for testing, while the
object of testing is the work product to be tested.
Testing objectives can differ depending on several factors:

 The context of the component or system being tested
 The test levels
 The software development life cycle model

Typical objectives of testing include the following.

Prevent defects: One of the objectives of software testing is to avoid mistakes


throughout the development process. The cost and labor associated with error
detection are considerably reduced when faults are detected early. It also saves time.
Defect prevention entails conducting a root cause analysis of previously discovered
flaws and then taking specific steps to prevent the recurrence of those types of faults
in the future.

Evaluate work products: The objectives are used to assess work products such as the
requirement document, design documents, and user stories. A work product should be
reviewed before the developer picks it up for development. Identifying any ambiguous
or contradictory requirements at this stage saves a significant amount of development
and testing time.
Verify Requirements: This objective demonstrates that one of the most important
aspects of testing is to meet the needs of the client. Testers examine the
product and ensure that all of the stipulated requirements are met. Regardless of the
testing technique used, every executed test case confirms some part of the required
functionality.

Validate test objects: Testing ensures that requirements are implemented as well as
that they function as expected by users. This type of testing is known as validation. It
is the process of testing a product after it has been developed. Validation can be done
manually or automatically.

Build confidence: One of the most important goals of software testing is to improve
software quality. A lower number of flaws is associated with high-quality software.

Reduce risk: The probability of loss is sometimes referred to as risk. The goal of
software testing is to lower the likelihood of the risk occurring. Each software project
is unique and has a substantial number of unknowns from several viewpoints. If we
do not control these uncertainties, it will impose possible hazards not only during the
development phases but also during the product’s whole life cycle. As a result, the
major goal of software testing is to incorporate the risk management process as early
as possible in the development phase in order to identify any risks.

Share information to stakeholders: One of the most essential goals of testing is to


provide stakeholders with enough information to allow them to make educated
decisions, particularly on the degree of quality of the test object. The goal of testing
is to offer complete information to stakeholders regarding technological or other
constraints, risk factors, ambiguous requirements, and so on. This information can
take the form of test coverage data or test reports covering specifics such as what is
missing and what went wrong. The goal is to be honest and to ensure that stakeholders
fully grasp the challenges influencing quality.

Find failures and defects: Another critical goal of software testing is to uncover all
flaws in a product. The basic goal of testing is to uncover as many flaws as possible
in a software product while confirming whether or not the application meets the
user’s needs. Defects should be found as early in the testing cycle as feasible.
3.2. TEST OBJECTIVE IDENTIFICATION
1. Analyze project requirements
2. Identify stakeholders and quality standards
3. Define test scope and focus
4. Formulate test criteria and metrics
5. Specify test outcomes and benefits
6. Document test objectives
1. Analyze project requirements
The first step in identifying test objectives is to analyze the project requirements and
understand what the software is supposed to do, how it will be used, and what are the
functional and non-functional requirements. You can use various sources of information,
such as user stories, use cases, specifications, design documents, and customer feedback, to
gather and document the requirements. You should also prioritize the requirements based on
their importance, complexity, and risk.
2. Identify stakeholders and quality standards
The next step is to identify the stakeholders and quality standards that are relevant for your
software testing process. Stakeholders are the people or groups who have an interest or
influence in the software, such as customers, users, developers, managers, regulators, and
testers. Quality standards are the guidelines and criteria that define the expected level of
quality and performance of the software, such as usability, reliability, security, compatibility,
and compliance. You should communicate with the stakeholders and review the quality
standards to understand their expectations, needs, and preferences.

3. Define test scope and focus


The third step is to define the test scope and focus, which are the boundaries and areas of
your software testing process. The test scope determines what features, functions, and
components of the software will be tested, and what will be excluded or deferred. The test
focus determines the aspects, attributes, and perspectives of the software that will be
emphasized, such as functionality, usability, security, or performance. You should define the
test scope and focus based on the project requirements, stakeholders, quality standards, and
available resources.

4. Formulate test criteria and metrics


The fourth step is to formulate the test criteria and metrics, which are the measures and
indicators that will be used to evaluate and report the test results. The test criteria define the
conditions and rules that determine whether the software meets the requirements and quality
standards, such as pass/fail, acceptance, or coverage. The test metrics define the quantitative
and qualitative data that will be collected and analyzed to assess the effectiveness and
efficiency of the software testing process, such as defects, test cases, test execution time, or
customer satisfaction.
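As a sketch of how such metrics might be computed, the following assumes illustrative field names and a hypothetical release criterion (a 95% pass rate with full requirements coverage); none of these values are prescribed by the text:

```python
# Illustrative test-metric calculations. The metric names, sample numbers,
# and release thresholds are assumptions for the sake of the example.

def pass_rate(passed: int, executed: int) -> float:
    """Fraction of executed test cases that passed."""
    return passed / executed if executed else 0.0

def defect_density(defects: int, size_kloc: float) -> float:
    """Defects found per thousand lines of code (KLOC)."""
    return defects / size_kloc if size_kloc else 0.0

def requirements_coverage(covered: int, total: int) -> float:
    """Fraction of requirements exercised by at least one test case."""
    return covered / total if total else 0.0

metrics = {
    "pass_rate": pass_rate(passed=190, executed=200),          # 0.95
    "defect_density": defect_density(defects=12, size_kloc=8.0),
    "coverage": requirements_coverage(covered=48, total=50),   # 0.96
}

# Example pass/fail criterion: release only if the pass rate is at least
# 95% and every requirement is covered by some test case.
release_ok = metrics["pass_rate"] >= 0.95 and metrics["coverage"] == 1.0
```

Here the criterion fails on coverage alone, which is exactly the kind of specific, reportable finding these metrics are meant to produce.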

5. Specify test outcomes and benefits


The fifth step is to specify the test outcomes and benefits, which are the expected results and
value of your software testing process. The test outcomes describe the observable and
verifiable changes and improvements that the software testing process will produce, such as
reduced errors, increased functionality, enhanced usability, or improved performance. The
test benefits describe the tangible and intangible advantages and impacts that the software
testing process will provide, such as increased customer satisfaction, reduced costs, improved
quality, or competitive edge.

6. Document test objectives


The final step is to document the test objectives in a clear, concise, and consistent manner in
your test plan. You should use the SMART criteria to ensure that your test objectives are
specific, measurable, achievable, relevant, and time-bound. You should also align your test
objectives with your project requirements, stakeholders, quality standards, test scope, focus,
criteria, metrics, outcomes, and benefits. You should review and validate your test objectives
with the stakeholders and the test team, and update them as needed throughout the software
testing process.
3.3. TEST DESIGN FACTORS
Test design is a process that describes “how” testing should be done. It includes processes for
identifying test cases by enumerating the steps of the defined test conditions. The testing
techniques defined in the test strategy or test plan are used for enumerating these steps.
 For designing Test Cases the following factors are considered:
1. Correctness
2. Negative
3. User Interface
4. Usability
5. Performance
6. Security
7. Integration
8. Reliability
9. Compatibility

Correctness : Correctness is the minimum requirement of software, the essential purpose of


testing. The tester may or may not know the inside details of the software module under test
e.g. control flow, data flow etc.

Negative : Negative testing checks that the product does not do what it is not supposed to do.

User Interface : In UI testing we check the user interfaces. For example, on a web page we
may check a button's size and shape. We can also check the navigation links.

Usability : Usability testing measures the suitability of the software for its users, and is
directed at measuring the following factors with which specified users can achieve specified
goals in particular environments.
1. Effectiveness : The capability of the software product to enable users to achieve
specified goals with accuracy and completeness in a specified context of use.
2. Efficiency : The capability of the product to enable users to expend appropriate
amounts of resources in relation to the effectiveness achieved in a specified context of use.

Performance : In software engineering, performance testing determines how fast some
aspect of a system performs under a particular workload.

Performance testing can serve various purposes. For example, it can demonstrate that the
system meets its performance criteria.
1. Load Testing: This is the simplest form of performance testing. A load test is usually
conducted to understand the behavior of the application under a specific expected load.

2. Stress Testing: Stress testing focuses on the ability of a system to handle loads beyond
maximum capacity. System performance should degrade slowly and predictably without
failure as stress levels are increased.

3. Volume Testing: Volume testing belongs to the group of non-functional tests.
Volume testing refers to testing a software application for a certain data volume. This volume
can in generic terms be the database size or it could also be the size of an interface file that is
the subject of volume testing.
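The load-testing idea above can be sketched in a few lines. This is a minimal illustration, not a real load-test harness: `handle_request` is a hypothetical stand-in for the system under test, and the worker and request counts are arbitrary.

```python
# Minimal load-test sketch: time an operation under concurrent calls and
# report latency statistics. `handle_request` simulates the system under
# test (an assumption for illustration only).
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload: int) -> int:
    time.sleep(0.01)          # simulate 10 ms of work per request
    return payload * 2

def load_test(workers: int, requests: int) -> dict:
    latencies = []
    def timed_call(i: int) -> None:
        start = time.perf_counter()
        handle_request(i)
        latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(timed_call, range(requests)))
    latencies.sort()
    return {
        "max": latencies[-1],
        "p95": latencies[int(0.95 * (len(latencies) - 1))],
    }

stats = load_test(workers=10, requests=100)
```

A stress test would raise `workers` well past the expected load and check that latency degrades gracefully rather than failing outright; a volume test would instead grow the data each request touches.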

Security : Process to determine that an Information System protects data and maintains
functionality as intended. The basic security concepts that need to be covered by security
testing are the following:
1. Confidentiality : A security measure which protects against the disclosure of
information to parties other than the intended recipient.

2. Integrity: A measure intended to allow the receiver to determine that the


information which it receives has not been altered in transit other than by the
originator of the information.

3. Authentication: A measure designed to establish the validity of a transmission,


message or originator. Allows a receiver to have confidence that the information it
receives originated from a specific known source.
4. Authorization: The process of determining that a requester is allowed to
receive a service/perform an operation.
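A security test for authorization typically pairs positive checks (permitted operations succeed) with negative checks (everything else is denied). The sketch below assumes a hypothetical role/permission table; the roles and operations are illustrative only.

```python
# Minimal role-based authorization sketch. The roles, operations, and
# permission table are illustrative assumptions, not a real system's.
PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_authorized(role: str, operation: str) -> bool:
    """Return True only if the role is known and permits the operation."""
    return operation in PERMISSIONS.get(role, set())

# Security test cases: one positive check plus negative checks, including
# the important case of an unknown role, which must be denied everything.
assert is_authorized("admin", "delete")
assert not is_authorized("viewer", "write")
assert not is_authorized("unknown", "read")
```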

Integration : Integration testing is a logical extension of unit testing. In its simplest form, two
units that have already been tested are combined into a component and the interface between
them is tested.

Reliability : The goal of reliability testing is to monitor a statistical measure of software
maturity over time and compare it to a desired reliability goal.

Compatibility : Compatibility testing is a part of a software's non-functional tests. This
testing is conducted on the application to evaluate the application's compatibility with the
computing environment. Browser compatibility testing can be more appropriately referred to
as user experience testing.

3.4. REQUIREMENT IDENTIFICATION


 Requirement analysis is the crucial initial step in the development process, laying the
groundwork for the entire software project's success. It demands continuous
communication with the stakeholders and their end users. This allows one to set
expectations, manage application errors or issues, and document all critical needs.
 In the software development process, the requirement analysis is the main phase that
allows understanding, documenting, and defining the expectations of the users and
other stakeholders related to the software application. While developing software,
aligning with the business and software requirements is essential.
 It is essential to have detailed information on the requirements of the software
application before initiating its development so that everything is clear between the
needed product (according to the business and software requirements) and the final
result.
 Software requirements refer to the essential needs that the software must meet to
deliver a quality product. In other terms, software requirements are the capabilities
that end users need to achieve the goals of the software application.
 These requirements typically represent users' expectations from the software, which
are crucial and must be fulfilled by the software. Analysis, on the other hand, involves
a systematic and detailed examination of something to gain comprehensive insights
into it.
 Consequently, software requirement analysis means a thorough study, review, and
description of software requirements, ensuring that genuine and necessary
requirements are met to address the problem.
 This process involves various activities, some of which are as follows:
 Problem recognition
 Evaluation and synthesis
 Modeling
 Specification
 Review
3.4.1. Purpose and Prioritization of Requirement Analysis
 From the above explanation, it must be clear that the primary purpose of the
requirement analysis is to gather, evaluate, and document the requirements of the
software application. This, in turn, ensures that the developed software
applications successfully meet the desired expectations and satisfy the needs of
the end users.
 However, it is important to note that not all software requirements are equally
important. Some requirements have a more significant impact than others on
developing quality software applications. For example, one of the top priorities
for a mobile banking app would be ensuring the security of user data and
transactions.
 You can follow steps like gathering requirements, understanding business
objectives, categorizing, defining criteria, evaluating, assigning priority levels,
reviewing, adjusting as needed, communicating priorities, and monitoring and
reassessing throughout the software project. These steps will help you work on the
significant aspects of the software applications and deliver its core functionalities.

3.4.2. Requirement Analysis in Software Development Life Cycle


Requirement analysis is the first phase in the Software Development Life Cycle
(SDLC). This is one of the most important phases, as having clear requirements is
necessary to proceed further in the SDLC. This phase involves identifying,
analyzing, and documenting the expectations of the stakeholders and end users.

3.4.3. Importance of Requirement Analysis in SDLC


In software development, requirement analysis accomplishes the following:
 Clearly defines the necessary features and overall product vision.
 Identifies and clarifies stakeholder expectations for the product.
 Helps avoid disagreements and misunderstandings during the development and
testing stages of the project.
 Ensures the final software product adheres to the specified requirements, thus
avoiding scope creep.
 Minimizes rework and optimizes resources.
 Identifies potential risks and enables risk mitigation.
 Ensures client satisfaction and project success by focusing on client needs.
3.4.4. Structure of Software Requirement
Requirement analysis results in a well-organized document containing all the crucial
details required to create top-notch software applications. The requirement document
follows the below format with different sections:
 Introduction: This includes information on a brief overview of the software
projects, like the project name, document creation date, version number, etc. It
also details the description and scope of the software project, having pertinent
context.
 Purpose and scope: This section describes the purpose and scope of the software
application to be developed. It briefly covers the application's intended use, essential
features, and functionalities.
 Functional requirement: You give brief information on the functionalities of the
software here. For example, it may include details on the inputs, outputs, and any
other constraints. You can also have information on the use cases, user stories, or
scenarios illustrating how each requirement should work.
 Non-functional: When developing software requirements, having information on
non-functional software requirements like performance, security, reliability,
compatibility with different devices, browsers, or operating systems, usability, and
scalability is essential. You must state each of the non-functional requirements of
the software applications, along with its criteria for critical evaluation.
 User requirement: This section of the software requirement document focuses on
the needs and expectations of the end-users. It must have information on the user
interface, user experience requirements, and user interactions.
 System architecture: It includes different relevant diagrams and descriptions of the
components or units of the software applications and their interactions. In other
words, this section will detail information on the high-level system architecture
and its design.
 Data requirement: Here, the document specifies the data needed by the software
application, including data formats, storage, and data handling rules.
 Assumptions and constraints: Any assumptions made during the requirement
analysis process, and any constraints that may affect the project, are documented
in this section. In later sections, we will further discuss the different assumptions
made by stakeholders and others.
 Dependencies: Here, you discuss the need for the external system, software, and
hardware that requires integration with the software application.
 Risk and mitigations: The software requirement document has information on any
potential risk associated with the software application to be developed. Based on
the identified risk, mitigation strategies are developed.
 Traceability matrix: A traceability matrix maps each requirement to its source
(e.g., stakeholder, business objective) and may also link it to design and testing
artifacts.
 Approval and sign-off: The software requirement document also has a section for
stakeholders to approve and sign off. This section is crucial as it shows the
acceptance of the documented scope and objective.
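As a minimal sketch, the traceability matrix described above can be modelled as a mapping from requirement IDs to linked test-case IDs. The IDs here are hypothetical, and a real matrix would also record each requirement's source and design artifacts:

```python
# Sketch of a requirements-to-test-case traceability matrix.
# All requirement and test-case IDs are illustrative assumptions.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],   # two linked test cases
    "REQ-002": ["TC-103"],
    "REQ-003": [],                     # no tests yet -- a coverage gap
}

def uncovered(matrix: dict) -> list:
    """Requirements with no linked test case, to be flagged in reporting."""
    return [req for req, tcs in matrix.items() if not tcs]

gaps = uncovered(traceability)   # ["REQ-003"]
```

Keeping the matrix as data like this makes coverage gaps mechanically checkable rather than something reviewers must spot by eye.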
3.4.5. Assumptions of Requirement Analysis
In software requirement analysis, stakeholders make certain assumptions to better understand
the software development process, describing and verifying the software requirement. Using
assumptions, you can simplify the analysis and decision-making process in software
application development. Here are some of the common assumptions made:
 Stable requirement: It is assumed that the initial set of requirements provided by
stakeholders represents a stable and comprehensive understanding of their needs.
However, it's essential to acknowledge that requirements can evolve and change over
time.
 Clear communication: In requirement analysis, it is assumed that communication
between the business analyst, development and testers team, and other stakeholders is
clear.
 Consistent terminology: The terminology used in the requirement analysis is
assumed to be consistent throughout the documentation.
 Feasibility: The software requirement is feasible even with the time, resources, and
technology involved in developing software applications.
 User representative: The sample of users or user representatives involved in
requirement analysis adequately represents the broader user base.
 Budget and schedule: The budget and schedule of the software application are clearly
defined and appropriate to meet the requirements of the end-users.
 Stakeholder availability: Assumptions are made that all the stakeholders involved in
structuring and analyzing software requirements are available to give their valuable
inputs.
 Regulatory compliance: It is assumed that the software applications follow the
regulatory standards of the software industry.
 Non-functional requirement clarity: It is assumed that stakeholders clearly understand
the non-functional requirements of the software applications.
 Prioritization: Assumed that requirements prioritization accurately shows their
importance to the software project's success.
 Scope boundaries: The boundaries and scope of the software requirements are nicely
defined and agreed upon by all stakeholders.
 User involvement: The users involved in the software requirement analysis actively
associate and give their feedback to check if the requirement meets their exact needs
and expectations.

3.4.6. Stakeholders Involved in Requirement Analysis


During the process of making the software requirement analysis, different stakeholders are
involved, and each plays a vital role in gathering, defining, and validating the software
requirements. The main stakeholders are
 Clients/Customers: They are the most crucial stakeholders in analyzing the
requirements of the software. The clients or customers can be individuals or
organizations who own the software application. They have specific software
application needs or want their application to solve particular problems. Hence, the
client or customer provides an initial overview of the software application to be
developed and its high-level requirements.
 Users: In requirement analysis, addressing the factors impacting the user experience is
essential. Here, the users are the individuals or groups who will directly interact with
the developed software. Understanding their needs and expectations is essential to
address functional and non-functional requirements effectively.
 Business Analysts: They are the interactive body between clients and development
teams. Business analysts are responsible for eliciting, analyzing, and documenting
requirements.
 Domain Experts: Domain experts have good knowledge about industry practices,
business processes, or specific domains where the software applications will be
utilized. Their valuable insights ensure that requirements align with real-world needs.
 Project Managers: Project managers are responsible for requirement analysis as they
handle and manage all aspects of software development, including requirement
analysis. They also ensure project progress, meet deadlines, and adhere to allocated
budgets.
 Software Architects: Software architects have a crucial role in requirement analysis as
they contribute by designing the complete structure and components/units of the
software applications. They also address the feasibility and scalability during
requirement analysis.
 Developers/Programmers: Developers play a crucial role in creating software
applications by engaging in software requirement analysis. Their deep understanding
of these requirements is vital for the project's success.
 Quality Assurance (QA) Team: The QA team or the software testers are the
individuals who test the developed software application to identify and fix the bugs.
They ensure that both specified requirements and quality standards are met during the
validation and verification stages.
 Regulatory/Compliance Officers: In developing software applications, specific
standards or legal regulations must be applied. For this, regulatory/compliance
officers ensure adherence to such guidelines when defining software requirements.
 Marketing/Sales Representatives: In some cases, marketing/sales representatives may
gather valuable market insights and identify end-user needs that can impact software
requirements and their analysis.
 End Users' Representatives: For large-scale projects or software applications serving
many end users, representatives may act as voices for the collective user base.
Effectively engaging and communicating with these stakeholders is crucial to accurately
capture software requirements that align with the project's overall goals. While establishing
software requirements, we require methods that accurately capture, interpret, and
communicate customers' preferences. Let us see the significance of communication during
requirement analysis.
3.4.7. Significance of Communication in Requirement Analysis
 In the requirement analysis process, the project team comes together to understand
project objectives, clarify expectations, and record the necessary specifications and
attributes of the product. Accomplishing all of this hinges on clear, unambiguous
communication among team members.

 In gathering requirements, the project team must talk with the other invested parties,
including the owner and end users. This interaction serves to establish their
expectations with respect to distinct functionalities.

 Early and frequent exchanges among these parties guard against vagueness. They
ensure the final product aligns with the end user's or client's requirements and avert
the need for users to recalibrate their expectations.

3.5. TESTABLE REQUIREMENTS


A testable requirement describes a single function or behavior of an application in a way that
makes it possible to develop tests to determine whether the requirement has been met. To be
testable, a requirement must be clear, measurable, and complete, without any ambiguity.

For example, assume that you are planning to test a web shopping application. You are
presented with the following requirement: “Easy-to-use search for available inventory.”
Testing this requirement as written requires assumptions about what is meant by ambiguous
terms such as “easy-to-use” and “available inventory.” To make requirements more testable,
clarify ambiguous wording such as “fast,” “intuitive” or “user-friendly.” Requirements
shouldn’t contain implementation details such as “the search box will be located in the top
right corner of the screen,” but otherwise should be measurable and complete. Consider the
following example for a web shopping platform:

“When at least one matching item is found, display up to 20 matching inventory items, in a
grid or list and using the sort order according to the user preference settings.”

This requirement provides details that lead to the creation of tests for boundary cases, such as
no matching items, 1 or 2 matching items, and 19, 20 and 21 matching items. However, this
requirement describes more than one function. It would be better practice to separate it into
three separate requirements, as shown below:

 When at least one matching item is found, display up to 20 matching inventory items
 Display search results in a grid or list according to the user preference settings
 Display search results in the sort order according to the user preference settings
The principle of one function per requirement increases agility. In theory, it would be
possible to release the search function itself in one sprint, with the addition of the ability to
choose a grid/list display or a sort order in subsequent sprints.
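The boundary cases above (no matches, 1 or 2 matches, and 19, 20, 21 matches) map directly onto test data. The sketch below assumes a stand-in `displayed_count` function in place of the real search and display logic:

```python
# Boundary-value checks for "display up to 20 matching inventory items".
# `displayed_count` is a hypothetical stand-in for the real search logic.
MAX_DISPLAY = 20

def displayed_count(matching_items: int) -> int:
    """Number of items the results page should show."""
    return min(matching_items, MAX_DISPLAY)

# Boundary values around zero matches and around the 20-item cap,
# as suggested by the requirement's wording.
cases = {0: 0, 1: 1, 2: 2, 19: 19, 20: 20, 21: 20}
for matches, expected in cases.items():
    assert displayed_count(matches) == expected, (matches, expected)
```

Note how a precise requirement ("up to 20") yields expected outputs mechanically; the original "easy-to-use search" wording offers nothing to assert against.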
Testable requirements should not include the following:
 Text that is irrelevant. Just as you can’t judge a book by the number of words, length
by itself is not a sign of a testable requirement. Remove anything that doesn’t add to
your understanding of the requirement.
 A description of the problem rather than the function that solves it.
 Implementation details. For implementation details such as font size, color, and
placement, consider creating a set of standards that apply to the entire project rather
than repeating the standards in each individual requirement.
 Ambiguity. Specifications should be specific. Avoid subjective terms that can’t be
measured, such as “usually.” Replace these with objective, measurable terms such as
“80%.”
3.5.1. Five Techniques for Creating Testable Requirements
Documenting user requirements is always a challenging phase in software development, as
there are no standard processes or notations. However, communication and facilitation skills
can make this activity easier.
Here are five techniques that can be used for converting user stories into testable
requirements.
1. Mind maps
Mind mapping is a graphical technique of taking notes and visualizing thoughts using a
radiant structure. One of the core values of agile is interaction, so when the team is talking
about requirements, using mind maps for documentation can help capture the context of the
conversation.
The shapes, colors, and other properties of the map help participants in the conversation
remember the situation. A mind map can be a context-embedded memo, just like handwritten
story cards.
2. Process workflow
If user stories involve a workflow of some kind, the item can usually be broken into
individual steps. By dividing up a large user story, you can improve your understanding of
the functionality and your ability to estimate. It will also be easier for a product owner to
make decisions about priority.
Some workflow steps may not be important right now and can be moved to future sprints.
This will certainly limit the functionality of the application, but it does allow a team to review
the completed functionality at the end of the sprint, test it, and use the feedback to make
changes.
3. Brainstorming
Brainstorming, one of the most powerful techniques, is a team or individual creative activity
to find a solution to a problem. For example, teams can brainstorm about the various options
for platforms available to host the application under test.
4. Alternate flows
This technique is useful when there are many flows and it is hard to break down large user
stories based on functionality alone. In that case, it helps to ask how a piece of functionality
is going to be tested. Which scenarios have to be checked in order to know if the functionality
works?
Sometimes, test scenarios are complicated because of the work involved to set up the tests
and work through them. If a test scenario is not very common to begin with or does not
present a high enough risk, a product owner can decide to skip the functionality for the time
being and focus on test scenarios that deliver more value. In other cases, test scenarios can be
simplified to cover the most urgent issues.
5. Decision tables
User stories often involve a number of roles that perform parts of certain functionalities.
These groups, in turn, operate on certain sets of test data to determine the expected output of
a particular functionality. By breaking up that functionality according to the roles that perform
specific requirements, we understand more clearly what functionality is needed and can
estimate the work involved more accurately.
The ways in which requirements are captured have a direct bearing on a project’s cost, time,
and quality. Implement these five approaches to ensure more effective requirements gathering
in your testing.
3.6. Modelling a Test Design Process
 Create State: The creator of a test case, also referred to as the owner, is responsible
for initiating the creation of the test case and putting it into this initial state. The
creator initializes the following required variables associated with the test case:
requirement_ids, tc_id, tc_title, originator_group, creator, and test_category. The test
case is expected to validate the requirements listed in the "requirement_ids" field.
 Test automation involves executing test scripts automatically, handling test data,
and using the results to improve software quality. It is like a quality checker for the
whole team that helps ensure the software is correct and bug-free. However, test
automation is only valuable with suitable test case design techniques; otherwise we
will run test cases numerous times without catching the bugs in our software.
Therefore, testers or developers must first work out practical and reliable test case
design techniques.
3.6.1. Basics of Test Case Design Techniques
Test cases are pre-defined sets of instructions describing the steps to be carried out to
determine whether or not the end product produces the desired result. These instructions may
contain predefined sets of conditions and inputs and their expected results. Test cases are the
essential building blocks that, when put together, construct the testing phase. Designing these
test cases can be time-consuming and may leave some bugs uncaught if not done
appropriately.
Various test case design techniques help us design robust and reliable test cases that cover
all of our software's features and functions. These design techniques involve multiple
measures that aim to guarantee the effectiveness of test cases in finding bugs or other
defects in the software.
Test case design requires a clever approach to identify missing requirements and faults without
wasting time or resources. In other words, solid test case design techniques create reusable
and concise tests over the application's lifetime. These techniques also streamline how test
cases are written so as to provide the highest code coverage.
3.6.2. Types of Test Case Design Techniques
The test case design techniques are classified into various types based on their use cases. This
classification helps automation testers or developers determine the most effective techniques
for their products. Generally, test case design techniques are separated into three main types.
Specification-Based or Black-Box Techniques
Specification-based testing, also known as black-box testing, is a technique that concentrates
on testing the software system based on its functioning without any knowledge regarding the
underlying code or structure. This technique is further classified as:
 Boundary Value Analysis (BVA)
 Equivalence Partitioning (EP)
 Decision Table Testing
 State Transition Diagrams
 Use Case Testing
Structure-Based or White-Box Techniques
Structure-Based testing, or White-Box testing, is a test case design technique for testing that
focuses on testing the internal architecture, components, or the actual code of the software
system. It is further classified into five categories:
 Statement Coverage
 Decision Coverage
 Condition Coverage
 Multiple Condition Coverage Testing
 All Path Coverage Testing
Experience-Based techniques
As the name suggests, Experience-Based testing is a technique for testing that requires actual
human experience to design the test cases. The outcomes of this technique are highly
dependent on the knowledge, skills, and expertise of the automation tester or developer
involved. It is broadly classified into the following types:
 Error Guessing
 Exploratory Testing
3.7. Modelling Test Results
After a test case has been designed or selected, its execution status is automatically reset to
its initial state, untested. The test result is moved into the invalid state if the test case is not
legitimate for the software version currently being tested. While in the untested state, the
identifier of the test suite is recorded in a variable called test_suite_id. Once execution of a
test case has begun, the test result may transition into one of the following states: passed,
failed, invalid, or stopped. If execution is finished and the test case meets the pass criteria,
a test engineer moves the test result from the untested state to the passed state.
If test execution is finished and the fail criteria are met, a test engineer moves the test result
from the untested state to the failed state and associates the defect with the test case by
initializing the defect_ids field.
When a new build is obtained that contains a remedy for the bug, the test case needs to be
rerun so that the change can be verified. In the event that the reexecution is finished and the
conditions for passing the test are met, the test outcome will be changed to the passed state.
If it is not feasible to completely carry out the test case's instructions, the test result is set
to the stopped state. The defect number that prevents the test case from being executed is, if
known, entered into the defect_ids field. When a new build that addresses a stalled test case
is obtained, the test case may be reexecuted.
The test result is considered to have passed when it is shifted into the passed state provided
that the processing has been finished and the conditions for passing have been met. If, on the
other hand, it meets the requirements for failing the test, the outcome of the test will be
shifted to the failed state.
If the execution does not succeed because of a new stopping defect, the test result will
continue to be in the stopped state, and the defect_ids field will be updated to include the new
defect that prevented the test case from being run
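The state transitions described above can be sketched as a small table-driven model. This is an illustrative Python sketch; the state names follow the text, but the class and its API are assumptions:

```python
# Allowed test-result transitions, following the states in the text.
VALID_TRANSITIONS = {
    "untested": {"passed", "failed", "invalid", "stopped"},
    "failed":   {"untested"},            # a fix arrives: rerun the test case
    "stopped":  {"passed", "failed", "stopped"},
    "passed":   set(),
    "invalid":  set(),
}

class TestResult:
    def __init__(self, test_suite_id):
        self.test_suite_id = test_suite_id
        self.state = "untested"          # execution status starts unverified
        self.defect_ids = []

    def move_to(self, new_state, defect_id=None):
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        if defect_id is not None:
            self.defect_ids.append(defect_id)  # correlate the defect
        self.state = new_state

result = TestResult("TS-7")
result.move_to("failed", defect_id="BUG-101")  # fail criteria met
result.move_to("untested")                     # new build obtained: rerun
result.move_to("passed")                       # fix verified
print(result.state, result.defect_ids)         # → passed ['BUG-101']
```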
3.8. Equivalence Class Testing
 Equivalence Class Testing is a technique that assists the team in getting accurate and
expected results within a limited period of time while covering large input scenarios.
 Equivalence Class Testing, also known as Equivalence Class Partitioning (ECP) or
Equivalence Partitioning, is an important software testing technique in which the
team of testers groups and partitions the test input data into a number of different
classes.
 These different classes represent the specified requirements and the common behaviour or
attribute(s) of the aggregated inputs. Test cases are then designed and created based on
each class's attribute(s), and one element or input from each class is used for test
execution to validate the software's functioning; this simultaneously validates the similar
working of the software product for all the other inputs present in the respective classes.
Features of Equivalence Class Testing:
Equivalence class testing can be termed as a logical step in the model of functional
testing. It improves the quality of test cases, which further enhances the quality of testing,
by removing the vast amount of redundancy and gaps that appear in the boundary value
testing. Other features of this testing technique are:
 It is a black box testing technique which restricts the testers to examining the
software product externally.
 Also known by the name of equivalence class partitioning, it is used to form
groups of test inputs of similar behaviour or nature.
 Based on this approach, if one member of a class works well then the whole
family is considered to function well, and if one member fails, the whole family is
rejected.
 Test cases are based on classes, not on every input, thereby reducing the time and
effort required to build a large number of test cases.
 It may be used at any level of testing i.e. unit, integration, system & acceptance.
 It is good to go for the ECT, when the input data is available in terms of intervals
and sets of discrete values.
 However, there is no strict rule to use only one input from each class. Based
on experience and need, a tester may opt for more than one input.
 It may result in a significant decrease in redundant test cases, if
implemented properly.
 It may not work well with the boolean or logical types variables.
 A mixed combination of Equivalence class testing and boundary value testing
produces effective results.
 The fundamental concept of equivalence class testing/partition comes from the
equivalence class, which further comes from equivalence relations.
Equivalence Class Testing Types:
The equivalence class testing can be categorized into four different types, which are an
integral part of testing and cater to different data sets. These types of equivalence class
testing are:
 Weak Normal Equivalence Class Testing: In this first type of equivalence class
testing, one variable from each equivalence class is tested by the team. Moreover, the
values are identified in a systematic manner. Weak normal equivalence class testing is
also known as single fault assumption.
 Strong Normal Equivalence Class Testing: Termed multiple fault assumption, in
strong normal equivalence class testing the team selects test cases from each element
of the Cartesian product of the equivalence classes. This ensures the notion of
completeness in testing, as it covers all equivalence classes and offers the team every
possible combination of inputs.
 Weak Robust Equivalence Class Testing: Like weak normal equivalence, weak
robust testing too tests one variable from each equivalence class. However, unlike the
former method, it is also focused on testing test cases for invalid values.
 Strong Robust Equivalence Class Testing: Another type of equivalence class
testing, strong robust testing produces test cases for all valid and invalid elements of
the product of the equivalence class. However, it is incapable of reducing the
redundancy in testing.
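The difference between weak normal and strong normal selection can be illustrated with a short sketch. The two inputs and their representative class values below are assumptions chosen only for the example:

```python
from itertools import product

# One representative value per valid equivalence class (assumed values).
classes_a = [5, 50]           # input A has two valid classes
classes_b = ["x", "y", "z"]   # input B has three valid classes

# Weak normal (single fault assumption): one value from each class,
# paired up; the number of tests equals the largest class count.
weak = [(classes_a[i % len(classes_a)], classes_b[i % len(classes_b)])
        for i in range(max(len(classes_a), len(classes_b)))]

# Strong normal (multiple fault assumption): the Cartesian product,
# covering every combination of class representatives.
strong = list(product(classes_a, classes_b))

print(len(weak))    # → 3
print(len(strong))  # → 6
```

Note how the strong normal set grows multiplicatively with the number of classes, which is exactly the completeness (and the cost) described above.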
Advantages & Disadvantages of Equivalence Class Testing:
Equivalence class testing or equivalence partitioning plays a potent role in reducing
redundancy in testing and making the process agile and powerful. It is among those
testing techniques that offer numerous benefits to the team and ensure compliance of the
product with customer requirements. However, there are a few drawbacks associated with
this type of testing, which are listed below along with its various advantages.
Advantages:
 Equivalence class testing helps reduce the number of test cases, without
compromising the test coverage.
 Reduces the overall test execution time as it minimizes the set of test data.
 It can be applied to all levels of testing, such as unit testing, integration testing,
system testing, etc.
 Enables the testers to focus on smaller data sets, which increases the probability of
uncovering more defects in the software product.
 It is used in cases where performing exhaustive testing is difficult but at the same
time maintaining good coverage is required.
Disadvantages:
 The whole success of equivalence class testing relies on the identification of
equivalence classes, which in turn relies on the ability of the testers who create
these classes and the test cases based on them.
 In the case of complex applications, it is very difficult to identify all sets of
equivalence classes and requires a great deal of expertise from the tester’s side.
 Incorrectly identified equivalence classes can lead to lesser test coverage and the
possibility of defect leakage.
Example
Consider an example of an application that accepts a numeric number as input with a value
between 10 to 100 and finds its square. Now, using equivalence class testing, we can create
the following equivalence classes-
Equivalence Class     Explanation
Numbers 10 to 100     This class will include test data for a positive scenario.
Numbers 0 to 9        This class will include test data that is restricted by the application,
                      since it is designed to work with numbers 10 to 100 only.
Greater than 100      This class will again include test data that is restricted by the
                      application, but this time to test the upper limit.
Negative numbers      Since negative numbers may be treated differently, we create a
                      separate class for them in order to check the robustness of the
                      application.
Alphabets             This class will be used to test the robustness of the application with
                      non-numeric characters.
Special characters    Just like the equivalence class for alphabets, we can have a separate
                      equivalence class for special characters.
Identification of Equivalence Classes
 Cover all test data types for positive and negative test scenarios. We have to create
test data classes in such a way that covers all sets of test scenarios but at the same
time, there should not be any kind of redundancy.
 If there is a possibility that the test data in a particular class can be treated differently
then it is better to split that equivalence class.
 For example, in the above example, the application doesn’t work with numbers – less
than 10. So, instead of creating 1 class for numbers less than 10, we created two
classes – numbers 0-9 and negative numbers. This is because there is a possibility that
the application may handle negative numbers differently.
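One representative input per class from the table above is enough to exercise every partition. The following is a minimal Python sketch; the classify() helper stands in for the application under test and is an assumption, as are the representative values:

```python
# Stand-in for the application: accepts numbers 10 to 100 only.
def classify(value):
    text = str(value)
    if not text.lstrip("-").isdigit():
        return "rejected"            # alphabets and special characters
    n = int(text)
    return "accepted" if 10 <= n <= 100 else "rejected"

# One representative per equivalence class (values are illustrative).
representatives = {
    "numbers 10 to 100":  55,
    "numbers 0 to 9":     4,
    "greater than 100":   150,
    "negative numbers":   -7,
    "alphabets":          "a",
    "special characters": "@",
}

for name, rep in representatives.items():
    print(f"{name}: {rep} -> {classify(rep)}")
```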
3.9. Equivalence Partitioning (EP)
In the Equivalence Partitioning technique for testing, the entire range of input data is split
into separate partitions. All imaginable test cases are assessed and divided into logical sets of
data named classes. One random test value is selected from each class during test execution.
The notion behind this design technique is that a test of one representative value of a class is
equivalent to a test of any other value of the same class. It allows us to identify invalid as
well as valid equivalence classes.
Let’s understand this technique for designing test cases with an example. Here, we will cover
the same example of validating the user age in the input form before registering. The test
conditions and expected behavior of the testing will remain the same as in the last example.
But now we will design our test cases based on the Equivalence Partitioning.
Test case design using Equivalence Partitioning:
To test the functionality of the user age from the input form (i.e., it must accept the age
between 18 to 59, both inclusive; otherwise, produce an error alert), we will first find all the
possible similar types of inputs to test and then place them into separate classes. In this case,
we can divide our test cases into three groups or classes:
Age < 18 – Invalid – (e.g. 1, 2, 3, 4, …, 17)
18 <= Age <= 59 – Valid – (e.g. 18, 19, 20, …, 59)
Age > 59 – Invalid – (e.g. 60, 61, 62, 63, …)
[Figure: user age validation input form]
These are far too many test cases to execute, aren't they? But here lies the beauty of
equivalence partitioning. We have infinite test values to pick from, but we only need to test
one value from each class. This reduces the number of tests we need to perform while
maintaining our test coverage. So we perform only a definite number of tests: the test value is
picked randomly from each class, and the expected behavior is tracked for each input.
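The three partitions above need only one randomly picked value each. A small Python sketch follows; validate_age() stands in for the input form's check and is an assumption:

```python
import random

# Stand-in for the form's age check: 18 to 59 inclusive is valid.
def validate_age(age):
    return 18 <= age <= 59

# One random representative per partition, with the expected verdict.
partitions = {
    "age < 18 (invalid)":      (random.randint(1, 17), False),
    "18 <= age <= 59 (valid)": (random.randint(18, 59), True),
    "age > 59 (invalid)":      (random.randint(60, 120), False),
}

for name, (value, expected) in partitions.items():
    assert validate_age(value) == expected, name
    print(f"{name}: tested with {value}")
```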
3.10. State Transition
 State Transition Testing is a type of software testing which is performed to check the
change in the state of the application under varying input. The condition of input
passed is changed and the change in state is observed.
 State Transition Testing is basically a black box testing technique that is carried out to
observe the behavior of the system or application for different input conditions passed
in a sequence. In this type of testing, both positive and negative input values are
provided and the behavior of the system is observed.
 State Transition Testing is basically used where different system transitions are
needed to be tested.
The objective of State Transition testing is:
 To test the behavior of the system under varying input.
 To test the dependency on the values in the past.
 To test the change in transition state of the application.
 To test the performance of the system.
Transition States:
 Change Mode: When this mode is activated then the display mode moves from
TIME to DATE.
 Reset: When the display mode is TIME or DATE, then reset mode sets them to
ALTER TIME or ALTER DATE respectively.
 Time Set: When this mode is activated, display mode changes from ALTER
TIME to TIME.
 Date Set: When this mode is activated, display mode changes from ALTER
DATE to DATE.
State Transition Diagram:
State Transition Diagram shows how the state of the system changes on certain inputs.
It has four main components:
 States
 Transition
 Events
 Actions
Advantages of State Transition Testing:
 State transition testing helps in understanding the behavior of the system.
 State transition testing gives the proper representation of the system behavior.
 State transition testing covers all the conditions.
Disadvantages of State Transition Testing:
 State transition testing cannot be performed everywhere.
 State transition testing is not always reliable.
Example 1:
 Let’s consider an ATM system function where if the user enters the invalid
password three times the account will be locked.
 In this system, if the user enters a valid password in any of the first three attempts, the
user will be logged in successfully. If the user enters an invalid password in the first
or second try, the user will be asked to re-enter the password. And finally, if the user
enters an incorrect password the 3rd time, the account will be blocked.
State transition diagram
 In the diagram whenever the user enters the correct PIN he is moved to Access
granted state, and if he enters the wrong password he is moved to next try and if he
does the same for the 3rd time the account blocked state is reached.
 State Transition Table
State                 Correct PIN   Incorrect PIN
S1) Start             S5            S2
S2) 1st attempt       S5            S3
S3) 2nd attempt       S5            S4
S4) 3rd attempt       S5            S6
S5) Access Granted    –             –
S6) Account blocked   –             –
 In the table when the user enters the correct PIN, state is transitioned to S5 which is
Access granted. And if the user enters a wrong password he is moved to next state. If he
does the same 3rd time, he will reach the account blocked state
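The ATM example above maps naturally onto a tiny state machine. This is an illustrative Python sketch; the class and state names are assumptions, though the states mirror S1-S6 in the table:

```python
# Sketch of the ATM login: three wrong PINs block the account.
class ATMLogin:
    MAX_ATTEMPTS = 3

    def __init__(self, correct_pin):
        self.correct_pin = correct_pin
        self.attempts = 0
        self.state = "start"                         # S1

    def enter_pin(self, pin):
        if self.state in ("access_granted", "account_blocked"):
            return self.state                        # terminal states
        if pin == self.correct_pin:
            self.state = "access_granted"            # S5
        else:
            self.attempts += 1
            if self.attempts >= self.MAX_ATTEMPTS:
                self.state = "account_blocked"       # S6
            else:
                self.state = f"attempt_{self.attempts}"  # S2-S4
        return self.state

atm = ATMLogin(correct_pin="1234")
print(atm.enter_pin("0000"))  # → attempt_1
print(atm.enter_pin("0000"))  # → attempt_2
print(atm.enter_pin("0000"))  # → account_blocked
```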
Example 2:
In the flight reservation login screen, consider you have to enter correct agent name and
password to access the flight reservation application.
 It gives you access to the application with the correct password and login name, but what
if you enter the wrong password?
 The application allows three attempts, and if the user enters the wrong password at the 4th
attempt, the system closes the application automatically.
 The state graph helps you determine the valid transitions to be tested. In this case, testing
with the correct password and with an incorrect password is compulsory. For the test
scenarios, any one of logging in on the 2nd, 3rd, or 4th attempt could be tested.
 You can use State Table to determine invalid system transitions.
 In a State Table, all the valid states are listed on the left side of the table, and the
events that cause them on the top.
 Each cell represents the state system will move to when the corresponding event
occurs.
 For example, while in state S1, if you enter a correct password you are taken to state S6
(Access Granted). If you enter the wrong password at the first attempt, you will be taken
to state S3 or 2nd Try.
 Likewise, you can determine all other states.
 Two invalid states are highlighted using this method. Suppose you are in state S6 that
is you are already logged into the application, and you open another instance of flight
reservation and enter valid or invalid passwords for the same agent. System response
for such a scenario needs to be tested.
3.11. Exploratory Testing
Exploratory Testing is a type of software testing in which the tester is free to select any
possible methodology to test the software. It is an unscripted approach to software testing. In
exploratory testing, testers use their personal learning, knowledge, skills, and abilities to test
the software. Exploratory testing checks the functionality and operations of the software and
identifies functional and technical faults in it. The aim of exploratory testing is to optimize
and improve the software in every possible way. The exploratory testing technique combines
the experience of testers with a structured approach to testing.
Why use Exploratory Testing?
 Random and unstructured testing: Exploratory testing is unstructured in nature and
thus can help reveal bugs that would have gone undiscovered during structured phases of
testing.
 Testers can play around with user stories: With exploratory testing, testers can
annotate defects and add assertions and voice memos; in this way, the user story is
converted to a test case.
 Facilitate agile workflow: Exploratory testing helps formalize the findings and document
them automatically. Everyone can participate in exploratory testing with the help of
visual feedback thus enabling the team to adapt to changes quickly and facilitating agile
workflow.
 Reinforce traditional testing process: Using tools for automated test case
documentation testers can convert exploratory testing sequences into functional test
scripts.
 Speeds up documentation: Exploratory testing speeds up documentation and creates an
instant feedback loop.
 Export documentation to test cases: By integrating exploratory testing with tools like
Jira, recorded documentation can be directly exported to test cases.
When to use Exploratory Testing?
 When need to learn quickly about the application: Exploratory testing is beneficial for the
scenarios when a new tester enters the team and needs to learn quickly about the
application and provide rapid feedback.
 Review from a user perspective: It comes in handy when there is a need to review
products from a user perspective.
 Early iteration required: Exploratory testing is helpful in scenarios when an early
iteration is required as the teams don’t have much time to structure the test cases.
 Testing mission-critical applications: Exploratory testing ensures that the tester doesn’t
miss the edge cases that can lead to critical quality failures.
 Aid unit test: Exploratory testing can be used to aid unit tests, document the test cases,
and use test cases to test extensively during the later sprints.
Types of Exploratory Testing
There are 3 types of exploratory testing:
 Freestyle: In freestyle exploratory testing, the application is tested in an ad-hoc way,
there is no maximum coverage, and there are no rules to follow for testing. It is done
in the following cases:
o When there is a need to get friendly with the application.
o To check other test engineers’ work.
o To perform smoke tests quickly.
 Strategy Based: Strategy-based testing can be performed with the help of multiple
testing techniques like decision-table testing, cause-effect graphing, boundary value
analysis, equivalence partitioning, and error guessing. It is done by an experienced
tester who has known the application for a long time.
 Scenario Based: Scenario-based exploratory testing is done on the basis of scenarios,
such as end-to-end test scenarios. The scenarios can be provided by the user or can be
prepared by the test team.
Exploratory Testing Process
The following 4 steps are involved in the exploratory testing process:
1. Learn: This is the first phase of exploratory testing in which the tester learns about the
faults or issues that occur in the software. The tester uses his/her knowledge, skill, and
experience to observe and find what kind of problem the software is suffering from. This is
the initial phase of exploratory testing. It also involves different new learning for the tester.
2. Test Case Creation: When the fault is identified i.e. tester comes to know what kind of
problem the software is suffering from then the tester creates test cases according to defects
to test the software. Test cases are designed by keeping in mind the problems end users can
face.
3. Test Case Execution: After the creation of test cases according to end-user problems, the
tester executes the test cases. Execution of test cases is a prominent phase of any testing
process. This includes the computational and operational tasks performed by the software in
order to get the desired output.
4. Analysis: After the execution of the test cases, the result is analyzed and observed whether
the software is working properly or not. If the defects are found then they are fixed and the
above three steps are performed again. Hence this whole process goes on in a cycle and
software testing is performed.
Exploratory Testing vs Automated Testing
Below are the differences between exploratory testing and automated testing:
Parameters                    Exploratory Testing                              Automated Testing
Documentation                 No need to maintain documentation.               Proper documentation is required.
Test cases                    Test cases are determined during testing.        Test cases are determined in advance.
Is testing reproducible?      Testing cannot be reproduced; only defects       Testing can be reproduced.
                              can be reproduced.
Investment in documentation   There is no investment in preparing              There is a significant investment in preparing
                              documentation.                                   documentation and test scripts.
Spontaneity                   Spontaneous; directed by requirements and by     Well-planned and directed from requirements.
                              exploring during testing.
Best Practices for Exploratory Testing
 Understand the customer: For effective exploratory testing, it is important to understand
the customer’s viewpoint and expectations properly. End users browse the same software
in different ways based on age, gender preferences, and other factors. Testers must be
able to approach the software from all those user perspectives.
 Aim of testing should be clear: For effective exploratory testing, it is very important for
the testers to have a clear mindset and have clarity on the mission of testing. Testers
should maintain clear notes on what needs to be tested, and why it needs to be tested.
 Proper documentation: It is important to make proper notes and documents, and to
monitor test coverage, risk, the test execution log, issues, and queries.
 Tracking of issues: The tester should maintain a proper record of questions and issues
raised during testing.
Challenges of Exploratory Testing
 Replication of failure: In exploratory testing replication of failure to identify the cause
is difficult.
 Difficult to determine the best test case: In exploratory testing, determining the best
test case to execute or to determine the best tool to use can be challenging.
 Difficult to document all events: During exploratory testing documentation of all
events is difficult.
 Difficult reporting: Reporting test results is difficult in exploratory testing as the
report does not have well-planned test scripts to compare with the outcome.
Advantages of Exploratory Testing
 Less preparation required: It takes no preparation as it is an unscripted testing
technique.
 Finds critical defects: Exploratory testing involves an investigation process that helps
to find critical defects very quickly.
 Improves productivity: In exploratory testing, testers use their knowledge, skills, and
experience to test the software. It helps to expand the imagination of the testers by
executing more test cases, thus enhancing the overall quality of the software.
 Generation of new ideas: Exploratory testing encourages creativity and intuition thus
the generation of new ideas during test execution.
 Catch defects missed in test cases: Exploratory testing helps to uncover bugs that are
normally ignored by other testing techniques.
Disadvantages of Exploratory Testing
 Tests cannot be reviewed in advance: In exploratory testing, Testing is performed
randomly so once testing is performed it cannot be reviewed.
 Dependent on the tester’s knowledge: In exploratory testing, the testing is dependent
on the tester’s knowledge, experience, and skill. Thus, it is limited by the tester’s
domain knowledge.
 Difficult to keep track of tests: In Exploratory testing, as testing is done in an ad-hoc
manner, thus keeping track of tests performed is difficult.
 Not possible to repeat test methodology: Due to the ad-hoc nature of exploratory
testing, tests are done randomly; it is therefore not suitable for longer execution
times, and it is not possible to repeat the same test methodology.
3.12. Boundary value analysis
Boundary Value Analysis is based on testing the boundary values of valid and invalid
partitions. The behavior at the edge of the equivalence partition is more likely to be
incorrect than the behavior within the partition, so boundaries are an area where testing is
likely to yield defects.
It checks for the input values near the boundary that have a higher chance of error. Every
partition has its maximum and minimum values and these maximum and minimum values
are the boundary values of a partition.
 A boundary value for a valid partition is a valid boundary value.
 A boundary value for an invalid partition is an invalid boundary value.
 For each variable we check:
 Minimum value.
 Just above the minimum value.
 Nominal (average) value.
 Just below the maximum value.
 Maximum value.
Example: Consider a system that accepts ages from 18 to 56.
Boundary Value Analysis (Age accepts 18 to 56)

Invalid      Valid                                    Invalid
(min - 1)    (min, min + 1, nominal, max - 1, max)    (max + 1)
17           18, 19, 37, 55, 56                       57
Valid Test cases: Valid test cases for the above can be any value entered greater than 17
and less than 57.
Enter the value- 18.
Enter the value- 19.
Enter the value- 37.
Enter the value- 55.
Enter the value- 56.
Invalid Test cases: When any value less than 18 or greater than 56 is entered.
Enter the value- 17.
Enter the value- 57.
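The valid and invalid test values above follow directly from the boundary rules. Here is a small Python sketch; bva_values() and accepts_age() are assumed helpers introduced only for illustration:

```python
# Boundary values for a closed range [minimum, maximum].
def bva_values(minimum, maximum):
    nominal = (minimum + maximum) // 2
    valid = [minimum, minimum + 1, nominal, maximum - 1, maximum]
    invalid = [minimum - 1, maximum + 1]
    return valid, invalid

# Stand-in for the system under test: ages 18 to 56 are accepted.
def accepts_age(age):
    return 18 <= age <= 56

valid, invalid = bva_values(18, 56)
print(valid)    # → [18, 19, 37, 55, 56]
print(invalid)  # → [17, 57]
assert all(accepts_age(v) for v in valid)
assert not any(accepts_age(v) for v in invalid)
```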
Example 2:
This is a simple but popular functional testing technique. Here, we concentrate on input
values and design test cases with input values that are on or close to boundary values.
Experience has shown that such test cases have a higher probability of detecting a fault in
the software. Suppose there is a program ‘Square’ which takes ‘x’ as an input and prints
the square of ‘x’ as output. The range of ‘x’ is from 1 to 100. One possibility is to give all
values from 1 to 100 one by one to the program and observe the behaviour. We have
to execute this program 100 times to check every input value. In boundary value analysis,
we select values on or close to boundaries and all input values may have one of the
following:
(i) Minimum value
(ii) Just above minimum value
(iii) Maximum value
(iv) Just below maximum value
(v) Nominal (Average) value
These values are shown in Figure for the program ‘Square’

These five values (1, 2, 50, 99 and 100) are selected on the basis of boundary value
analysis and give reasonable confidence about the correctness of the program. There is no
need to select all 100 inputs and execute the program one by one for all 100 inputs. The
number of test cases selected by this technique is 4n + 1, where ‘n’ is the number of input variables.
One nominal value is selected which may represent all values which are neither close to
boundary nor on the boundary. Test cases for ‘Square’ program are given in Table 2.1.

EXAMPLE 3:
Consider a program for the determination of division of a student based on the marks in
three subjects. Its input is a triple of positive integers (say mark1, mark2, and mark3) and
values are from interval [0, 100].
The division is calculated according to the following rules:
Marks Obtained (Average)      Division
75 – 100                      First Division with Distinction
60 – 74                       First Division
50 – 59                       Second Division
40 – 49                       Third Division
0 – 39                        Fail
Total marks obtained are the average of marks obtained in the three subjects i.e. Average
= (mark1 + mark 2 + mark3) / 3
The program output may have one of the following words: [Fail, Third Division, Second
Division, First Division, First Division with Distinction] Design the boundary value test
cases.
Solution: The boundary value test cases are given in Table 2.4
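The division rule can be sketched as the following function (a hypothetical implementation; the threshold comparisons on the average are an assumption, and the full Table 2.4 boundary cases, which vary each mark in turn, are not reproduced here):

```python
# Sketch of the division rule under test (thresholds on the average are assumed).
def division(mark1, mark2, mark3):
    average = (mark1 + mark2 + mark3) / 3
    if average >= 75:
        return "First Division with Distinction"
    if average >= 60:
        return "First Division"
    if average >= 50:
        return "Second Division"
    if average >= 40:
        return "Third Division"
    return "Fail"

# a few boundary checks on the average
assert division(75, 75, 75) == "First Division with Distinction"
assert division(74, 74, 74) == "First Division"
assert division(59, 59, 59) == "Second Division"
assert division(40, 40, 40) == "Third Division"
assert division(39, 39, 39) == "Fail"
```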
EXAMPLE 4: Equivalence and Boundary Value
• Let’s consider the behavior of Order Pizza Text Box Below
• Pizza values 1 to 10 is considered valid. A success message is shown.
• Values 11 to 99 are considered invalid for an order, and an error message will
appear: “Only 10 Pizza can be ordered”
• Here is the test condition
1. Any number greater than 10 entered in the Order Pizza field (let’s say 11) is considered
invalid.
2. Any number less than 1, that is, 0 or below, is considered invalid.
3. Numbers 1 to 10 are considered valid.
4. Any 3-digit number, say -100, is invalid.
 We cannot test all the possible values because, if we did, the number of test
cases would be more than 100. To address this problem, we use the equivalence
partitioning hypothesis, where we divide the possible order values into groups
or sets, as shown below, where the system behaviour can be considered the same.

 The divided sets are called Equivalence Partitions or Equivalence Classes. Then
we pick only one value from each partition for testing. The hypothesis behind this
technique is that if one condition/value in a partition passes all others will also
pass. Likewise, if one condition in a partition fails, all other conditions in that
partition will fail.
 Boundary Value Analysis– in Boundary Value Analysis, you test boundaries
between equivalence partitions

 In our earlier equivalence partitioning example, instead of checking one value for
each partition, you will check the values at the partitions like 0, 1, 10, 11 and so
on. As you may observe, you test values at both valid and invalid boundaries.
Boundary Value Analysis is also called range checking.
 Equivalence partitioning and boundary value analysis(BVA) are closely related
and can be used together at all levels of testing.
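The Order Pizza behaviour above can be sketched in runnable form (the function name and messages are illustrative), combining one representative per partition with the boundary picks 0, 1, 10, and 11:

```python
# Order Pizza field (sketch): valid partition 1-10, invalid partitions <1 and >10.
def order_pizza(quantity):
    if 1 <= quantity <= 10:
        return "success"
    return "Only 10 Pizza can be ordered"

# one representative per equivalence partition ...
assert order_pizza(5) == "success"
assert order_pizza(-100) == "Only 10 Pizza can be ordered"

# ... plus the partition boundaries (boundary value analysis): 0, 1, 10, 11
for qty, expected in [(0, "Only 10 Pizza can be ordered"), (1, "success"),
                      (10, "success"), (11, "Only 10 Pizza can be ordered")]:
    assert order_pizza(qty) == expected
```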

3.13. Decision Table Testing


A Decision Table is a technique that demonstrates the relationship between the inputs
provided, rules, output, and test conditions. In test case design techniques, this decision
table is a very efficient way for complex test cases. The decision table allows automation
testers or developers to inspect all credible combinations of conditions for testing.
True (T) and False (F) values signify whether each condition is satisfied.

Decision table testing is a test case design technique that examines how the software
responds to different input combinations. In this technique, various input combinations or
test cases and the concurrent system behavior (or output) are tabulated to form a decision
table. That’s why it is also known as a Cause/Effect table, as it captures both the cause
and effect for enhanced test coverage.
Automation testers or developers mainly use this technique to make test cases for
complex tasks that involve lots of conditions to be checked. To understand the Decision
table testing technique better, let’s consider a real-world example to test an upload image
feature in the software system.
Test Name: Test upload image form.
Test Condition: The upload option must accept only JPEG images, and only those smaller
than 1 MB in size.
Expected Behaviour: If the input is not a JPEG image or is not smaller than 1 MB, the
system must show an “invalid input” alert; otherwise, it must accept the upload.
Test Cases using Decision Table Testing:
Based upon our testing conditions, we should test our software system for the following
conditions:

 The uploaded Image must be in JPEG format.


 Image size must be less than 1 Mb.
If any of the conditions are not satisfied, the software system will display an “invalid
input” alert, and if all requirements are met, the image will be correctly uploaded.
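The resulting decision table has two conditions and four rules, which can be sketched as follows (the function name and the way conditions are passed as booleans are illustrative):

```python
# Decision table for the upload-image example: two conditions -> four rules.
def validate_upload(is_jpeg, under_1mb):
    # the image is accepted only when both conditions hold
    return "uploaded" if (is_jpeg and under_1mb) else "invalid input"

rules = [  # (JPEG?, <1 MB?, expected action) - one column of the decision table each
    (True,  True,  "uploaded"),
    (True,  False, "invalid input"),
    (False, True,  "invalid input"),
    (False, False, "invalid input"),
]
for jpeg, size_ok, expected in rules:
    assert validate_upload(jpeg, size_ok) == expected
```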

3.14. Use Case Testing


As the name suggests, in Use case testing, we design our test cases based on the use cases
of the software or application depending on the business logic and end-user
functionalities. It is a black box testing technique that helps us identify test cases that
constitute part of the whole system on a transaction basis from beginning to end.

A use case testing technique serves the following objective:

 Manages scope conditions related to the project.


 Depicts different manners by which any user may interact with the system or
software.
 Visualizes the system architecture.
 Permits assessment of the potential risks and system dependencies.
 Communicates complex technical necessities to relevant stakeholders easily.
Let’s understand the Use case testing technique using our last example of a mobile
passcode verification system. Before moving on to the test case, let’s first assess the use
cases for this particular system for the user.

 The user may unlock the device on his/her first try.


 Someone may attempt to unlock the user’s device with the wrong passcode three
consecutive times. Provide a cooling period in such a situation to avoid brute-
force attacks.
 The device must not accept a passcode while the cooling period is active.
 The user should be able to unlock the device after the expiry of the cooling period
by entering the correct passcode.
Now, analyzing these use cases can help us primarily design test cases for our system.
These test cases can be either tested manually or automatically.
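The four use cases above can be exercised against a small sketch of the system (class name, messages, and the three-attempt cooling rule are assumptions for illustration):

```python
# Sketch of the passcode use cases; names and the cooling rule are illustrative.
class PasscodeLock:
    MAX_ATTEMPTS = 3

    def __init__(self, passcode):
        self.passcode = passcode
        self.failed = 0
        self.cooling = False

    def attempt(self, code):
        if self.cooling:
            return "rejected: cooling period active"
        if code == self.passcode:
            self.failed = 0
            return "unlocked"
        self.failed += 1
        if self.failed >= self.MAX_ATTEMPTS:
            self.cooling = True          # start cooling to block brute-force attacks
        return "wrong passcode"

    def expire_cooling(self):
        self.cooling = False
        self.failed = 0

lock = PasscodeLock("1234")
assert lock.attempt("1234") == "unlocked"            # use case 1: unlock on first try
for _ in range(3):                                   # use case 2: three wrong attempts
    lock.attempt("0000")
assert lock.attempt("1234").startswith("rejected")   # use case 3: cooling active
lock.expire_cooling()
assert lock.attempt("1234") == "unlocked"            # use case 4: unlock after expiry
```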

3.15. Path Testing


Path testing is a testing technique in which the tester ensures that every path of the
application is executed at least once, i.e., all paths in the program source code are tested
at least once. The tester can use a control flow graph to perform this type of testing.
Steps to Performing Basis Path Testing
Step #1) Code Interpretation:
Start by carefully comprehending the code you want to test. Learn the program’s logic by
studying the source code and identifying its control structures (such as loops and
conditionals).

Step #2) Construction of a Control Flow Graph (CFG):


For the program, create a Control Flow Graph (CFG). The CFG graphically illustrates the
program’s control flow, with nodes standing in for fundamental code blocks and edges for the
movement of control between them.

Step #3) Calculating the Cyclomatic Complexity:


Determine the program’s cyclomatic complexity (CC). Based on the CFG, Cyclomatic
Complexity is a numerical indicator of a program’s complexity. The formula is used to
calculate it:

CC = E – N + 2P

Where:

The CFG has E edges in total.

The CFG has N nodes in total.

P is the CFG’s connected component count.

Understanding the upper limit of the number of paths that must be tested to achieve complete
path coverage is made easier by considering cyclomatic complexity.

Step #4) Determine Paths:


Determine every possible route through the CFG. This entails following the control’s path
from its point of entry to its point of exit while taking into account all potential branch
outcomes.

When determining paths, you’ll also take into account loops, nested conditions, and recursive
calls.

Step #5) Path counting:


List every route through the CFG. Give each path a special name or label so you can keep
track of which paths have been tested.
Step #6) Test Case Design:
Create test plans for each path that has been determined. Make test inputs and circumstances
that will make the program take each path in turn. Make sure the test cases are thorough and
cover all potential paths.

Step #7) Run the Test:

Put the test cases you created in the previous step to use. Keep track of the paths taken during
test execution as well as any deviations from expected behavior.

Step #8) Coverage Evaluation:

Analyze the coverage achieved by testing. Track which paths have been tested and which
ones have not, using the path labels or identifiers.

Step #9) Analysis of Cyclomatic Complexity:

Compare the number of paths covered to the program’s cyclomatic complexity. Ideally, the
number of independent paths tested should match the Cyclomatic Complexity value.

Step #10) Find Unexplored Paths:

Identify any paths that the executed test cases did not cover. These are CFG paths that have
not been exercised, suggesting that there may be untested code in these areas.

Step #11) Improve and Iterate:

Create more test cases to cover any uncovered paths. To ensure complete path coverage, this
might entail improving already-existing test cases or developing brand-new ones.

Step #12) Re-execution:

Run the modified or additional test cases again to cover the remaining paths.

Step #13) Examine and Validate:

Examine the test results to confirm that all possible paths have been taken. Make sure the
code responds as anticipated in all conceivable control flow scenarios.

Step #14) Report and Supporting Materials:

Document the path coverage attained, the cyclomatic complexity, and any problems or flaws
found during testing. This documentation is useful for quality control reports and upcoming
testing initiatives.
Example
An example may help for better understanding. Let’s say we want to perform basis path
testing on a basic block of code. First, we will create our control flow graph.

x = 0
print(x)

if x > 10
    print('try again')
else
    print('success')
end
The control flow graph could look something like this:

However, we know that we often have a more complicated scenario than that. Let’s show
what happens when we have a compound statement.

x = 0

while x < 10
    if x > 2
        print(x)
    else
        print('x is less than 3')
    end
    x += 1
end
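Tracing this loop in runnable form (a sketch in Python) shows which branch each iteration takes, which is exactly the information a control flow graph encodes:

```python
# Trace which branch of the compound statement executes on each loop iteration.
x = 0
trace = []
while x < 10:
    if x > 2:
        trace.append("print(x)")           # then-branch
    else:
        trace.append("x is less than 3")   # else-branch
    x += 1

# iterations with x = 0, 1, 2 take the else-branch; x = 3..9 take the then-branch
assert trace.count("x is less than 3") == 3
assert trace.count("print(x)") == 7
```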
For step 2, we will determine a baseline path using our control flow graph. Let’s say the
most likely path looks like this:
This would be our first test case. We can use a simple metric called cyclomatic
complexity to determine how many test cases we need for full branch coverage.

Path Coverage testing is a structured testing technique for designing test cases with
intention to examine all possible paths of execution at least once.

Creating and executing tests for all possible paths results in 100% statement coverage and
100% branch coverage.
In this type of testing, every statement in the program is guaranteed to be executed at
least once. The flow graph and cyclomatic complexity are used to arrive at the basis path set.
Cyclomatic Complexity
 Cyclomatic Complexity is a software metric used to indicate the complexity of a
program.
 Cyclomatic Complexity refers to the number of minimum test cases for a white
boxed code which will cover every execution path in the flow.
For example:
if a>b
//do something
else
//do something else
In the above code, the cyclomatic complexity is 2, as a minimum of 2 test cases is
needed to cover all the possible execution paths in the given code.
Cyclomatic Complexity is computed in one of three ways:
1. The numbers of regions of the flow graph correspond to the Cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph G is defined as
V(G) = E – N + 2
where E = number of flow graph edges and N = is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a graph flow G is also defined as
V(G) = P + 1
Where P = number of predicate nodes contained in the flow graph G.
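For the if/else fragment above, both formulas agree. The node and edge counts below assume a four-node flow graph (the predicate, the two branch blocks, and the join):

```python
# if a > b ... else ...: predicate node, two branch nodes, one join node.
# Edges: predicate->then, predicate->else, then->join, else->join.
E, N, P = 4, 4, 1           # edges, nodes, predicate nodes

assert E - N + 2 == 2       # V(G) = E - N + 2
assert P + 1 == 2           # V(G) = P + 1
```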

Advantages of Path Testing:


 Path Testing helps reducing redundant tests
 Focus on program logic
 Test cases will execute every statement in a program at least once
Path Coverage = Number of Paths Covered / Total Number of Paths
Example of Path Coverage Testing
Read A
Read B
IF A+B > 50 THEN
Print "Large"
ENDIF
If A+B<50 THEN
Print "Small"
ENDIF
Path Coverage ensures covering of all the paths from start to end.
All possible paths are-
1-3-5-7
1-3-5-6-8
1-2-4-5-6-8
1-2-4-5-7

So, there are 4 paths in total; executing test cases that cover all four achieves 100% path coverage.
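The fragment can be exercised in runnable form (a sketch). Note that the syntactic path which prints both "Large" and "Small" is infeasible, since A + B cannot be both greater than and less than 50:

```python
# Runnable sketch of the Read A / Read B fragment.
def classify(a, b):
    printed = []
    if a + b > 50:          # decision 1
        printed.append("Large")
    if a + b < 50:          # decision 2
        printed.append("Small")
    return printed

assert classify(40, 20) == ["Large"]   # takes the first THEN, skips the second
assert classify(10, 10) == ["Small"]   # skips the first, takes the second THEN
assert classify(25, 25) == []          # a + b == 50: neither THEN is taken
```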

3.16. Data Flow Testing


Data flow testing is a white-box testing technique that examines the data flow with
respect to the variables used in the code. It examines the initialization of variables
and checks their values at each instance.

Types of data flow testing


There are two types of data flow testing:
 Static data flow testing: The declaration, usage, and deletion of the variables
are examined without executing the code. A control flow graph is helpful in
this.
 Dynamic data flow testing: The variables and data flow are examined with
the execution of the code.
Advantages of data flow testing
Data flow testing helps catch different kinds of anomalies in the code. These
anomalies include:
 Using a variable without declaration
 Deleting a variable without declaration
 Defining a variable two times
 Deleting a variable without using it in the code
 Deleting a variable twice
 Using a variable after deleting it
 Not using a variable after defining it
Disadvantages of data flow testing
A few disadvantages of data flow testing are:

 Good knowledge of programming is required for proper testing


 Expensive
 Time consuming
Techniques of data flow testing
Data flow testing can be done using one of the following two techniques:

a) Control flow graph


b) Making associations between data definition and usages
a. Control flow graph
A control flow graph is a graphical representation of the flow of control, i.e., the
order of statements in which they will be executed.

 Junction Node – a node with more than one arrow entering it.
 Decision Node – a node with more than one arrow leaving it.
 Region – area bounded by edges and nodes (area outside the graph is also
counted as a region.).
Below are the notations used while constructing a flow graph :

Sequential Statements

If – Then – Else –
Do – While

While – Do
Switch – Case –
Example-1:Consider the following piece of pseudo-code:

1. input(x)
2. if(x>5)
3. z = x + 10
4. else
5. z = x - 5
6. print("Value of Z: ", z)
In the above piece of code, if the value of x entered by the user is greater than 5, then
the order of execution of statements would be:

1, 2, 3, 6

If the value entered by the user in line 1 is less than or equal to 5, the order of
execution of statements would be:

1, 2, 4, 5, 6

Hence, the control flow graph of the above piece of code will be:

Using the above control flow graph and code, we can deduce the following def/use table,
which lists the node at which each variable is defined and the nodes at which it is used:

Variable    Defined at node    Used at nodes
x           1                  2, 3, 5
z           3, 5               6

We can use this table to ensure that no anomaly occurs in the code by running multiple
checks, e.g., that each variable is defined before it is used.

b. Making associations
In this technique, we make associations between two kinds of statements:

 Where variables are defined


 Where those variables are used
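The associations that follow refer to a code fragment that is not reproduced in this text. A plausible reconstruction from the associations themselves (statement numbers in the comments; the exact control structure in the original may differ) is:

```python
# Reconstructed fragment for the def-use associations below (statement numbers in comments).
def example(x):              # 1: read x;
    if x > 0:                # 2: if (x > 0)   - p-use of x
        a = x + 1            # 3: a = x + 1    - c-use of x, definition of a
    else:
        if x <= 0:           # 4: if (x <= 0)  - p-use of x
            while x < 1:     # 5: if (x < 1)   - p-use of x (loop test)
                x = x + 1    # 6: x = x + 1    - c-use and redefinition of x
            a = x + 1        # 7: a = x + 1    - c-use of x, definition of a
    return a                 # 8: print a      - c-use of a

assert example(5) == 6    # path 1-2-3-8
assert example(-1) == 2   # path 1-2-4-5-6-5-6-5-7-8
```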

 (1, (2, t), x), (1, (2, f), x)- This association is made with statement 1 (read x;) and
statement 2 (If(x>0)) where x is defined at line number 1, and it is used at line
number 2 so, x is the variable.
 Statement 2 is logical, and it can be true or false that's why the association is
defined in two ways; one is (1, (2, t), x) for true and another is (1, (2, f), x) for
false.
 (1, 3, x)- This association is made with statement 1 (read x;) and statement 3 (a=
x+1) where x is defined in statement 1 and used in statement 3. It is a computation
use.
 (1, (4, t), x), (1, (4, f), x)- This association is made with statement 1 (read x;) and
statement 4 (If(x<=0)) where x is defined at line number 1 and it is used at line
number 4 so x is the variable. Statement 4 is logical, and it can be true or false
that's why the association is defined in two ways one is (1, (4, t), x) for true and
another is (1, (4, f), x) for false.
 (1, (5, t), x), (1, (5, f), x)- This association is made with statement 1 (read x;) and
statement 5 (if (x<1)) where x is defined at line number 1, and it is used at line
number 5, so x is the variable.
 Statement 5 is logical, and it can be true or false that's why the association is
defined in two ways; one is (1, (5, t), x) for true and another is (1, (5, f), x) for
false.
 (1, 6, x)- This association is made with statement 1 (read x;) and statement 6
(x=x+1). x is defined in statement 1 and used in statement 6. It is a computation
use.
 (1, 7, x)- This association is made with statement 1 (read x) and statement 7
(a=x+1). x is defined in statement 1 and used in statement 7 when statement 5 is
false. It is a computation use.
 (6, (5, t), x), (6, (5, f), x)- This association is made with statement 6 (x=x+1;) and
statement 5 (if (x<1)) because x is defined in statement 6 and used in statement 5.
Statement 5 is logical, and it can be true or false; that's why the association is
defined in two ways: (6, (5, t), x) for true and (6, (5, f), x) for false. It is a
predicate use.
 (6, 6, x)- This association is made with statement 6 which is using the value of
variable x and then defining the new value of x.
 x=x+1
 x= (-1+1)
 Statement 6 uses the value of variable x, which is -1, and then defines the new value
of x [x = (-1 + 1) = 0], which is 0.
 (3, 8, a)- This association is made with statement 3(a= x+1) and statement 8 where
variable a is defined in statement 3 and used in statement 8.
 (7, 8, a)- This association is made with statement 7(a=x+1) and statement 8 where
variable a is defined in statement 7 and used in statement 8.
Now, there are two types of uses of a variable:

 Predicate use (p-use): the use of a variable is called a p-use when its value is used
to decide the flow of the program, e.g., line 2.
 Computational use (c-use): the use of a variable is called a c-use when its value is
used to compute another variable or the output, e.g., line 3.
Data Flow Testing Coverage Metrics:
 All Definition Coverage: Encompassing “sub-paths” from each definition to
some of their respective uses, this metric ensures a comprehensive examination
of variable paths, fostering a deeper understanding of data flow within the code.
 All Definition-C Use Coverage: Extending the coverage spectrum, this metric
explores “sub-paths” from each definition to all their respective C uses, providing
a thorough analysis of how variables are consumed within the code.
 All Definition-P Use Coverage: Delving into precision, this metric focuses on
“sub-paths” from each definition to all their respective P uses, ensuring a
meticulous evaluation of data variable paths with an emphasis on precision.
 All Use Coverage: Breaking through type barriers, this metric covers “sub-paths”
from each definition to every respective use, regardless of their types. It offers a
holistic view of how data variables traverse through the code.
 All Definition Use Coverage: Elevating simplicity, this metric focuses on “simple
sub-paths” from each definition to every respective use. It streamlines the
coverage analysis, offering insights into fundamental data variable interactions
within the code
 The strongest of these criteria is all def-use paths. This includes all p- and c-uses.
Definition of a variable is the occurrence of the variable at which a value is bound to it. In the
above code, the value gets bound in the first statement and then starts to flow.

o If (x > 0) is statement 2, in which the value of x is bound.
Its associations are (1, (2, t), x) and (1, (2, f), x).
o a = x + 1 is statement 3, bound with the value of x.
Its association is (1, 3, x).

All definitions coverage

(1, (2, f), x), (6, (5, f) x), (3, 8, a), (7, 8, a).

Predicate use (p-use)


If the value of a variable is used to decide an execution path, it is considered a predicate use
(p-use). In control flow statements, a predicate can take one of two outcomes: true or false.

Statement 4, if (x <= 0), is a predicate use because it can evaluate to true or false. If it is true,
the path through statement 5 (if (x < 1)) and statement 6 (x = x + 1) is executed; otherwise,
the else path is executed.

Computation use (c-use)


If the value of a variable is used to compute a value for output or to define another
variable, it is a computation use (c-use).

Statement 3 a= x+1 (1, 3, x)


Statement 7 a=x+1 (1, 7, x)
Statement 8 print a (3, 8, a), (7, 8, a).

These are computation uses because the value of x is used to compute a, and the value of a
is used for output.

All c-use coverage

(1, 3, x), (1, 6, x), (1, 7, x), (6, 6, x), (6, 7, x), (3, 8, a), (7, 8, a).

All c-use some p-use coverage

(1, 3, x), (1, 6, x), (1, 7, x), (6, 6, x), (6, 7, x), (3, 8, a), (7, 8, a).

All p-use some c-use coverage

(1, (2, f), x), (1, (2, t), x), (1, (4, t), x), (1, (4, f), x), (1, (5, t), x), (1,
(5, f), x), (6, (5, f), x), (6, (5, t), x), (3, 8, a), (7, 8, a).
3.17. Test Design Preparedness Metrics

The following metrics can be used to represent the level of preparedness of test design:
1. Preparation Status of Test Cases (PST):

A test case can go through a number of phases or states, such as draft and review, before
it is released as a valid and useful test case.

Thus it is useful to periodically monitor the progress of test design by counting the test
cases lying in different states of design – create, draft, review, released and deleted.

It is expected that all the planned test cases that are created for a particular project
eventually move to the released state before the start of test execution.

2. Average Time Spent (ATS) in Test Case Design:

It is useful to know the amount of time it takes for a test case to move from its initial
conception, that is, create state, to when it is considered to be usable, that is, released
state.

This metric is useful in allocating time to the test preparation activity in a subsequent test
project. Hence it is useful in test planning.

3. Number of Available Test (NAT) Cases:

This is the number of test cases in the released state from the existing projects.

Some of these test cases are selected for regression testing in the current test project.

4. Number of Planned Test (NPT) Cases:

This is the number of test cases that are in the test suite and ready for execution at the
start of system testing.

This metric is useful in scheduling test execution. As testing continues, new, unplanned
test cases may be required to be designed.

A large number of new test cases compared to NPT suggests that the initial planning was
not accurate.

5. Coverage of a Test Suite (CTS):

This metric gives the fraction of all requirements covered by a selected number of test
cases or a complete test suite.
The CTS is a measure of the number of test cases needed to be selected or designed to
have good coverage of system requirements.
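The CTS computation can be sketched as follows (the requirement and test-case names, and the traceability mapping, are illustrative):

```python
# CTS sketch: fraction of requirements exercised by the selected test cases.
requirements = {"R1", "R2", "R3", "R4"}
traceability = {"TC1": {"R1", "R2"},    # which requirements each test case covers
                "TC2": {"R2", "R3"}}

covered = set().union(*traceability.values())
cts = len(covered & requirements) / len(requirements)
assert cts == 0.75   # R4 is not covered by any selected test case
```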
3.18. Test Case Design Effectiveness
In the world of software testing, test case design is a crucial aspect that can significantly
impact the effectiveness of the testing process. An effective test case not only uncovers
defects but also helps ensure comprehensive test coverage and efficient testing efforts.
This article will explore the art of test case design, providing insights and best practices
for crafting effective test scenarios that contribute to the overall success of your software
testing strategy.
1. Understanding The Importance Of Test Case Design
Test case design is the process of defining the conditions, inputs, and expected results for
individual test scenarios. A well-designed test case serves several purposes:
• Ensures comprehensive test coverage: Test cases should cover all critical aspects of
the application, including functional, non-functional, and integration requirements.
• Reduces testing time and effort: By focusing on the most critical and relevant
scenarios, test case design helps streamline testing efforts and reduces the time needed
for execution.
• Improves communication and collaboration: Test cases serve as a common language
for testers, developers, and other stakeholders, enabling them to understand and discuss
application requirements and expected behavior.
2. Best Practices For Crafting Effective Test Cases
To create effective test cases, QA teams should follow these best practices:
• Focus on user requirements: Ensure that test cases are designed based on user
requirements and cover all critical aspects of the application, including functional, non-
functional, and integration requirements.
• Keep test cases simple and concise: Test cases should be easy to understand, with
clear and concise descriptions, steps, inputs, and expected results.
• Consider positive and negative scenarios: Design test cases that cover both positive
(expected behavior) and negative (unexpected behavior) scenarios to ensure
comprehensive test coverage.
• Incorporate boundary and edge cases: Boundary and edge cases often reveal defects in
the application, so be sure to include these scenarios in your test case design.
• Use a consistent format and structure: Adopt a consistent format and structure for test
cases, making them easier to read, understand, and maintain.

3. What Are The Techniques For Effective Test Case Design?


Several techniques can be used to design effective test cases, including:
• Equivalence Partitioning: This technique involves dividing input data into equivalent
partitions and selecting representative test cases from each partition. By doing so, it
reduces the number of test cases while maintaining comprehensive test coverage.
• Boundary Value Analysis: This technique focuses on testing the boundary values of
input data, as these values often reveal defects in the application.
• State Transition Testing: This technique is particularly useful for applications with
multiple states and state transitions, as it focuses on testing the transitions between states
and their associated actions.
• Decision Table Testing: This technique involves creating decision tables that outline
different input combinations and their expected outcomes, helping to ensure
comprehensive test coverage.

4. Continuously Review And Refine Test Cases


As the application evolves, test cases should be reviewed and updated to ensure they
remain relevant and effective. This includes:
• Regularly reviewing test cases: Review test cases to ensure they are up-to-date and
cover the latest application requirements and changes.
• Updating test cases as needed: Update test cases to reflect changes in the application,
user requirements, or testing priorities.
• Retiring obsolete test cases: Remove test cases that are no longer relevant or needed,
helping to streamline the testing process and reduce testing effort.
• Effective test case design is an essential aspect of a successful software testing
strategy. By following best practices and leveraging various test case design techniques,
QA teams can craft test scenarios that ensure comprehensive test coverage, reduce
testing time and effort, and improve communication and collaboration among
stakeholders.
• In conclusion, the art of test case design plays a critical role in the overall success of
software testing efforts. By focusing on user requirements, keeping test cases simple and
concise, considering both positive and negative scenarios, incorporating boundary and
edge cases, and maintaining a consistent format and structure, QA teams can create test
cases that contribute to the efficient and thorough testing of their applications.
Additionally, by continuously reviewing and refining test cases as the application
evolves, teams can ensure that their testing efforts remain effective and relevant in the
face of changing requirements and priorities.
• Embracing the art of test case design is crucial for organizations looking to optimize
their software testing processes and deliver high-quality products that meet the needs and
expectations of their end-users. As the software development landscape continues to
evolve, mastering the art of test case design will be a key factor in staying competitive
and ensuring the delivery of exceptional software experiences. With the right approach to
test case design and a commitment to continuous improvement, QA teams can play a
vital role in helping their organizations achieve long-term success in the ever-changing
world of software development.
3.19. Model Driven Test Design (MDTD)
MDTD is built on the idea that designers will become more effective and efficient if
they can raise the level of abstraction. This approach breaks down the testing into a
series of small tasks that simplify test generation. Then test designers isolate their
tasks and work at a higher level of abstraction by using mathematical engineering
structures to design test values independently of the details of the software or design
artifacts, test automation, and Test Execution.

Different phases in MDTD


MDTD is carried out in 4 different phases. Each type of activity requires different
skills, background knowledge, education, and training. It is better to use different
sets of people depending on the situation.

 Test Design — This can be done either criteria-based, where test values are
designed to satisfy coverage criteria or other engineering goals, or human-based,
where test values are designed from domain knowledge of the program and
human knowledge of testing, which is comparatively harder. This is the most
technical part of the MDTD process, so it is better to use experienced developers
in this phase.
 Test Automation — This involves embedding test values into executable scripts.
Test cases are defined based on the test requirements, and test values are chosen
so that a larger part of the application can be covered with fewer test cases. Deep
domain knowledge is not needed in this phase; however, technically skilled
people are.
 Test Execution — The test engineer runs the tests and records the results in this
activity. Unlike the previous activities, test execution does not require a high
skill set in technical knowledge, logical thinking, or domain knowledge. Since
this phase is considered comparatively low risk, junior or intern engineers can be
assigned to execute the process, but monitoring and log-collection activities
should rely on automation tools.
 Test Evaluation — The process of evaluating the results and reporting to
developers. This phase is comparatively harder, and testers are expected to have
knowledge of the domain, testing, user interfaces, and psychology.
The diagram below shows the steps and activities involved in MDTD.
The test automation process makes regression testing faster than manual testing and
avoids the risk of missing previously passed test cases in the current testing cycle.
MDTD defines a simple framework to automate the testing process in a structured
manner.
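As a minimal sketch of how designed test values end up embedded in an automated script, consider the example below. The `classify_triangle` program and its test values are illustrative assumptions, not part of MDTD itself:

```python
def classify_triangle(a, b, c):
    """Program under test: classify a triangle by its side lengths."""
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Test values chosen in the Test Design phase (one per output class,
# plus the boundary case a + b == c), embedded as an executable suite
# in the Test Automation phase.
test_suite = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 4), "isosceles"),
    ((3, 4, 5), "scalene"),
    ((1, 2, 3), "not a triangle"),  # boundary: a + b == c
]

def run_tests():
    # Test Execution phase: run every case and record the outcome.
    results = []
    for args, expected in test_suite:
        actual = classify_triangle(*args)
        results.append((args, expected, actual, actual == expected))
    return results
```

Comparing `expected` with `actual` in the recorded results corresponds to the Test Evaluation phase.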

3.20. TEST PROCEDURES

A test procedure is the procedural aspect of a given test: usually a set of detailed
instructions for the setup and step-by-step execution of one or more test cases. The
test procedure is captured in both test scenarios and test scripts.
Here's a general outline of the typical testing procedure:

Requirement Analysis:

Understand and analyze the software requirements to identify testable features and
criteria.
Test Planning:

Develop a comprehensive test plan that outlines the testing approach, scope,
resources, schedule, and deliverables.
Test Case Design:

Create detailed test cases based on the requirements and specifications. Test cases
should cover various scenarios, including normal and edge cases.
Test Environment Setup:

Prepare the necessary test infrastructure, including hardware, software, and network
configurations.
Test Data Preparation:

Identify and create the test data needed to execute the test cases effectively.
Test Execution:

Run the test cases on the prepared test environment. This involves executing the test
scripts and manual testing, if applicable.
Defect Reporting:

Record and report any defects or issues found during the testing process. Provide
detailed information to help developers understand and fix the problems.
Defect Retesting:

After developers address reported defects, re-run the relevant test cases to ensure that
the issues have been resolved.
Regression Testing:

Conduct regression testing to ensure that new changes or fixes do not negatively
impact existing functionality.
Performance Testing (if applicable):

Verify the performance of the software, including aspects such as speed, scalability,
and responsiveness.
Security Testing (if applicable):

Check for vulnerabilities and ensure that the software is secure against potential
threats.
User Acceptance Testing (UAT):

Allow end-users or stakeholders to test the software in a real-world environment to
ensure it meets their expectations.
Test Closure:

Summarize the testing activities, evaluate the test process against the defined criteria,
and provide recommendations for future improvements.
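The test execution and defect reporting steps above can be sketched in code. The `Defect` fields and the `execute_case` helper below are hypothetical, shown only to illustrate how execution results feed the defect report:

```python
from dataclasses import dataclass

@dataclass
class Defect:
    # Fields are illustrative assumptions, not a mandated report format.
    summary: str
    steps_to_reproduce: list
    expected: str
    actual: str
    status: str = "open"

defect_log = []

def execute_case(case_id, run, expected):
    """Run one test case; log a defect if the actual result deviates."""
    actual = run()
    if actual != expected:
        defect_log.append(Defect(
            summary=f"{case_id}: expected {expected!r}, got {actual!r}",
            steps_to_reproduce=[f"execute {case_id}"],
            expected=str(expected),
            actual=str(actual),
        ))
        return "fail"
    return "pass"
```

After developers resolve a logged defect, the same case is re-run (defect retesting) and the log entry's status is updated.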

3.20.1. Types of Testing


Software testing is mainly divided into two parts, which are as follows:
 Manual Testing
 Automation Testing
What is Manual Testing?
 Testing any software or application according to the client's needs without using any
automation tool is known as manual testing.
 In other words, it is a procedure of verification and validation. Manual testing is
used to verify the behavior of an application or software against the requirements
specification.

Classification of Manual Testing


In software testing, manual testing can be further classified into three different types of
testing, which are as follows:
 White Box Testing
 Black Box Testing
 Grey Box Testing
White Box Testing
In white-box testing, the developer inspects every line of code before handing it
over to the testing team or the concerned test engineers.
 Because the code is visible to developers throughout testing, this process is known
as WBT (White Box Testing).
 In other words, the developer executes the complete white-box testing for the
particular software and then sends the application to the testing team.
 The purpose of white box testing is to examine the flow of inputs and outputs
through the software and enhance the security of the application.
 White box testing is also known as open box testing, glass box testing, structural
testing, clear box testing, and transparent box testing.

Black Box Testing


Another type of manual testing is black-box testing. In this testing, the test engineer
analyzes the software against the requirements, identifies defects or bugs, and sends
them back to the development team.

 Then, the developers fix those defects, do one round of white box testing,
and send the build back to the testing team.
 Here, fixing the bugs means the defect is resolved and the particular feature
works according to the given requirement.
 The main objective of black box testing is to validate the business needs
or the customer's requirements.
 In other words, black box testing is a process of checking the functionality
of an application as per the customer requirements. The source code is not
visible in this testing; that's why it is known as black-box testing.

Types of Black Box Testing


Black box testing further categorizes into two parts, which are as discussed below:

 Functional Testing
 Non-functional Testing
Functional Testing
 Functional testing is when the test engineer systematically checks all the
components against the requirement specifications. Functional testing is
also known as component testing.
 In functional testing, all the components are tested by giving input values, defining
the expected output, and validating the actual output against the expected value.
 Functional testing is a part of black-box testing, as it emphasizes the application
requirements rather than the actual code. The test engineer has to test only the
program's behavior, not the system's internals.

Types of Functional Testing


The types of functional testing include the following:

 Unit Testing
 Integration Testing
 System Testing
1. Unit Testing
Unit testing is the first level of functional testing. In unit testing, the test
engineer tests each module of an application independently, verifying its
functionality in isolation.

The primary objective of unit testing is to confirm that each unit component performs
as expected. Here, a unit is defined as a single testable function of a software
application, and it is verified throughout the specified application
development phase.
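A minimal unit test might look like the following sketch; `apply_discount` is a hypothetical unit, chosen only to show a single function being verified in isolation:

```python
def apply_discount(price, percent):
    """Unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Each unit test exercises the function independently of the rest
# of the application: typical value, boundary value, invalid input.
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(80.0, 0) == 80.0
try:
    apply_discount(50.0, 150)
except ValueError:
    pass  # invalid input is rejected, as the unit's contract requires
```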

2. Integration Testing
Once unit testing has been completed successfully, we move on to integration testing.
It is the second level of functional testing, in which we test the data flow between
dependent modules, or the interface between two features.

The purpose of integration testing is to verify the accuracy of the data exchanged
between the modules.
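A sketch of an integration test between two hypothetical modules (the `format_user` and `UserStore` names are assumptions), checking that data produced by one flows correctly into the other:

```python
# Module A: formats a raw user record.
def format_user(name, email):
    return {"name": name.strip().title(), "email": email.strip().lower()}

# Module B: persists user records keyed by email.
class UserStore:
    def __init__(self):
        self._users = {}

    def save(self, user):
        self._users[user["email"]] = user

    def get(self, email):
        return self._users.get(email)

# Integration test: the output of format_user must be retrievable
# from UserStore by the normalized email, i.e. the data flow between
# the two modules works end to end.
store = UserStore()
store.save(format_user("  alice ", " Alice@Example.COM "))
assert store.get("alice@example.com") == {
    "name": "Alice",
    "email": "alice@example.com",
}
```

Note that each module may pass its own unit tests and yet the pair can still fail here, which is exactly what integration testing is meant to catch.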

Types of Integration Testing


Integration testing is also further divided into the following parts:

 Incremental Testing
 Non-Incremental Testing
Incremental Integration Testing
Whenever there is a clear relationship between modules, we go for incremental
integration testing. Suppose we take two modules and analyze whether the data flow
between them works correctly.

If these modules work fine, we add one more module and test again, and we continue
the same process for better results.
In other words, incrementally adding modules and testing the data flow between them
is known as incremental integration testing.
Types of Incremental Integration Testing

Incremental integration testing can further classify into two parts, which are as
follows:

 Top-down Incremental Integration Testing


 Bottom-up Incremental Integration Testing
a. Top-down Incremental Integration Testing

In this approach, we will add the modules step by step or incrementally and test the
data flow between them. We have to ensure that the modules we are adding are the
child of the earlier ones.

b. Bottom-up Incremental Integration Testing

In the bottom-up approach, we will add the modules incrementally and check the data
flow between modules. And also, ensure that the module we are adding is the parent
of the earlier ones.

Non-Incremental Integration Testing/ Big Bang Method


Whenever the data flow is complex and very difficult to classify a parent and a child,
we will go for the non-incremental integration approach. The non-incremental method
is also known as the Big Bang method.

3. System Testing
 Once unit and integration testing are done, we can proceed with system
testing.
 In system testing, the test environment mirrors the production environment.
It is also known as end-to-end testing.
 In this type of testing, we go through each attribute of the software, check that
every end feature works according to the business requirements, and analyze the
software product as a complete system.

Non-functional Testing
 The next part of black-box testing is non-functional testing. It provides
detailed information on a software product's performance and the technologies used.

 Non-functional testing helps minimize production risk and the related
costs of the software.

 Non-functional testing is a combination of performance, load, stress, usability,

Types of Non-functional Testing


Non-functional testing is categorized into different types of testing, which we are
going to discuss further:

 Performance Testing
 Usability Testing
 Compatibility Testing
1. Performance Testing
 In performance testing, the test engineer will test the working of an application by
applying some load.
 In this type of non-functional testing, the test engineer will only focus on several
aspects, such as Response time, Load, scalability, and Stability of the software or
an application.

Classification of Performance Testing

Performance testing includes the various types of testing, which are as follows:

 Load Testing
 Stress Testing
 Scalability Testing
 Stability Testing
Load Testing
 While executing the performance testing, we will apply some load on the
particular application to check the application's performance, known as load
testing. Here, the load could be less than or equal to the desired load.

 It will help us to detect the highest operating volume of the software and
bottlenecks.

Stress Testing
 It is used to analyze the user-friendliness and robustness of the software
beyond the common functional limits.

 Primarily, stress testing is used for critical software, but it can also be used for
all types of software applications.

Scalability Testing
 Analyzing the application's performance by increasing or reducing the load
in particular proportions is known as scalability testing.

 In scalability testing, we can also check the ability of the system, processes, or
database to meet growing demand. For this, the test cases are designed and
implemented efficiently.

Stability Testing
 Stability testing is a procedure where we evaluate the application's
performance by applying the load for a precise time.

 It mainly checks the constancy problems of the application and the efficiency
of a developed product. In this type of testing, we can rapidly find the system's
defect even in a stressful situation.

2. Usability Testing
 Another type of non-functional testing is usability testing. In usability testing,
we analyze the user-friendliness of an application and detect bugs in
the software's end-user interface.

 Here, the term user-friendliness covers the following aspects of an application:

 The application should be easy to understand, which means all the
features must be visible to end-users.
 The application's look and feel should be good, meaning the application
should be pleasant to look at and inviting for the end-user to use.
3. Compatibility Testing
 In compatibility testing, we check the functionality of an application in
specific hardware and software environments. Only once the application is
functionally stable do we go for compatibility testing.

 Here, software means we can test the application on different operating
systems and browsers, and hardware means we can test the application on devices
of different configurations and screen sizes.
Grey Box Testing
 Another part of manual testing is grey box testing. It is a combination of
black box and white box testing.

 Grey box testing includes access to internal code for designing test cases.
It is performed by a person who knows both coding and testing.

Automation Testing
 The most significant part of software testing is automation testing. It uses
specific tools to automate manually designed test cases without any human
intervention.

 Automation testing is the best way to enhance the efficiency, productivity, and
coverage of software testing.

 It is used to re-run, quickly and repeatedly, the test scenarios that were
previously executed manually.
In other words, whenever we test an application by using tools, it is known as
automation testing.

We go for automation testing when multiple releases or several regression cycles
occur on the application or software. We cannot write test scripts or perform
automation testing without understanding a programming language.

Some other types of Software Testing


In software testing, there are also some other types of testing that are not part of
the categories discussed above but are required while testing any software or
application.

 Smoke Testing
 Sanity Testing
 Regression Testing
 User Acceptance Testing
 Exploratory Testing
 Adhoc Testing
 Security Testing
 Globalization Testing
Let's understand those types of testing one by one:

Smoke testing
In smoke testing, we test an application's basic and critical features before doing
one round of deep and rigorous testing, that is, before checking all possible
positive and negative values. Analyzing the workflow of the application's core and
main functions is the main objective of smoke testing.

Sanity Testing
Sanity testing is used to ensure that all the bugs have been fixed and that no new
issues have been introduced by these changes. Sanity testing is unscripted, which
means it is not documented. It checks the correctness of the newly added features
and components.

Regression Testing
Regression testing is one of the most commonly used types of software testing. Here,
the term regression implies that we re-test the previously working parts of an
application to confirm they remain unaffected.

Regression testing is the most suitable candidate for automation tools. Depending on
the project type and the availability of resources, regression testing can be
similar to retesting.

Whenever a bug is fixed by the developers, testing the other features of the
application that might be affected by the bug fix is known as regression testing.

In other words, whenever there is a new release of a project, we perform regression
testing, because a new feature may affect the old features of the earlier releases.

User Acceptance Testing


User acceptance testing (UAT) is done by an individual team of domain experts,
customers, or clients. Evaluating the application before accepting the final
product is called user acceptance testing.

In user acceptance testing, we analyze the business scenarios and real-time scenarios
in a distinct environment called the UAT environment. In this testing, the
application is tested for customer approval before release.

Exploratory Testing
We go for exploratory testing whenever requirements are missing, early iterations
are needed, the application is critical, the testing team has experienced testers,
or a new test engineer has just joined the team.

To execute exploratory testing, we first go through the application in all
possible ways, prepare a test document, understand the flow of the application, and
then test the application.

Adhoc Testing
Testing the application randomly, as soon as the build is ready, without following
any fixed sequence is known as adhoc testing.

It is also called monkey testing or gorilla testing. In adhoc testing, we check
the application against the client's requirements; that's why it is also
known as negative testing.

A casual end-user may detect a bug that even a specialized test engineer, who uses
the software thoroughly and systematically, might not identify.
Security Testing

Security testing is an essential part of software testing, used to determine the
weaknesses, risks, or threats in a software application.

Security testing helps us avoid malicious attacks from outsiders and ensures the
security of our software applications.

In other words, security testing is mainly used to verify that data stays safe
throughout the software's working process.

Globalization Testing
Another type of software testing is globalization testing. Globalization testing is
used to check whether the developed software supports multiple languages. Here, the
word globalization means adapting the application or software for various languages.

Globalization testing is used to make sure that the application supports multiple
languages and multiple features.

In present scenarios, we can see the enhancement in several technologies, as
applications are prepared to be used globally.

3.20.2. What are Software Testing Procedures?


The various testing practices, processes, and techniques used by testers to ensure that a
software application is tested and validated before its release or deployment are known as
software testing procedures. A procedure is a combination of several test cases grouped for a
certain logical reason. Procedures are complete, self-contained, and self-validating, and can
be executed automatically with the assistance of automated tools. Moreover, software testing
procedures are deliverables of the software development process and are used for both the
initial evaluation and the subsequent regression testing of the target program module or its
modifications. Hence, software testing procedures must be defined, planned, constructed,
tested, and reported regularly to achieve the desired results.

3.20.3. Test Procedure/Script Specification


As reporting and documenting the software testing procedure is important for the success
of the software testing and development process, the team publishes a test procedure/script
specification report: a document providing detailed instructions for the execution
of one or more test cases. This report defines the purpose of the various software testing
techniques used by testers and makes them understandable to the client and other
stakeholders of the project. The IEEE Standard for Software Test Documentation (829-
1998) defines the format for this report, which is rigorously followed by
testers all over the globe.
3.20.4. Format for Test Procedure / Script Specification:

With its aim to define and specify the steps used for executing test cases as well as the
measures taken to analyze the software item in order to evaluate its set of features, the test
procedure / script specification document is prepared by software testers after the
accomplishment of the software testing phase. The template/format of this document is
universally acknowledged and accepted as it is defined by the IEEE standard 829-1998.
Therefore, the format for test procedure / script specification is:

 Identifier: To avoid confusion, each test procedure/script specification
document is given a unique company-generated number, which helps identify the
procedure specification level as well as the software it relates to.

 Purpose: Once a distinctive identification is generated for the document, the purpose
of the test procedure is defined. It consists of a detailed list of all the test cases
covered during the testing procedure as well as a description of each test procedure.

 Special Requirements: Any special requirements and specifications mentioned by the
client or stakeholders of the project before the commencement of the testing process
are recorded here with proper evidence and documentation. Hence, the details
included here are:

o Type of testing: Manual or automated.

o The test environment.

o Any prerequisites of the test procedure.

o The stages where the test is to be executed, such as pre-testing, regression


testing, future compliance testing, etc.

o It also includes details about the special skills and training required by the
team for the test procedure.

 Procedures/Script Steps: Finally, the actual steps used for executing the
procedure, as well as the activities related to it, are defined by the team at the
end of the specification document. The procedure/script steps defined by IEEE are:
o Log.

o Set Up.

o Proceed.

o Measure.

o Shut down.

o Restart.

o Stop.

o Wrap-up.
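As a rough sketch of how these IEEE step names might map onto an executable script, consider the example below; the mapping, the `record` helper, and the function names are illustrative assumptions:

```python
import time

log = []

def record(step, detail=""):
    # Log: record execution details as the procedure runs.
    log.append((step, detail))

def run_procedure(test_action):
    record("set up")                   # Set Up: prepare preconditions
    record("proceed")                  # Proceed: execute the scripted actions
    start = time.perf_counter()
    result = test_action()
    elapsed = time.perf_counter() - start
    record("measure", f"{elapsed:.4f}s")  # Measure: capture measurements
    record("shut down")                # Shut Down: release the test environment
    record("wrap up", str(result))     # Wrap-up: summarise the run
    return result
```

The Restart and Stop steps would wrap `test_action` in retry and abort logic, omitted here for brevity.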

3.21. TEST CASE ORGANIZATION AND TRACKING


Test case management is a systematic approach to organising, documenting, and
tracking test cases and related activities during software testing. It involves planning,
designing, executing, and tracking results to ensure a software product’s quality and
functionality.

Test case management is a framework that helps QA professionals, especially testers,
effectively manage testing. Test management platforms offer a centralised repository to store
and manage test cases, suites, data, and results.

Test case management tools offer many features to streamline the process. These
tools enable testers to create and organise test cases, assign priorities and dependencies,
track progress, and generate reports while facilitating team collaboration.
3.21.1. Components of a test case

When creating test cases, you should include several essential components to ensure their
effectiveness and comprehensiveness:

1. Test case ID: A unique identifier or number for easy tracking and reference.

2. Test case title: A concise and descriptive title summarising the test case objective
or goal.
3. Test description: A detailed description of the tested functionality or features,
outlining the inputs, actions, and expected outcomes.

4. Preconditions: All necessary requirements or conditions that must be met before
test case execution, including data setup, system configurations, or specific
application states.

5. Test steps: Clear instructions on executing the test case, including the specific
actions and the expected results at each step.

6. Test data: The input data you must use during the test case execution, including
valid and invalid data sets to validate different scenarios.

7. Expected results: The anticipated outcomes of the test case that need to be
specific, measurable, and aligned with the test objective.

8. Actual results: The actual outcome of the test case execution, including any
deviations or discrepancies from the expected results.

9. Pass/Fail status: A clear indication of whether the test case passed or failed.

10. Test environment: The specific environment, including hardware, software, OS,
or browsers.

11. Test Priority/Severity: The priority of the test case based on its impact on the
system, and severity refers to the degree of impact a bug would have on the
system’s functionality.

12. Test case author: The name of the person responsible for creating the case.
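The components above could be captured in a simple structure like the following sketch; the field names and sample values are illustrative, not a mandated schema:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # Fields mirror the components listed above.
    case_id: str
    title: str
    description: str
    preconditions: list
    steps: list
    test_data: dict
    expected_result: str
    priority: str = "medium"
    author: str = ""
    actual_result: str = ""
    status: str = "not run"  # becomes "pass" or "fail" after execution

tc = TestCase(
    case_id="TC-001",
    title="Add item to cart",
    description="Verify a product can be added to the shopping cart",
    preconditions=["user is logged in", "product is in stock"],
    steps=["open product page", "click 'Add to cart'", "open cart"],
    test_data={"product_id": "SKU-42", "quantity": 1},
    expected_result="cart contains 1 unit of SKU-42",
    priority="high",
    author="QA team",
)
```

Keeping test cases in a uniform structure like this is what lets a management tool track, filter, and report on them.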
3.16.2 How to write effective test cases?

Writing effective test cases ensures thorough testing coverage and efficient software QA.
Here are some key guidelines for writing test cases:
 Understand the project requirements, user stories, and functional specifications
to define the test cases’ scope and objectives.
 Keep test cases clear and concise by using simple language and avoiding
ambiguity. Ensure everyone can understand the steps and expected results without
confusion.
 Use a standardised template for test case documentation to maintain consistency
and make the test cases easier to read, understand, and hand over.
 Test one functionality per test case and avoid combining multiple test scenarios
into one case, which may lead to confusion. Test scenarios are a better option to
validate an entire flow.
 Define preconditions and test data for the test case, such as specific system
configurations or data setups.
 Outline test steps sequentially and provide step-by-step instructions on executing
the test case from its initial state; keep the steps concise, specific, and easy
to follow.
 Include expected results for each step to validate the system’s behaviour and easy
comparison with actual results.
 Cover positive and negative scenarios to identify potential defects and ensure
comprehensive test coverage.
 Prioritise test cases based on their criticality and impact on the system to manage
testing efforts and address high-priority test cases first.
 Let peers review and validate the cases for accuracy, clarity, and coverage and
compare them against the requirements to ensure they align with the expected
behaviour.
 Keep test cases maintainable and easy to update as the system evolves, avoiding
hard-coding values or dependencies irrelevant to the test.
 Regularly update test cases as requirements change, defects are identified and
fixed, or new features are added.

Example: a sample case of testing the login functionality of a web application
like Facebook.
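A minimal sketch of such a login test, using a stand-in `login` function and hypothetical credentials rather than a real web application:

```python
# Stand-in for the application's authentication backend (assumption,
# used only to make the test executable).
REGISTERED = {"alice@example.com": "s3cret"}

def login(email, password):
    if email not in REGISTERED:
        return "unknown user"
    if REGISTERED[email] != password:
        return "wrong password"
    return "logged in"

# One behaviour per check, covering positive and negative scenarios,
# as the guidelines above recommend.
assert login("alice@example.com", "s3cret") == "logged in"      # valid credentials
assert login("alice@example.com", "wrong") == "wrong password"  # invalid password
assert login("bob@example.com", "s3cret") == "unknown user"     # unregistered user
```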
3.21.2. Different types of test cases
The different types of test cases are listed below:

 User Interface Test Cases: These are compiled to ensure the application’s aesthetics
appear as planned.
 Functionality Test Cases: This ensures the application’s expected functionalities work
correctly.
 Performance Test Cases: These cases help verify the response time and efficiency.
 Security Test Cases: These are compiled to secure the application data and restrict the
application’s use to specific users.
 Integration Test Cases: These help ensure that interfaces between multiple modules of an
application are working as expected.
 Usability Test Cases: These cases help enhance the application’s user experience.
 Database Test Cases: These help ensure the application can collect, store, process and
handle data appropriately.
3.21.3. What is the role of test case management?

Test case management is crucial in software testing and quality assurance. Here are some
key roles and benefits of effective test case management:

 Effective, centralised management: The test case management system consists of
a centralised repository to manage test cases, allowing you to access and track
them easily.
 Traceability: Test case management ensures comprehensive test coverage by
systematically documenting and managing test cases. This way, test case
tracking and the mapping between test cases and requirements to test all
functionality become easier. Testing efforts are aligned with the project’s objective
and meet scope, which can also be easily demonstrated during regulatory audits.
 Planning and prioritisation: Test case management tools enable you to prioritise
test cases based on business priorities, risk assessments, and resource constraints,
focusing on critical functionalities and high-risk areas.
 Test execution and tracking: Test case management enables better execution and
tracking by providing a platform to record test results, track progress, and monitor
the statuses.
 Collaboration and communication: Test case management tools promote better
collaboration and communication in the team and create an environment for shared
information, feedback, and discussions on test case-related issues. Questions and
concerns do not get lost in chats.
 Reporting and metrics: Using management tools allows you to generate relevant
performance indicators like test execution reports or defect metrics, enabling you
to make data-driven decisions and track the effectiveness of your efforts.
 Test maintenance and reusability: Test case management facilitates test
maintenance and reusability, as the cases can be easily updated, modified, or
reused as the software evolves. This saves you a lot of time and resources.
3.21.4. Test case management methodologies
There are two commonly used test case management approaches:

1. Spreadsheet-based test case management is a common method of using
spreadsheets or similar tools to manage test cases. Testers typically use
document-based templates or spreadsheets to organise and maintain test cases,
creating test case templates that outline the steps and expected results. This
approach offers flexibility and ease of use but lacks advanced features
like test execution tracking or seamless collaboration, and most of the time it
does not meet modern regulatory requirements either.

2. Test case management tools provide dedicated platforms or software solutions to
streamline your efforts. These tools allow automated test case management and
offer features such as test case creation, organisation, version control, execution
tracking, defect management, and reporting. Test case management
software provides a centralised repository for test cases, facilitates collaboration
among team members, and offers robust reporting capabilities.
You can also use such solutions for first-party test automation or seamless
integration with third-party solutions. Examples of popular automation
management tools include aqua, Zephyr, and qTest.

There are different approaches to test case management, including the following:
 Agile test case management includes methodologies like Scrum or Kanban,
focusing on iterative development and frequent software releases. It involves
creating and managing test cases that align with user stories or features defined in
the product backlog.
In this methodology, you continuously refine and update test cases based on
evolving requirements and integrate test execution within the sprint or iteration
cycles. This approach emphasises flexibility, adaptability, and collaboration
between developers, testers, and stakeholders.

 Behaviour-driven development (BDD) is a collaborative approach that aligns
business stakeholders, developers, and testers in defining and validating
software behaviour. You write test cases in an extra human-friendly
format using a domain-specific language (DSL) like Gherkin. These test cases,
known as “feature files”, outline desired behaviours and acceptance criteria.

 Continuous Integration/Continuous Delivery (CI/CD) test case
management ensures a robust and efficient testing process. Test cases are
integrated into the CI/CD pipeline, and automated tests are executed as part of the
build and deployment processes. The required effort includes maintaining a suite
of automated tests, monitoring test execution results, and incorporating test
failures or issues into the CI/CD feedback loop.

 Waterfall test case management: In a Waterfall approach, test case management
follows a sequential process where you perform testing at the end of each
development phase. Test cases are designed and executed based on the defined
requirements and specifications. The emphasis is on comprehensive testing before
moving to the next development phase.
3.21.5. Why is Test Case Management Important
Test Case Management helps in the following ways:

 It gives a clear idea of the testing activities to a testing team. The team will know
what tests to execute and what to expect if the test succeeds or fails.
 It helps to keep track of test cases and group them into categories like resolved,
deferred, ongoing, etc.
 It helps manage automated and manual testing more efficiently.
 It helps manage a range of test executions for various test cases.
 It improves the collaboration efforts between project engineers even if they belong
to different teams.
3.21.6. Levels of testing and test case management
There are mainly four Levels of Testing in software testing :

Unit testing:
 A unit is the smallest testable portion of a system or application that can be
compiled, linked, loaded, and executed. This kind of testing helps test each module
separately.
 The aim is to test each part of the software in isolation and check whether each
component fulfils its functionality. This kind of testing is performed by
developers.
Integration testing:
 Integration means combining. In this testing phase, different software modules are
combined and tested as a group to make sure that the integrated system is ready for
system testing.
 Integration testing checks the data flow from one module to another. This kind of
testing is performed by testers.
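The data flow between modules can be checked with a small integration test. Here two hypothetical "modules" (plain functions for brevity) are exercised together, so the test fails if the output of one no longer fits the input of the other:

```python
def parse_order(raw):
    """Module 1 (parsing): turn "item, qty" text into an order record."""
    item, qty = raw.split(",")
    return {"item": item.strip(), "qty": int(qty)}

def price_order(order, price_list):
    """Module 2 (pricing): total cost of an order from a price list."""
    return order["qty"] * price_list[order["item"]]

def test_parse_then_price():
    prices = {"pen": 2}
    order = parse_order("pen, 3")            # output of module 1 ...
    assert price_order(order, prices) == 6   # ... flows into module 2
```

A unit test would cover each function alone; the integration test above covers the hand-off between them.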
System Testing
 System testing is performed on a complete, integrated system. It checks the
system’s compliance with the specified requirements and tests the overall interaction
of components. It involves load, performance, reliability and security testing.
 System testing is most often the final test to verify that the system meets the
specification. It evaluates both functional and non-functional requirements.
Acceptance testing:
 Acceptance testing is conducted to determine whether the requirements of a
specification or contract are met at delivery. Acceptance testing is usually done by the
user or customer; however, other stakeholders can be involved in this process.

3.21.7. Challenges in test case management


Here are some of the most common challenges you might face in test case management:

 Test case maintenance: Keeping test cases updated and synchronised can be time-
consuming and demands continuous effort. To overcome this, you need to
implement a robust change management process to ensure timely updates to test
cases when requirements, functionality, or user scenarios are changed.

 Test case reusability: Identifying and organising reusable test cases across
projects or releases can be challenging. Keeping test cases easily adaptable and
relevant to various contexts without causing false positives or negatives can be a
struggle. To handle this, you need to establish a centralised repository or test case
management tool that allows easy categorisation and tagging of test cases based on
their reusability. Clearly documenting the context and prerequisites for each test
case ensures adaptability while avoiding false positives or negatives.
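A tag-based repository of the kind described above can be sketched very simply. The case IDs and tags here are invented for illustration:

```python
# Illustrative centralised test case repository: each case carries tags so
# reusable cases can be pulled into a new project by context.
repo = [
    {"id": "TC-10", "tags": {"login", "smoke"}},
    {"id": "TC-11", "tags": {"checkout", "regression"}},
    {"id": "TC-12", "tags": {"login", "regression"}},
]

def select(repo, wanted):
    """Return IDs of cases whose tags include every wanted tag."""
    return [c["id"] for c in repo if wanted <= c["tags"]]
```

For example, `select(repo, {"login"})` pulls every login-related case, while `select(repo, {"login", "regression"})` narrows to reusable regression cases for that area.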

 Test case traceability: Achieving and maintaining near-100% test coverage
requires good traceability, so you can see which requirements have sufficient
tests linked to them. However, this can be complex in large-scale projects with
changing requirements and multiple stakeholders, requiring careful management
and coordination. If not available in your test management software, you should
develop a traceability solution that links test cases to the corresponding
requirements and regularly review the results to ensure coverage and track any
changes in requirements.
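At its simplest, such a traceability solution is a requirement-to-test-case matrix. The sketch below (with invented IDs) flags requirements that have no linked tests and computes a coverage ratio:

```python
# Hedged sketch of a traceability matrix: requirement ID -> linked test cases.
trace = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],            # no test coverage yet
}

uncovered = [req for req, tcs in trace.items() if not tcs]
coverage = 1 - len(uncovered) / len(trace)   # fraction of requirements with tests
```

Real test management tools store the same mapping in a database, but reviewing it regularly serves the same purpose: spotting gaps like `REQ-003` before release.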

 Test case version control: Managing different test case versions, especially in
collaborative environments, poses challenges. Maintaining a clear version history,
ensuring the latest versions are used, and avoiding conflicts can be demanding
without proper version control mechanisms, which spreadsheet-style management
approaches lack. One solution is to use the version control mechanisms provided
by test case management tools. You should maintain a clear version history, ensure
the latest versions are used, and establish guidelines for resolving conflicts in case
of overlapping modifications.

 Test case prioritisation: With limited time and resources, prioritising test cases
becomes vital. Determining the priority of test cases based on risk assessment,
business impact, or critical functionalities can be subjective and challenging,
requiring careful analysis and decision-making. You must conduct a risk
assessment to identify critical functionalities and high-risk areas to deal with them.
Consider the impact on business goals and prioritise test cases accordingly.
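The risk-based prioritisation described above is often computed as failure likelihood times business impact. The sketch below assumes both are scored on a 1-5 scale, which is an illustrative convention rather than a standard:

```python
# Illustrative risk-based test case prioritisation: score = likelihood x impact,
# then execute highest-risk cases first.
cases = [
    {"id": "TC-01", "likelihood": 2, "impact": 5},
    {"id": "TC-02", "likelihood": 4, "impact": 4},
    {"id": "TC-03", "likelihood": 1, "impact": 2},
]
for c in cases:
    c["risk"] = c["likelihood"] * c["impact"]

ordered = sorted(cases, key=lambda c: c["risk"], reverse=True)
```

With limited time, the team runs cases in `ordered` sequence and cuts from the tail, so the least risky cases are the ones deferred.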
3.22. BUG REPORTING

A bug report is a document that communicates information regarding a software
or hardware fault or malfunction. It typically includes details such as the steps
necessary to reproduce the issue, the expected behavior, and the observed behavior. The
primary purpose of a bug report is to provide an accurate description of the problem to
the development team to facilitate its resolution.

Bug reports must be clear, concise, and correct to assist developers in understanding
and quickly resolving the issue. All bugs must be documented in a bug-reporting system so
that they can be identified, prioritized, and fixed promptly. Failure to do so may lead to the
developer not understanding or disregarding the issue, or to management not recognizing its
severity and leaving it in production until customers make them aware of it.
Reporting bugs is a fundamental process involving the documentation and
communication of software defects, commonly known as “bugs,” to relevant developers
responsible for correcting them. These bugs are essentially unintended errors or flaws in a
software system that can result in malfunctions, unexpected behaviours, or disruptions to its
normal operation. This practice holds immense significance within the realms of software
development and quality assurance. Whenever users or testers come across bugs while
utilizing a software application, they initiate the creation of bug reports.

 Understanding the Basics of Bug Reporting: A bug report is created by a tester or
user, and an ideal software bug report generally contains the following information:
 Description: A detailed description of the problem, explaining what the bug is and
how it affects the application’s functionality.
 Reproduction Steps: Clear, step-by-step instructions on how to reproduce the bug or
the specific actions that trigger the issue.
 Expected Behaviour: Explanation of what the user expected to happen when
performing those actions.
 Actual Behaviour: Description of what actually happened, including any error
messages, unexpected outputs, or crashes.
 Environment: Information about the software version, operating system, hardware,
and any other relevant configurations.
 Severity: An assessment of the bug’s impact on the application’s functionality,
ranging from minor issues to critical defects.
 Priority: The bug’s importance in terms of fixing it compared to other reported
issues.
 Reported by: The bug report usually contains the reporter’s name or email address.
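The fields above can be captured in a simple record. The dataclass below is an illustrative structure, not the schema of any particular bug tracker, and all field values are invented:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BugReport:
    description: str
    reproduction_steps: List[str]
    expected_behaviour: str
    actual_behaviour: str
    environment: str
    severity: str          # e.g. "minor" .. "critical"
    priority: str          # e.g. "low" .. "high"
    reported_by: str

bug = BugReport(
    description="Login button unresponsive after logout",
    reproduction_steps=["Log in", "Log out", "Click 'Login' again"],
    expected_behaviour="Login form is shown",
    actual_behaviour="Nothing happens; console shows an error",
    environment="App v2.3.1, Chrome 126, Windows 11",
    severity="major",
    priority="high",
    reported_by="tester@example.com",
)
```

Structuring reports this way makes every field mandatory, so incomplete reports (a common problem discussed later in this section) are caught at entry time.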
3.22.1. Benefits of a Good Software Bug Report
A good bug report should provide clear and detailed information about the issue,
enabling the development team to understand and reproduce it. It should include
details such as an accurate description of the problem, steps taken to reproduce it,
expected results, actual results, screenshots or video recordings, if applicable, device
configuration, and other relevant data. Such information allows for a more efficient
resolution of the issue.
 It can help you figure out precisely what’s wrong with a bug, so you can find the
best way to fix it.
 Saves you time and money by helping you catch the bug before it worsens.
 Stops bugs from making it into the final product and ruining someone’s
experience.
 Plus, it helps ensure the same bug doesn’t appear again in future versions.
 Finally, everyone involved will know what’s happening with the bug so they can
do something about it.
3.22.2. How to Report a Bug?
Effectively reporting a bug is essential for the development team to resolve the issue
promptly and accurately. A well-constructed bug report should be concise, comprehensive,
and comprehensible. The following steps can be taken to submit a bug report:

 Attempt to replicate the bug consistently and systematically.


 Gather data on the environment, such as the browser type, operating system, and
applicable software versions.
 Construct explicit instructions outlining how to reproduce the bug.
 Include screenshots or videos that may assist in illustrating the issue to developers.
 Articulate what outcome was anticipated and differentiate it from what occurred in
reality.
 Outline the severity and priority of the bug: Describe how the bug impacts the
software’s functionality and determine its level of urgency.
 Check for duplicates: Investigate the bug tracking system to ascertain if it has already
been reported.
 Assign the bug to a relevant developer or team and follow up
 Monitor progress on the bug to ensure it is being addressed and provide any extra
information that may be necessary.
3.22.3. How to Write a Bug Report?
A good bug report should enable the developer and management to comprehend the issue.
Guidelines to consider include:

1. Provide all the relevant information in the bug report:
Use simple sentences to describe the bug. Expert testers consider bug reporting
nothing less than a skill; the following tips will help testers master it.

2. Report reproducible bugs:

While reporting a bug, the tester must ensure that the bug is reproducible. The steps to
reproduce the bug must be mentioned. All the prerequisites for the execution of steps and any
test data details should be added to the bug.

3. Be concise and clear:

Try to summarize the issue in a few words, brief but comprehensive. Avoid writing lengthy
descriptions of the problem.

4. Report bugs early:

It is important to report bugs as soon as you find them. Reporting the bug early will help the
team to fix the bug early and will help to deliver the product early.

5. Avoid Spelling mistakes and language errors:

 Proofread all the sentences and check the issue description for spelling and
grammatical errors.
 If required, one can use third-party tools, e.g., Grammarly. This will help the
developer understand the bug without ambiguity or misrepresentation.
6. Documenting intermittent issues:

Not all bugs are reproducible. You may have observed that a mobile app sometimes
crashes and must be restarted to continue; such bugs cannot be reproduced every time.

Testers must try to make a video of the bug in such scenarios and attach it to the bug report. A
video is often more helpful than a screenshot because it will include details of steps that are
difficult to document.

7. Avoid duplication of bugs:

While raising a bug, one must ensure that the bug is not duplicating an already-reported bug.
Also, check the list of known and open issues before you start raising bugs. Reporting
duplicate bugs could cost duplicate efforts for developers, thus impacting the testing life
cycle.

8. Create separate bugs for unrelated issues:

If multiple issues are reported in the same bug, it can’t be closed unless all the issues are
resolved. So, separate bugs should be created if issues are not related to each other.

9. Don’t use an authoritative tone:

While documenting the bug, avoid using a commanding tone, harsh words, or making fun of
the developer.

3.22.4. Bug Report Checklist


3.22.5. Challenges faced during bug reporting

Bug reporting can be a complex and challenging process. Some of the common
challenges faced in bug reporting are:

 Incomplete or Inaccurate Information: When the user reports problems with
software but doesn’t give all the important details, it can be really hard for the people
who make the software to fix the issues. For example, if someone doesn’t explain
exactly how to make the problem happen, where it happened, and what they thought
should happen instead, it makes it tough for the software developers to figure out
what went wrong. This can make it take longer to fix the problem and might even lead
to the wrong solution.
 Reproducibility: Certain bugs can act unpredictably, which means they don’t happen
the same way every time. When people testing the software can’t make the bug
happen again and again in a consistent way, it becomes really tough for the folks who
build the software to figure out why it’s happening. It’s important to be able to make
the bug happen repeatedly because it helps the developers study the problem closely
and try out different ways to fix it. When a bug doesn’t show up reliably, it’s hard for
developers to find a pattern or understand what’s causing it, which makes it difficult
to solve the problem.
 Priority and Severity Misalignment: Discrepancies in understanding the severity
and priority of bugs can create confusion and inefficiencies in bug-fixing endeavours.
If testers and developers hold differing perspectives on how critical a bug is or when
it should be addressed, it can lead to misallocation of resources. High-priority bugs
might not receive the immediate attention they require, and conversely, low-priority
issues might be prioritized over more critical concerns. This misalignment can result
in delays and skewed bug-fixing priorities.
 Absence of Clear Bug Reporting Process: A standardized bug reporting process is
pivotal for efficient communication between testers and developers. When testers
submit bug reports using varying formats or omit crucial details, it becomes
challenging for developers to assess the severity, impact, and necessary steps to
reproduce the issue. A clear bug reporting process, encompassing well-defined
templates and guidelines, streamlines communication, accelerates issue resolution and
ensures that no essential information is overlooked or omitted.
 Lack of Bug Reporting Tools: Effective collaboration and bug reporting tools play a
pivotal role in expediting the identification, reporting, and resolution of bugs. In the
absence of such tools, communication channels between testers and developers might
be disjointed or inefficient. Robust bug reporting tools facilitate seamless
documentation, bug tracking, and updating of the reports. Without these tools, the
process can become cumbersome, leading to delays in addressing issues and reduced
overall efficiency in bug management and resolution.
Overcoming these challenges requires open communication, clear bug-reporting guidelines,
collaboration tools, and an understanding of the importance of accurate and comprehensive
bug reporting in the software development process.

3.22.6. BUG LIFE CYCLE

 Defect Life Cycle or Bug Life Cycle in software testing is the specific set of states
that a defect or bug goes through in its entire life. The purpose of the defect life cycle
is to easily coordinate and communicate the current status of a defect as it changes
between assignees, and to make the defect-fixing process systematic and efficient.
 Defect Status - Defect Status or Bug Status in the defect life cycle is the present
state that a defect or bug is currently in. The goal of the defect status is to precisely
convey the current state or progress of a defect or bug in order to better track and
understand the actual progress of the defect life cycle.

 Defect States Workflow - The number of states that a defect goes through varies
from project to project. The lifecycle diagram below covers all possible states.

 New: When a new defect is logged and posted for the first time. It is assigned a status
as NEW.
 Assigned: Once the bug is posted by the tester, the tester’s lead approves the bug
and assigns it to the developer team.
 Open: The developer starts analyzing and works on the defect fix
 Fixed: When a developer makes a necessary code change and verifies the change, he
or she can make bug status as “Fixed.”
 Pending retest: Once the defect is fixed, the developer hands the fixed code back to
the tester for retesting. Since the retest is still pending on the tester’s end, the status
assigned is “Pending retest.”
 Retest: The tester retests the code at this stage to check whether the developer has
fixed the defect, and changes the status to “Retest.”

 Verified: The tester re-tests the bug after it got fixed by the developer. If there is no
bug detected in the software, then the bug is fixed and the status assigned is
“verified.”
 Reopen: If the bug persists even after the developer has fixed the bug, the tester
changes the status to “reopened”. Once again the bug goes through the life cycle.
 Closed: If the bug no longer exists, the tester assigns the status “Closed.”
 Duplicate: If the defect is reported twice, or corresponds to an already-reported
bug, the status is changed to “Duplicate.”
 Rejected: If the developer feels the defect is not a genuine defect, he or she changes
the status to “Rejected.”
 Deferred: If the bug is not of high priority and is expected to be fixed in the next
release, the status “Deferred” is assigned to it.
 Not a bug: If it does not affect the functionality of the application then the status
assigned to a bug is “Not a bug”.
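The workflow above can be modelled as a small state machine that rejects illegal status changes. The transition table below is a simplified reading of the states in this section (real trackers allow project-specific variations):

```python
# Simplified defect life cycle as an explicit state machine.
TRANSITIONS = {
    "New":            {"Assigned", "Rejected", "Duplicate", "Deferred", "Not a bug"},
    "Assigned":       {"Open"},
    "Open":           {"Fixed", "Rejected", "Deferred", "Not a bug"},
    "Fixed":          {"Pending retest"},
    "Pending retest": {"Retest"},
    "Retest":         {"Verified", "Reopen"},
    "Reopen":         {"Assigned"},          # the bug goes through the cycle again
    "Verified":       {"Closed"},
    "Closed":         set(),
    "Rejected":       set(),
    "Duplicate":      set(),
    "Deferred":       {"Assigned"},          # picked up again in a later release
    "Not a bug":      set(),
}

def move(state, new_state):
    """Apply a status change, refusing transitions the workflow forbids."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Walk one happy path from New to Closed.
s = "New"
for nxt in ["Assigned", "Open", "Fixed", "Pending retest", "Retest",
            "Verified", "Closed"]:
    s = move(s, nxt)
```

Encoding the workflow this way makes the "systematic and efficient" goal concrete: a defect cannot jump from "New" straight to "Closed" without passing through fixing and verification.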
