UNIT II
TESTING TECHNIQUES
Black Box Testing
Black Box Testing is a Software testing method in which the internal working of the
application is not known to the tester.
The Black Box Testing mainly focuses on testing the functionality of software without any
knowledge of the internal logic of an application.
Black-box testing is a type of Software Testing in which the tester is not concerned with
the software's internal workings or implementation details but rather focuses on
validating the functionality based on the provided specifications or requirements.
1. Functional Testing
Functional Testing is a type of Software Testing in which the system is tested against the
functional requirements and specifications. Functional testing ensures that the requirements
or specifications are properly satisfied by the application.
This testing is not concerned with the source code of the application. Each functionality
of the software application is tested by providing appropriate test input, expecting the
output, and comparing the actual output with the expected output.
This testing focuses on checking the user interface, APIs, database, security, client or
server application, and functionality of the Application Under Test. Functional testing
can be manual or automated, and it verifies that the system satisfies its functional
requirements.
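As a minimal sketch of this pattern, the hypothetical example below (the discount() function and its 10% rule are assumed for illustration, not taken from any specification in this text) shows the core of functional testing: provide a test input, then compare the actual output with the expected output.

def discount(amount):
    # Hypothetical business rule: 10% off orders of 100 or more
    return amount * 0.9 if amount >= 100 else amount

def test_discount_applied():
    assert discount(200) == 180.0   # expected output per the assumed rule

def test_no_discount_below_threshold():
    assert discount(50) == 50       # behaviour unchanged below the threshold

test_discount_applied()
test_no_discount_below_threshold()
print("All functional tests passed")

Note that the tests exercise only inputs and outputs; nothing in them depends on how discount() is implemented internally.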
2. Regression Testing
Regression Testing is like a Software Quality checkup after any changes are made. It
involves running tests to make sure that everything still works as it should, even after
updates or tweaks to the code. This ensures that the software remains reliable and functions
properly, maintaining its integrity throughout its development lifecycle.
Regression means the return of something and in the software field, it refers to the
return of a bug. It ensures that the newly added code is compatible with the existing
code.
In other words, it confirms that a new software update has no adverse impact on the
existing functionality of the software. It is carried out after system maintenance
operations and upgrades.
3. Nonfunctional Testing
Non-functional Testing is a type of Software Testing that is performed to verify the non-
functional requirements of the application. It verifies whether the behavior of the system is
as per the requirement or not. It tests all the aspects that are not tested in functional testing.
It is designed to test the readiness of a system as per nonfunctional parameters which
are never addressed by functional testing.
It is as important as functional testing.
It is also known as NFT. This testing is not functional testing of software. It focuses on
the software’s performance, usability, and scalability.
Advantages of Black Box Testing
The tester does not need detailed functional knowledge or programming skills to
implement Black Box Testing.
It is efficient for testing larger systems.
Tests are executed from the user’s or client’s point of view.
Test cases are easily reproducible.
It is used to find the ambiguity and contradictions in the functional specifications.
Disadvantages of Black Box Testing
There is a possibility of repeating the same tests while implementing the testing
process.
Without clear functional specifications, test cases are difficult to implement.
It is difficult to execute the test cases because of complex inputs at different stages of
testing.
Sometimes, the reason for the test failure cannot be detected.
Some parts of the application may remain untested.
It does not reveal the errors in the control structure.
Working with a large sample space of inputs can be exhaustive and consumes a lot of
time.
Equivalence Class Testing:
Equivalence Partitioning Method is also known as Equivalence class partitioning (ECP).
It is a black-box software testing technique that divides the input domain into classes of
data from which test cases can be derived. An ideal test case identifies a class of errors
that might otherwise require many arbitrary test cases to be executed before the general
error is observed.
In equivalence partitioning, equivalence classes are evaluated for the given input conditions.
Whenever an input is given, its input condition is checked, and for that condition an
equivalence class represents a set of valid or invalid states.
Guidelines for Equivalence Partitioning :
If the range condition is given as an input, then one valid and two invalid equivalence
classes are defined.
If a specific value is given as input, then one valid and two invalid equivalence classes
are defined.
If a member of set is given as an input, then one valid and one invalid equivalence class
is defined.
If a Boolean value is given as an input condition, then one valid and one invalid
equivalence class are defined.
Example-1:
Let us consider an example of any college admission process. There is a college that gives
admissions to students based upon their percentage.
Consider a percentage field that accepts percentages only between 50% and 90%; anything
more or less is not accepted, and the application redirects the user to an error page. If the
percentage entered by the user is less than 50% or more than 90%, the equivalence
partitioning method treats it as an invalid percentage. If the percentage entered is between
50% and 90%, the method treats it as a valid percentage.
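The following sketch expresses this example as code. The is_valid_percentage() validator is hypothetical; the three equivalence classes (valid 50 to 90, invalid below 50, invalid above 90) come directly from the scenario, and one representative value is tested per class.

def is_valid_percentage(p):
    # Hypothetical validator for the admission form field
    return 50 <= p <= 90

assert is_valid_percentage(70) is True    # valid class: 50 to 90
assert is_valid_percentage(30) is False   # invalid class: below 50
assert is_valid_percentage(95) is False   # invalid class: above 90
print("One test per equivalence class passed")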
Decision Table Based Testing
A decision table is organized into four parts:
1. Condition Stubs : The conditions are listed in the first upper left part of the decision
table and are used to determine a particular action or set of actions.
2. Action Stubs : All the possible actions are given in the first lower left portion (i.e,
below condition stub) of the decision table.
3. Condition Entries : In the condition entry, the values are inputted in the upper right
portion of the decision table. In the condition entries part of the table, there are multiple
rows and columns which are known as Rule.
4. Action Entries : In the action entry, every entry has some associated action or set of
actions in the lower right portion of the decision table and these values are called
outputs.
Types of Decision Tables :
The decision tables are categorized into two types and these are given below:
1. Limited Entry : In the limited entry decision tables, the condition entries are restricted
to binary values.
2. Extended Entry : In the extended entry decision table, the condition entries have more
than two values. The decision tables use multiple conditions where a condition may
have many possibilities instead of only ‘true’ and ‘false’ are known as extended entry
decision tables.
Applicability of Decision Tables :
The order of rule evaluation has no effect on the resulting action.
The decision tables can be applied easily at the unit level only.
Once a rule is satisfied and the action selected, no other rule needs to be examined.
The restrictions do not eliminate many applications.
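To make the four parts concrete, here is a small sketch of a limited-entry decision table driven from code. The discount rules are invented for illustration; each tuple plays the role of one rule column, pairing condition entries with an action entry.

RULES = [
    # (loyal_customer, large_order) -> action
    ((True,  True),  "give 20% discount"),
    ((True,  False), "give 10% discount"),
    ((False, True),  "give 5% discount"),
    ((False, False), "no discount"),
]

def decide(loyal_customer, large_order):
    for condition_entries, action in RULES:
        if condition_entries == (loyal_customer, large_order):
            return action   # once a rule is satisfied, no other rule is examined

print(decide(True, False))   # -> give 10% discount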
Cause-Effect Graphing Constraints
1. Exclusive constraint or E-constraint: This constraint exists between causes. It states
that causes c1 and c2 cannot both be 1 simultaneously.
2. One and Only One constraint or O-constraint: This constraint exists between causes.
It states that one and only one of c1 and c2 must be 1.
3. Requires constraint or R-constraint: This constraint exists between causes. It states
that for c1 to be 1, c2 must be 1. It is impossible for c1 to be 1 and c2 to be 0.
4. Mask constraint or M-constraint: This constraint exists between effects. It states that
if effect e1 is 1, the effect e2 is forced to be 0.
Error Guessing
Software applications are a part of our daily life. Whether on a laptop, a mobile phone, or
any other digital device, our day starts and ends with the use of various software
applications. That is why software companies try their best to develop good-quality,
error-free software applications for users.
So when a company develops a software application, software testing plays a major role
in it. Testers not only test the product with a set of specified test cases, they also test the
software by going beyond the test documents. This is where error guessing comes in: it
is not specified in any testing instruction manual, yet it is still performed.
Error guessing is an informal testing technique where testers rely on
their experience, intuition, and domain knowledge to identify potential defects in
software applications that may not be captured by formal test cases or specifications. It
involves guessing where errors or bugs might exist based on experience with similar
systems or applications, common pitfalls, or understanding of user behavior and
expectations. This technique complements formal testing methods by uncovering issues that
may be overlooked in structured testing processes.
This is not a formal way of performing testing still it has importance as it sometimes solves
many unresolved issues also.
Determine Which Areas Are Ambiguous: Error guessing is a technique used by testers
to find unclear or poorly specified software areas. By making educated guesses, testers
can highlight areas of the application whose requirements need more explanation or
elaboration.
Experience-Based Testing: Testers use their expertise and understanding of common
software pitfalls to estimate possible flaws. This method works especially well for
complicated or inadequately documented systems, where experience is essential for
identifying flaws.
Testing Based on Risk: By enabling testers to concentrate on software components that
pose a high risk, it is consistent with a risk-based testing methodology. Based on their
assessment of regions where flaws have a higher chance of affecting the system, testers
prioritize their testing efforts.
Early Error Identification: Early defect detection during testing is made possible by
error guessing. By addressing faults at an early stage, testers can contribute to the
overall quality of the product by identifying potential issues before the execution of
formal test cases.
Where or how to use it?
Error guessing is a black box testing technique, and it is best used to supplement other
black box techniques. For instance, boundary value analysis and equivalence
partitioning cannot cover every condition in the application that is prone to error.
Time and resource constraints: Error guessing can be a fast and efficient technique to
find flaws in circumstances where there are not enough resources or time to plan a
thorough test.
Agile and Adaptive Workplaces: Error guessing fits in nicely with the incremental and
iterative nature of development in agile development settings, where adaptation and
flexibility are essential.
Unfamiliar or Complex Systems: Error guessing enables testers to leverage their general
testing experience to identify potential difficulties when working with complex systems
or unfamiliar technology.
Parts with Few Specifications: In software areas where specifications are unclear,
missing, or poorly defined, error guessing works well.
Areas at High Risk: Error guessing is a useful tool for testers to concentrate on high-risk
software regions. Through the application of their expertise on the system’s
vulnerabilities, testers are able to direct their testing efforts towards places where errors
are most likely to cause major problems.
Advantages of Error Guessing Technique
It is effective when used with other testing approaches.
It is helpful to solve some complex and problematic areas of application.
It figures out errors which may not be identified through other formal testing
techniques.
It helps in reducing testing times.
Disadvantages of Error Guessing Technique
Only capable and skilled testers can perform it.
It is dependent on the tester's experience and skills.
It cannot guarantee that the application meets the required quality standard.
It is not an efficient way of error detection relative to the effort involved.
Drawbacks of Error Guessing technique:
It gives no assurance that the software has reached the expected quality.
It never provides full coverage of an application.
White box Testing
White box testing techniques analyze the internal structures of the software: the data
structures used, the internal design, the code structure, and the working of the software,
rather than just the functionality as in black box testing.
It is also called glass box testing, clear box testing, or structural testing. White Box
Testing is also known as transparent testing or open box testing.
White box testing is a Software Testing Technique that involves testing the internal
structure and workings of a Software Application. The tester has access to the source code
and uses this knowledge to design test cases that can verify the correctness of the software
at the code level.
White box testing is also known as Structural Testing or Code-based Testing, and it is
used to test the software’s internal logic, flow, and structure. The tester creates test cases to
examine the code paths and logic flows to ensure they meet the specified requirements.
White box testing involves testing a software application with an extensive understanding
of its internal code and structure. This type of testing allows testers to create detailed test
cases based on the application's design and functionality.
Here are some key types of tests commonly used in white box testing:
Path Testing: White box testing checks all possible execution paths in the program to
make sure each function behaves as expected. It helps verify that all logical conditions
in the code function correctly and efficiently, avoiding unnecessary steps and promoting
better code reusability.
Input and Output Validation: By providing different inputs to a function, white box
testing checks that the function gives the correct output each time. This helps confirm
that the software consistently produces the required results under various conditions.
Security Testing: This focuses on finding security issues in the code. Tools like static
code analysis are used to check the code for potential security flaws and to verify that
the application follows best practices for secure development.
Loop Testing: This checks that loops (for or while loops) in the program operate
correctly and efficiently. It verifies that each loop handles its variables correctly and
doesn't cause errors like infinite loops or logic flaws.
Data Flow Testing: This involves tracking the flow of variables through the program. It
ensures that variables are properly declared, initialized, and used in the right places,
preventing errors related to incorrect data handling.
Types Of White Box Testing
White box testing can be done for different purposes at different places. There are three
main types of White Box testing, as follows:
Unit Testing: Unit Testing checks if each part or function of the application works
correctly. It checks that the application meets design requirements during development.
Integration Testing: Examines how different parts of the application work together. It is
performed after unit testing to make sure components work well both alone and
together.
Regression Testing: Verifies that changes or updates don't break existing functionality
of the code. It checks that the application still passes all existing tests after updates.
White Box Testing Techniques
One of the main benefits of white box testing is that it allows for testing every part of an
application. To achieve complete code coverage, white box testing uses the following
techniques:
1. Statement Coverage: In this technique, the aim is to traverse all statements at least
once. Hence, each line of code is tested. In the case of a flowchart, every node must be
traversed at least once. Since all lines of code are covered, it helps in pointing out faulty
code.
2. Branch Coverage: Branch coverage focuses on testing the decision points or conditional
branches in the code. It checks whether both possible outcomes (true and false) of each
conditional statement are tested. In this technique, test cases are designed so that each
branch from all decision points is traversed at least once. In a flowchart, all edges must be
traversed at least once.
3. Condition Coverage: In this technique, all individual conditions must be covered as shown
in the following example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1 – X = 0, Y = 55
#TC2 – X = 5, Y = 0
4. Multiple Condition Coverage: In this technique, all the possible combinations of the
possible outcomes of conditions are tested at least once. Let’s consider the following
example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1: X = 0, Y = 0
#TC2: X = 0, Y = 5
#TC3: X = 55, Y = 0
#TC4: X = 55, Y = 5
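As a sketch, the pseudocode above can be written in Python and exercised with the four multiple-condition-coverage test cases. TC2 and TC3 alone would already give condition coverage (each condition takes both true and false), while all four cover every combination of outcomes.

def check(x, y):
    if x == 0 or y == 0:
        return "0"
    return ""

assert check(0, 0) == "0"    # TC1: true  || true
assert check(0, 5) == "0"    # TC2: true  || false
assert check(55, 0) == "0"   # TC3: false || true
assert check(55, 5) == ""    # TC4: false || false
print("All condition combinations exercised")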
5. Basis Path Testing: In this technique, control flow graphs are made from code or
flowchart and then Cyclomatic complexity is calculated which defines the number of
independent paths so that the minimal number of test cases can be designed for each
independent path. Steps:
Make the corresponding control flow graph
Calculate the cyclomatic complexity
Find the independent paths
Design test cases corresponding to each independent path
V(G) = P + 1, where P is the number of predicate nodes in the flow graph
V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
V(G) = Number of non-overlapping regions in the graph
For a flow graph with nodes numbered 1 to 8 (the figure is not reproduced here), the
independent paths are:
#P1: 1 – 2 – 4 – 7 – 8
#P2: 1 – 2 – 3 – 5 – 7 – 8
#P3: 1 – 2 – 3 – 6 – 7 – 8
#P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
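The following sketch computes V(G) = E – N + 2 for a small control flow graph given as an edge list. The graph is assumed for illustration so that it matches the paths listed above (nodes 1 to 8, decisions at nodes 2 and 3, and a loop back from node 7 to node 1); it is not taken from a figure in this text.

# Edge list of the assumed flow graph
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (3, 6),
         (5, 7), (6, 7), (4, 7), (7, 8), (7, 1)]
nodes = {n for edge in edges for n in edge}

v_g = len(edges) - len(nodes) + 2   # V(G) = E - N + 2
print("V(G) =", v_g)                # -> 4, matching the 4 independent paths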
6. Loop Testing: Loops are widely used and these are fundamental to many algorithms
hence, their testing is very important. Errors often occur at the beginnings and ends of
loops.
Simple loops: For simple loops of size n, test cases are designed that:
1. Skip the loop entirely
2. Only one pass through the loop
3. 2 passes
4. m passes, where m < n
5. n-1, n, and n+1 passes (a sketch of these cases follows this list)
Nested loops: For nested loops, all the loops are set to their minimum count, and we
start from the innermost loop. Simple loop tests are conducted for the innermost loop
and this is worked outwards till all the loops have been tested.
Concatenated loops: Independent loops, one after another. Simple loop tests are
applied for each. If they're not independent, treat them like nested loops.
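Here is the sketch of the simple-loop cases referenced in item 5 above. The sum_first() function is hypothetical; the point is that the loop is exercised with 0, 1, 2, m < n, and n passes, and that attempting n+1 passes exposes a boundary defect.

def sum_first(items, k):
    total = 0
    for i in range(k):          # the loop under test
        total += items[i]
    return total

data = [1, 2, 3, 4, 5]
n = len(data)
for passes in (0, 1, 2, n - 1, n):   # skip, one, two, m < n, n passes
    print(passes, "passes ->", sum_first(data, passes))

try:
    sum_first(data, n + 1)      # n+1 passes
except IndexError:
    print("n+1 passes -> boundary defect exposed (IndexError)")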
Basis Path Testing
Prerequisite: Path Testing. Basis Path Testing is a white-box testing technique based on
the control structure of a program or a module. Using this structure, a control flow graph is
prepared and the various possible paths present in the graph are executed as a part of
testing. Therefore, by definition, Basis path testing is a technique of selecting the paths in
the control flow graph, that provide a basis set of execution paths through the program or
module. Since this testing is based on the control structure of the program, it requires
complete knowledge of the program’s structure. To design test cases using this technique,
four steps are followed :
1. Construct the Control Flow Graph
2. Compute the Cyclomatic Complexity of the Graph
3. Identify the Independent Paths
4. Design Test cases from Independent Paths
Let’s understand each step one by one. 1. Control Flow Graph – A control flow graph (or
simply, flow graph) is a directed graph which represents the control structure of a program
or module. A control flow graph (V, E) has V number of nodes/vertices and E number of
edges in it. A control flow graph can also have :
Junction Node – a node with more than one arrow entering it.
Decision Node – a node with more than one arrow leaving it.
Region – area bounded by edges and nodes (area outside the graph is also counted as a
region.).
Flow graph figures for the common control constructs (If-Then-Else, Do-While,
While-Do, and Switch-Case) are omitted here.
Software Testing Metrics
Software testing metrics are quantifiable indicators of the software testing process progress,
quality, productivity, and overall health. The purpose of software testing metrics is to
increase the efficiency and effectiveness of the software testing process while also assisting
in making better decisions for future testing by providing accurate data about the testing
process. A metric expresses the degree to which a system, system component, or process
possesses a certain attribute in numerical terms.
Importance of Metrics in Software Testing:
Test metrics are essential in determining the software’s quality and performance.
Developers may use the right software testing metrics to improve their productivity.
Early Problem Identification: By measuring metrics such as defect density and defect
arrival rate, testing teams can spot trends and patterns early in the development process.
Allocation of Resources: Metrics identify regions where testing efforts are most needed,
which helps with resource allocation optimization. By ensuring that testing resources
are concentrated on important areas, this enhances the strategy for testing as a whole.
Monitoring Progress: Metrics are useful instruments for monitoring the advancement of
testing. They offer insight into the quantity of test cases that have been run, their
completion rate, and if the testing effort is proceeding according to plan.
Continuous Improvement: Metrics offer input on the testing procedure, which helps to
foster a culture of continuous development.
Software testing metrics are essential for evaluating the quality and efficiency of testing
processes. They provide critical data for early problem detection, resource allocation, and
progress monitoring, helping to improve overall testing practices.
Types of Software Testing Metrics:
Software testing metrics are divided into three categories:
1. Process Metrics: A project’s characteristics and execution are defined by process
metrics. These features are critical to the SDLC process’s improvement and
maintenance (Software Development Life Cycle).
2. Product Metrics: A product’s size, design, performance, quality, and complexity are
defined by product metrics. Developers can improve the quality of their software
development by utilizing these features.
3. Project Metrics: Project Metrics are used to assess a project’s overall quality. It is used
to estimate a project’s resources and deliverables, as well as to determine costs,
productivity, and flaws.
It is critical to determine the appropriate testing metrics for the process. A few points to
keep in mind:
Before creating the metrics, carefully select your target audiences.
Define the aim for which the metrics were created.
Prepare measurements based on the project’s specific requirements. Assess the financial
gain associated with each statistic.
Match the measurements to the project lifecycle phase for the best results.
The major benefit of automated testing is that it allows testers to complete more tests in less
time while also covering a large number of variations that would be practically difficult to
calculate manually.
Manual Test Metrics: What Are They and How Do They Work?
Manual testing is carried out in a step-by-step manner by quality assurance experts. Test
automation frameworks, tools, and software are used to execute tests in automated testing.
There are advantages and disadvantages to both human and automated testing. Manual
testing is a time-consuming technique, but it allows testers to deal with more complicated
circumstances. There are two sorts of manual test metrics:
1. Base Metrics: Analysts collect data throughout the development and execution of test
cases to provide base metrics. These metrics are sent to test leads and project managers
through a project status report, and they serve as the inputs for the calculated metrics.
Examples include:
The total number of test cases
The total number of test cases completed.
2. Calculated Metrics: Data from base metrics are used to create calculated metrics. The
test lead collects this information and transforms it into more useful information for
tracking project progress at the module, tester, and other levels. It’s an important aspect of
the SDLC since it allows developers to make critical software changes.
Other Important Metrics:
The following are some of the other important software metrics:
Defect metrics: Defect metrics help engineers understand the many aspects of software
quality, such as functionality, performance, installation stability, usability,
compatibility, and so on.
Schedule Adherence: Schedule Adherence’s major purpose is to determine the time
difference between a schedule’s expected and actual execution times.
Defect Severity: The severity of the problem allows the developer to see how the defect
will affect the software’s quality .
Test case efficiency: Test case efficiency is a measure of how effective test cases are at
detecting problems.
Defects finding rate: It is used to determine the pattern of flaws over a period of time.
Defect Fixing Time: The amount of time it takes to remedy a problem is known as defect
fixing time.
Test Coverage: It specifies the number of test cases assigned to the program. This
metric helps ensure that testing is thorough. It also aids in the verification of code flow
and the testing of functionality.
Defect cause: It’s utilized to figure out what’s causing the problem.
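As an illustration of how calculated metrics are derived from base metrics, here is a small sketch; all of the numbers are invented, and defect density per KLOC is used as a representative example rather than a metric mandated by this text.

# Base metrics (invented figures)
total_test_cases = 200
executed_test_cases = 180
defects_found = 36
code_size_kloc = 12.0          # size in thousands of lines of code

# Calculated metrics derived from them
execution_percentage = executed_test_cases / total_test_cases * 100
defect_density = defects_found / code_size_kloc

print(f"Test execution: {execution_percentage:.1f}%")        # 90.0%
print(f"Defect density: {defect_density:.1f} defects/KLOC")  # 3.0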
The various stages of the test metrics lifecycle are:
1. Analysis:
The metrics must be recognized.
Define the QA metrics that have been identified.
2. Communicate:
Stakeholders and the testing team should be informed about the requirement for
metrics.
Educate the testing team on the data points that must be collected in order to process
the metrics.
3. Evaluation:
Data should be captured and verified.
Using the data collected to calculate the value of the metrics
4. Report:
Create a strong conclusion for the report.
Distribute the report to the appropriate stakeholder and representatives.
Gather input from stakeholder representatives.
Loop Software Testing
Loop Testing is a type of software testing that is performed to validate loops. It is one of
the types of Control Structure Testing. Loop testing is a white box testing technique and
is used to test loops in the program.
Objectives of Loop Testing:
The objectives of Loop Testing are:
To fix the infinite loop repetition problem.
To assess performance.
To identify loop initialization problems.
To determine uninitialized variables.
Types of Loop Testing:
Loop testing is classified on the basis of the types of the loops:
1. Simple Loop Testing:
Testing performed on a simple loop is known as simple loop testing. A simple loop is
basically a normal "for", "while", or "do-while" loop in which a condition is given, and the
loop runs and terminates according to the true or false outcome of that condition.
This type of testing is performed mainly to check whether the loop's condition is
sufficient to terminate the loop after some point of time.
2. Nested Loop Testing:
Testing performed in a nested loop is known as nested loop testing. A nested loop is
basically one loop inside another loop. In a nested loop there can be a finite number of
loops inside a loop, forming a nest. Each loop may be any of the three kinds: for, while,
or do-while.
3. Concatenated Loop Testing:
Testing performed in a concatenated loop is known as concatenated loop testing. It is
performed on concatenated loops, which are loops that come one after another, forming a
series. The difference between nested and concatenated loops is that in nested loops one
loop is inside another, whereas in concatenated loops one loop follows another.
4. Unstructured Loop Testing:
Testing performed in an unstructured loop is known as unstructured loop testing. An
unstructured loop is a combination of nested and concatenated loops. It is basically a
group of loops that are in no particular order.
Advantages of Loop Testing:
The advantages of Loop testing are:
Loop testing limits the number of iterations of a loop.
Loop testing ensures that the program doesn't go into an infinite loop.
Loop testing ensures the initialization of every variable used inside the loop.
Loop testing helps in the identification of different problems inside the loop.
Loop testing helps in the determination of loop capacity.
Disadvantages of Loop Testing:
The disadvantages of Loop testing are:
Loop testing is mostly effective for bug detection in low-level software.
It is of little use for detecting bugs that lie outside of loops.
Data Flow Testing
Data Flow Testing is a structural testing method that examines how variables are defined
and used throughout a program. It uses control flow graphs to identify paths where
variables are defined and then utilized, aiming to uncover anomalies such as unused
variables or incorrect definitions. By focusing on the flow of data, it helps ensure that
variables are properly handled and used in the code.
Data Flow Testing is a type of structural testing . It is a method that is used to find the test
paths of a program according to the locations of definitions and uses of variables in the
program. It has nothing to do with data flow diagrams. Furthermore, it is concerned with:
Statements where variables receive values,
Statements where these values are used or referenced.
By analyzing control flow graphs, this technique aims to identify issues such as unused
variables or incorrect definitions, ensuring proper handling of data within the code.
Types of Data Flow Testing
1. Testing for All-Du-Paths: All-Du-Paths stands for "All Definition-Use Paths." Using
this technique, every possible path from a variable's definition to every usage point is
tested.
2. All-Du-Path Predicate Node Testing: This technique focuses on predicate nodes, or
decision points, in the control flow graph.
3. All-Uses Testing: This type of testing checks every place a variable is used in the
application.
4. All-Defs Testing: This type of testing examines every place a variable is specified within
the application’s code.
5. Testing for All-P-Uses: All-P-Uses stands for "All Predicate Uses." Using this method,
every use of a variable in a predicate (a decision) is tested.
6. All-C-Uses Test: It stands for “All Computation Uses.” Testing every possible path
where a variable is used in calculations or computations is the main goal of this
technique.
7. Testing for All-I-Uses: All-I-Uses stands for “All Input Uses.” With this method, every
path that uses a variable obtained from outside inputs is tested.
8. Testing for All-O-Uses: It stands for “All Output Uses.” Using this method, every path
where a variable has been used to produce output must be tested.
9. Testing of Definition-Use Pairs: It concentrates on particular pairs of definitions and
uses for variables.
10. Testing of Use-Definition Paths: This type of testing examines the routes that lead from
a variable’s point of use to its definition.
Advantages of Data Flow Testing:
Data Flow Testing is used to find the following issues-
To find a variable that is used but never defined,
To find a variable that is defined but never used,
To find a variable that is defined multiple times before it is used,
Deallocating a variable before it is used.
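The sketch below shows, in one invented function, the kinds of define/use anomalies listed above that data flow testing is designed to catch.

def report(items):
    count = 0                # 'count' defined...
    count = len(items)       # ...and redefined before any use (anomaly)
    draft_label = "draft"    # defined but never used (anomaly)
    return count
    # A use with no reaching definition, e.g. 'return total' here,
    # would be the "used but never defined" anomaly.

print(report([1, 2, 3]))     # -> 3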
Disadvantages of Data Flow Testing
Time consuming and costly process
Requires knowledge of programming languages
Mutation Testing
Mutation Testing is a type of Software Testing that is performed to design new software
tests and also evaluate the quality of already existing software tests. Mutation testing is
about modifying a program in small ways. Its purpose is to help the tester develop
effective tests or locate weaknesses in the test data used for the program.
History of Mutation Testing:
Richard Lipton first proposed mutation testing in 1971. Although its high cost once
limited the use of mutation testing, it is now widely used for languages such as Java and
XML.
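A minimal sketch of the idea, using an invented price() function: a mutant is produced by one small change, and a test suite is adequate only if some test distinguishes (kills) the mutant.

def price(qty, unit):
    return qty * unit            # original program

def price_mutant(qty, unit):
    return qty + unit            # mutant: '*' changed to '+'

# A weak test does NOT kill the mutant, because 2*2 == 2+2:
assert price(2, 2) == 4
assert price_mutant(2, 2) == 4   # mutant survives this test

# A stronger test kills it, showing the earlier test data was weak:
assert price(3, 5) == 15
assert price_mutant(3, 5) != 15  # mutant detected (killed)
print("Mutant killed by the improved test")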
Static Testing
1. Review
In static testing, the review is a process or technique that is performed to find potential
defects in the design of the software. It is a process to detect and remove errors and defects
in the different supporting documents like software requirements specifications. People
examine the documents and sorted out errors, redundancies, and ambiguities. Review is of
four types:
1. Informal: In an informal review the creator of the documents put the contents in front of
an audience and everyone gives their opinion and thus defects are identified in the early
stage.
2. Walkthrough: It is basically performed by an experienced person or expert to check the
defects so that there might not be problems further in the development or testing phase.
3. Peer review: Peer review means checking documents of one another to detect and fix
defects. It is basically done in a team of colleagues.
4. Inspection: Inspection is basically the verification of documents by the higher authority
like the verification of software requirement specifications (SRS).
2. Static Analysis
Static Analysis includes the evaluation of the code quality that is written by developers.
Different tools are used to do the analysis of the code and comparison of the same with the
standard. It also helps in the identification of the following defects:
1. Unused variables.
2. Dead code.
3. Infinite loops.
4. Variable with an undefined value.
5. Wrong syntax.
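For illustration, the invented function below packs several of these statically detectable defects into a few lines; a static analysis tool would flag each of them without running the code.

def process(data):
    unused_flag = True        # unused variable
    if data is None:
        return []
    return sorted(data)
    print("finished")         # dead code: unreachable after return

print(process([3, 1, 2]))     # -> [1, 2, 3]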
Static Analysis is of three types:
1. Data Flow: Data flow analysis tracks how data values are defined and used as they
move through the program.
2. Control Flow: Control flow is basically how the statements or instructions are executed.
3. Cyclomatic Complexity: Cyclomatic complexity defines the number of independent
paths in the control flow graph made from the code or flowchart so that a minimum
number of test cases can be designed for each independent path.
How Static Testing is Performed?
Below are the steps that can be followed to perform static testing:
1. Planning: This step involves defining what needs to be tested, setting objectives,
determining the scope of testing, and preparing a testing strategy. This should involve
identifying the software components to be tested, developing the testing methods, and
identifying the tools to be used.
2. Prepare artifacts: In this step, necessary artifacts like source codes, design documents,
requirement documents, and test cases are prepared.
3. Perform static analysis: Static analysis is conducted in this phase where the code is
reviewed and analyzed for compliance with coding standards, code quality, and security
issues using specialized static analysis tools without executing the code.
4. Perform code reviews: Code reviews are performed where a small team of experts
systematically reviews the code and finds potential errors using various methods.
5. Report and document bugs: Bugs identified during static testing are reported and
documented.
6. Analyze results: The results collected during static testing are analyzed to determine the
quality of the software product.
Benefits of Static Testing
Below are some of the benefits of static testing:
1. Early defect detection: Static testing helps in early defect detection when they are most
easy and cost-effective to fix.
2. Prevention of common issues: Static testing helps to fix common issues like syntax
errors, null pointer exceptions, etc. Addressing these issues early in development helps
the teams to avoid problems later.
3. Improved code quality: Static testing helps to make sure that the code is easy to
maintain and well-structured. This leads to a higher quality code.
4. Reduced costs: Early bug detection in static testing helps to fix them early in the
development thus saving time, effort, and cost.
5. Immediate feedback: Static testing provides immediate evaluation and feedback on the
software during each phase while developing the software product.
6. Helps to find exact bug location: Static testing helps to find the exact bug location as
compared to dynamic testing.
Limitations of Static Testing
Below are some of the limitations of static testing:
1. Detect Some Issues: Static testing may not uncover all issues that could arise during
runtime. Some defects may appear only during dynamic testing when the software runs.
2. Depends on the Reviewer’s Skills: The effectiveness of static testing depends on the
reviewer’s skills, experience, and knowledge.
3. Time-consuming: Static testing can be time-consuming when working on large and
complex projects.
4. No Runtime Environment: It is conducted without executing the code. This means it
cannot detect runtime errors such as memory leaks, performance issues, etc.
5. Prone to Human Error: Static testing is prone to human error due to manual reviews
and inspections techniques being used.
Best Practices for Static Testing
Below are some of the best practices for static testing:
1. Define Clear Objectives: Establish the objectives and scope of static testing early in the
project.
2. Develop Checklist: Create a checklist for reviews and coding standards that align with
the industry best practices and specific project requirements.
3. Focus on High-Risk Areas: Prioritize static testing on high-risk areas of the codebase
that are more likely to contain defects.
4. Team Training: Provide training to the team members on static testing techniques, tools,
and best practices. Ensure everyone understands how to perform static testing.
5. Prevent Test Execution Delays: Static testing can begin before executable code exists,
which helps manage and reduce time and cost when test execution is delayed.
6. Track Review Activities: It is good to plan the review activities and track them as
walkthroughs and reviews are usually merged into peer reviews.
7. Keep Process Formal: For efficient static testing, it is very important to keep the
process and project culture formal.
8. Regular Tool Updates: Keep static testing tools up to date to ensure they can effectively
detect the new types of issues.
Static Testing Tools
Some of the most commonly used static testing tools are:
1. Checkstyle
Checkstyle is a static analysis tool that helps developers write Java code that adheres to
a coding standard by automating the process of checking Java code.
Features:
It can verify the code layout and formatting issues.
It can help to identify the method design problems and class design problems.
It is a highly configurable tool that can support almost any coding standard, such as
Google Java Style and the Sun code conventions.
2. Soot
Soot is a Java optimization framework that has several analysis and transformation tools.
Features:
It can detect unnecessary code and thus improve the overall code quality.
It is a framework for analyzing and transforming Java and Android applications to test
aspects like named modules and modular jar files, automatic modules, exploded
modules, etc.
3. SourceMeter
SourceMeter is a static testing tool for static source code analysis of various programming
languages like C/ C++, Java, C#, Python, and RPG Projects.
Features:
It helps in the easy identification of vulnerable spots of the system under development
from the source code.
It can analyze code in multiple programming languages and generates reports that help
developers to make informed decisions.
The output of analysis and quality of analyzed source code can be used to enhance the
product.
4. Lint
Lint is a static analysis tool that scans code to flag programming errors and bugs.
Features:
It helps enforce coding standards and prevent errors and bugs in the code.
It helps to identify and correct common code mistakes without having to run the
application.
5. SonarQube
SonarQube is a static testing open-source tool to inspect code quality continuously.
Features:
It analyses and identifies the technical debt, bugs, and vulnerabilities across different
programming languages.
It provides support for 29 languages and analyzes the quality of all the languages in
your projects.
It has features like custom rules, integration with code repositories, detailed code
reports, and extensible plugins.
Progressive Testing
Progressive Testing is also known as Incremental Testing. In software testing,
incremental testing refers to testing modules one after another. When an application has
parent-child modules, the related modules must be tested first.
Let’s understand Progressive Testing/Increment Testing more elaborately by going a little
bit deeper into it. This Increment Testing is considered a sub-testing technique that comes
under Integration Testing. This testing acts as an approach/strategy to perform integration
testing on software products rather than direct testing activities.
After completion of Unit Testing over each component of the software, then Integration
Testing is performed to ensure proper interface and interaction between components of the
system. Incremental testing or Progressive testing is treated as a partial phase of Integration
testing. First, it performs Integration testing on standalone components thereafter it goes on
integrating components and performs integration testing over them accordingly. As the
components are integrated in an incremental manner that’s why also it is termed
Incremental Testing.
Working of Incremental Testing
Unit Testing: To make sure that every unit performs as planned and satisfies its
requirements, it is independently tested, usually with the use of automated testing
frameworks. Units are considered to be operating correctly in isolation if they have
completed their unit tests.
Parameterized Testing: To cover a range of situations and edge cases, units are
evaluated with different input parameters during the testing process. Given that many
inputs may cause the unit to behave differently, this helps to guarantee that testing is
robust and complete.
Isolation: The units are maintained in isolation, which means they haven’t been
integrated with other system components until they pass unit tests. As a result, testing
individual units may be done with greater focus and control because interactions with
other components don’t complicate matters.
Testing for Incremental Integration: Following testing and verification, individual
units are progressively included into larger system components or subsystems. Testing
the relationships between units, integrating them incrementally, and finding any
compatibility or integration problems as they appear are all part of incremental
integration testing.
Progressive Testing Approaches
1. Bottom-up approach – In the Bottom-up approach all components are combined one by
one from bottom level to top level until all components are integrated.
2. Top-down approach – In the Top-down approach all components are combined one by
one from the top level to the bottom level until all components are integrated. Stubs are
used to stand in for lower-level components that are not yet available (a sketch follows
this list).
3. Functional approach – In the Functional approach testing is carried out horizontally
means integration is done based on functionality. That’s why it is also named
Functional Increment.
4. Hybrid approach – In the Hybrid approach both the Top-down and Bottom-up
approaches are followed, exploiting the advantages of each.
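The sketch below illustrates the stub idea from the Top-down approach (an invented example): the top-level checkout() module is integration-tested first with a stub in place of an unfinished tax module, and the real module is integrated and retested later.

def tax_stub(amount):
    return 0.0                   # stub standing in for the real tax module

def checkout(amount, tax_fn=tax_stub):
    return amount + tax_fn(amount)

# Step 1: integration test of the top-level module using the stub
assert checkout(100.0) == 100.0

# Step 2: the real child module is integrated and the test is repeated
def real_tax(amount):
    return amount * 0.08

assert checkout(100.0, tax_fn=real_tax) == 108.0
print("Top-down incremental integration passed")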
Key Points of Progressive Testing
Increment testing involves the execution of integration tests over each of the
components.
To fulfill the requirements of other necessary units or components, drivers and stubs are
used as substitutes.
However, stubs may increase the complexity of the software.
It is easy to detect defects/faults in small subsystems as compared to large subsystems.
It is a time-consuming process; implementation requires a lot of time.
The incremental approach gives an advantage in the early detection of any defects over
that of the non-incremental approach.
Best Practices for Implementing Progressive Testing
Prioritize Test Cases: Test case prioritization should be done by taking into account
factors like risk, criticality, and business effect. It reduces the possibility of problems
that could affect the software’s overall quality.
Feedback Loop: To promote cooperation and communication between the development
and testing teams, set up a feedback loop. Motivate developers to take part in testing
and to quickly report on test findings.
Start Early: In the software development lifecycle, start testing as soon as feasible. This
lowers the possibility of costly repairs later in the process by enabling the early
detection and correction of flaws.
Automate Testing: Try to automate as much testing as you can, particularly repetitive
activities and regression tests. Improved test coverage, early defect discovery, and faster
test execution are all made possible by automated testing.
Track and Measure: Keep a close eye on the status of testing, test coverage, and defect
metrics. Utilize this information to identify areas in need of development and to help
you decide which tests should come first.
Regression Testing
Regression testing is a crucial aspect of software engineering that ensures the stability
and reliability of a software product. It involves retesting the previously tested
functionalities to verify that recent code changes haven’t adversely affected the existing
features.
By identifying and fixing any regression or unintended bugs, regression testing helps
maintain the overall quality of the software. This process is essential for software
development teams to deliver consistent and high-quality products to their users.
When to do regression testing?
When new functionality is added to the system and the code has been modified to
absorb and integrate that functionality with the existing code.
When some defect has been identified in the software and the code is debugged to fix it.
When the code is modified to optimize its working.
Process of Regression testing
Firstly, whenever we make changes to the source code for any reason, such as adding new
functionality or optimization, the program may fail against the previously designed test
suite when executed. After the failure, the source code is debugged to identify the bugs in
the program. Once the bugs in the source code are identified, appropriate modifications
are made.
Then appropriate test cases are selected from the already existing test suite which covers all
the modified and affected parts of the source code. We can add new test cases if required.
In the end, regression testing is performed using the selected test cases.
Techniques for the selection of Test cases for Regression Testing
Select all test cases: In this technique, all the test cases are selected from the already
existing test suite. It is the simplest and safest technique but not very efficient.
Select test cases randomly: In this technique, test cases are selected randomly from the
existing test suite, but it is only useful if all the test cases are equally good in their fault
detection capability which is very rare. Hence, it is not used in most of the cases.
Select modification traversing test cases: In this technique, only those test cases are
selected that cover and test the modified portions of the source code and the parts that
are affected by these modifications.
Select higher priority test cases: In this technique, priority codes are assigned to each
test case of the test suite based upon their bug detection capability, customer
requirements, etc. After assigning the priority codes, test cases with the highest
priorities are selected for the process of regression testing. The test case with the
highest priority has the highest rank. For example, a test case with priority code 2 is less
important than a test case with priority code 1.
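As a sketch of the higher-priority selection technique just described, the code below ranks an invented test suite by priority code (code 1 outranks code 2) and picks the cases that fit a limited regression budget.

suite = [
    {"name": "login_works",       "priority": 1},
    {"name": "report_formatting", "priority": 3},
    {"name": "payment_flow",      "priority": 1},
    {"name": "profile_update",    "priority": 2},
]

budget = 3   # only three test cases fit in this regression run
selected = sorted(suite, key=lambda tc: tc["priority"])[:budget]
for tc in selected:
    print("run:", tc["name"])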
Top Regression Testing Tools
In regression testing, we generally select the test cases from the existing test suite itself
and hence, we need not compute their expected output, and it can be easily automated due
to this reason. Automating the process of regression testing will be very effective and
time-saving. The most commonly used tools for regression testing are:
1. Selenium
Open Source: Selenium is an open-source tool, making it freely available and
accessible for developers and testers.
Browser Compatibility: Supports multiple browsers , including Chrome, Firefox,
Safari, and Edge, ensuring tests can be run across different environments.
Programming Language Support: Allows writing tests in various programming
languages such as Java, Python, C#, Ruby, and JavaScript, providing flexibility for
testers.
Cross-Platform: Capable of running on different operating systems, including
Windows, macOS, and Linux, which enhances the tool’s portability.
Web Application Testing: Primarily designed for automating web applications, making
it ideal for regression testing of web-based systems.
Extensive Community Support: Boasts a large and active community, offering a wealth
of resources, plugins, and extensions to aid in test automation.
Integration Capabilities: Integrates well with other tools such as Jenkins for continuous
integration and continuous deployment (CI/CD), facilitating automated regression
testing in development pipelines.
2. Ranorex Studio
Comprehensive Testing Solution: Ranorex Studio provides a complete testing solution
that supports end-to-end regression testing, including both functional and non-
functional tests.
User-Friendly Interface: The tool offers an intuitive and user-friendly interface, making
it accessible for both beginners and experienced testers.
Cross-Platform Testing: It supports cross-platform testing, enabling tests to be executed
on desktop, web, and mobile applications, ensuring broad test coverage.
Codeless Test Automation: Ranorex Studio offers codeless automation through its
capture-and-replay functionality, allowing testers to create automated tests without
extensive programming knowledge.
Robust Reporting: The tool provides detailed and customizable test reports, helping
teams to easily identify issues and track the quality of the software over time.
Integration Capabilities: It integrates seamlessly with popular development and CI/CD
tools such as Jenkins, Azure DevOps, and Git, promoting a streamlined workflow and
continuous testing.
Data-Driven Testing: Ranorex Studio supports data-driven testing, allowing testers to
run the same set of tests with different data inputs, enhancing test coverage and
reliability.
3. testRigor
AI-Powered Test Automation: testRigor utilizes artificial intelligence to automate the
creation and maintenance of regression tests, reducing the effort and time required for
manual testing.
Natural Language Processing (NLP): The tool allows users to write test cases in plain
English, making it accessible to both technical and non-technical team members. This
feature simplifies test creation and improves collaboration.
Codeless Test Creation: Users can create and execute test cases without writing any
code, which speeds up the testing process and reduces the dependency on developers.
Cross-Browser and Cross-Platform Testing: testRigor supports testing across multiple
browsers and platforms, ensuring that the software works consistently in different
environments.
Self-Healing Tests: The tool automatically updates test scripts to adapt to minor changes
in the application’s UI, minimizing the maintenance overhead commonly associated
with regression testing.
4. Sahi Pro
Cross-browser Testing: Supports multiple browsers, ensuring consistent performance
across different environments.
Ease of Use: User-friendly interface with scriptless record and playback functionality,
making it accessible for testers with varying levels of expertise.
Robust Reporting: Detailed reports and logs that help in tracking and analyzing test
results efficiently.
Integrated Suite: Offers integration with various Continuous Integration (CI) tools like
Jenkins, enhancing the automation workflow.
Scalability: Capable of handling large-scale test automation projects with ease.
Script Flexibility: Supports both scriptless testing and advanced scripting using
JavaScript, providing flexibility for complex test scenarios.
5. Testlio
Global Network of Testers: Access to a diverse pool of professional testers from around
the world. Ensures comprehensive test coverage across different devices, operating
systems, and locations.
On-Demand Testing: Flexible scheduling allows for testing when needed, fitting into
the development cycle seamlessly. Enables quick turnaround times for regression
testing after code changes.
Integrated Platform: Combines test management, test execution, and reporting in a
single platform. Simplifies the regression testing process and provides a centralized
view of test results.
Comprehensive Reporting: Detailed reports with actionable insights. Helps identify and
prioritize issues, making it easier to address regressions promptly.
Advantages of Regression Testing
Automated unit testing
Comprehensive test coverage
System integration
Faster test execution completion
Improved developer productivity
Parallel testing
Reduced costs
Regression testing improves product quality
Reusability
Scalability
Time efficiency
Disadvantages of Regression Testing
It can be time and resource-consuming if automated tools are not used.
It is required even after very small changes in the code.
Time and Resource Constraints.
Among the significant risks associated with regression testing are the time and
resources required to perform it.
Incomplete or Insufficient Test Coverage.
False Positives and False Negatives.
Test Data Management Challenges.
Regression Testing Techniques
Regression testing ensures that software changes haven't broken existing functionality. It
involves re-running previously executed test cases after changes, ensuring no unintended side
effects are introduced. Techniques include retesting all test cases, selectively testing affected
areas, and prioritizing tests based on importance. Automation can speed up the process, and
tools like Selenium and Appium are commonly used.
1. Retest All:
This involves re-running the entire test suite after any changes, ensuring comprehensive
coverage.
It's a thorough approach but can be time-consuming and resource-intensive.
Suitable for situations where significant changes have been made or when a complete check
of the software is needed.
2. Selective Regression Testing:
This technique focuses on re-running tests that are most likely to be affected by the changes.
It involves identifying and executing a subset of test cases, reducing the overall time and
effort required.
This approach is particularly useful for situations where changes are localized or when the
impact of changes is well-understood.
3. Test Case Prioritization:
This involves prioritizing test cases based on their importance, risk, and likelihood of failure.
High-priority tests, such as those covering critical functionalities or high-risk areas, are run
first.
This allows for early detection of critical issues and efficient resource allocation.
4. Unit Regression Testing:
This type of regression testing focuses on individual units of code, ensuring that changes to a
specific unit haven't introduced new bugs.
It's a crucial component of unit testing and is often performed at the developer level.
5. Complete Regression Testing:
This involves testing the entire system after implementing changes, providing a more
comprehensive approach.
It's typically used after introducing significant changes or when a thorough check of the
software is required.
6. Partial Regression Testing:
This technique involves testing a portion of the system after minor changes, focusing on areas
that are likely to be affected.
It's a more efficient approach than complete regression testing when the changes are limited.
7. Automated Regression Testing:
This involves using automation tools to execute test scripts and automate the regression
testing process.
It significantly reduces manual effort, speeds up execution, and minimizes human error.
Popular tools for automated regression testing include Selenium, Appium, and Cypress.
8. Corrective Regression Testing:
This type of regression testing is used to verify that a bug fix has been effective and hasn't
introduced new issues.
It involves re-running existing test cases to ensure the bug is resolved and that no new
functionality has been affected.
9. Progressive Regression Testing:
This approach involves building upon the existing test suite with each development cycle,
making it a continuous and evolving process.
It ensures that new features and functionality are integrated smoothly without negatively
impacting existing functionality.
10. Retest All Test Cases:
This method involves running all existing test cases to ensure that no functionality has been
negatively impacted by changes.
It's a comprehensive approach but can be time-consuming.