Software Testing Techniques
TABLE OF CONTENTS
Independent Testing
Test Planning and Estimation
Test Monitoring and Control
Configuration Management
Risks and Testing
Defect Management
I. TEST ORGANIZATION

I.1 INDEPENDENT TESTING
•Independent testing refers to the process of evaluating
software or a system by a third-party entity that is not
directly involved in its development. The purpose of
independent testing is to ensure objectivity, impartiality,
and thoroughness in assessing the quality, functionality,
and performance of the software.
DEGREES OF INDEPENDENCE IN TESTING

•No independent testers; the only form of testing available is developers testing their own code
•Independent developers or testers within the development team or project team; this could be developers testing their colleagues' products
•Independent test team or group within the organization, reporting to project management or executive management
•Independent testers external to the organization, working on-site or off-site

POTENTIAL BENEFITS OF TEST INDEPENDENCE
•Independent testers are likely to recognize different kinds of failures than developers, because of their different backgrounds, technical perspectives, and biases
•An independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system
•Independent testers of a vendor can report in an upright and objective manner about the system under test, without (political) pressure from the company that hired them

POTENTIAL DRAWBACKS OF TEST INDEPENDENCE
•Isolation from the development team may lead to a lack of collaboration, delays in providing feedback to the development team, or an adversarial relationship with the development team
•Developers may lose a sense of responsibility for quality
•Independent testers may be seen as a bottleneck
•Independent testers may lack some important information (e.g., about the test object)
I.2 TASKS OF A TEST MANAGER AND TESTER

TEST MANAGER
•A test manager is a key role in software development organizations, responsible for overseeing the planning, execution, and monitoring of testing activities throughout the software development lifecycle.

Typical test manager tasks may include:
•Write and update the test plan(s)
•Coordinate the test plan(s) with project managers, product owners, and others
•Develop or review a test policy and test strategy for the organization

TESTER
•Depending on the risks related to the product and the project, and the software development lifecycle model selected, different people may take over the role of tester at different test levels.

Typical tester tasks may include:
•Review and contribute to test plans
•Identify and document test conditions, and capture traceability between test cases, test conditions, and the test basis
•Design, set up, and verify test environment(s), often coordinating with system administration and network management
•Design and implement test cases and test procedures
II. TEST PLANNING AND ESTIMATION
II.1 PURPOSE AND CONTENT OF A TEST PLAN
•A test plan is a comprehensive document that
outlines the approach, scope, resources, and
schedule for testing activities within a software
development project.
TEST PLANNING ACTIVITIES

Test planning activities may include the following, and some of these may be documented in a test plan:

The content of test plans varies and can extend beyond the topics identified above.
II.2 TEST STRATEGY AND TEST APPROACH

A test strategy provides a generalized description of the test process, usually at the product or organizational level. Common types of test strategies include:

•Analytical: This type of test strategy is based on an analysis of some factor (e.g., requirement or risk). Risk-based testing is an example of an analytical approach, where tests are designed and prioritized based on the level of risk.
•Model-Based: In this type of test strategy, tests are designed based on some model of some required aspect of the product, such as a function, a business process, an internal structure, or a non-functional characteristic (e.g., reliability). Examples of such models include business process models, state models, and reliability growth models.
•Methodical: This type of test strategy relies on making systematic use of some predefined set of tests or test conditions, such as a taxonomy of common or likely types of failures, a list of important quality characteristics, or company-wide look-and-feel standards for mobile apps or web pages.
•Process-compliant (or standard-compliant): This type of test strategy involves analyzing, designing, and implementing tests based on external rules and standards, such as those specified by industry-specific standards or by process documentation.
•Reactive: In this type of test strategy, testing is reactive to the component or system being tested, and the events occurring during test execution, rather than being pre-planned (as the preceding strategies are). Tests are designed and implemented, and may immediately be executed in response to knowledge gained from prior test results. Exploratory testing is a common technique employed in reactive strategies.
•Regression-averse: This type of test strategy is motivated by a desire to avoid regression of existing capabilities. It includes reuse of existing testware (especially test cases and test data), extensive automation of regression tests, and standard test suites (a small automation sketch follows this list).
•Directed (or consultative): This type of test strategy is driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts, who may be outside the test team or outside the organization itself.

An appropriate test strategy is often created by combining several of these types of test strategies. For example, risk-based testing (an analytical strategy) can be combined with exploratory testing (a reactive strategy); they complement each other and may achieve more effective testing when used together.
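As a minimal illustration of the regression-averse strategy, the sketch below keeps previously observed behavior as an automated pytest suite that is re-run after every change. The function under test (calculate_discount) and all values are hypothetical.

# Minimal sketch of a regression-averse strategy using pytest.
# calculate_discount is a hypothetical function under test.
import pytest

def calculate_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# Reused testware: (price, percent, expected) pairs locked in by earlier releases.
REGRESSION_CASES = [
    (100.0, 0, 100.0),
    (100.0, 25, 75.0),
    (200.0, 10, 180.0),
]

@pytest.mark.parametrize("price,percent,expected", REGRESSION_CASES)
def test_discount_regression(price, percent, expected):
    # Any change in behavior for these locked-in cases signals a regression.
    assert calculate_discount(price, percent) == expected

def test_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        calculate_discount(100.0, 150)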
II.3 ENTRY CRITERIA AND EXIT CRITERIA

•Entry Criteria define the preconditions that must be satisfied before a given test activity can start, such as the availability of testable requirements, test items, test data, test tools, and the test environment.
•Exit Criteria define the conditions or standards that must be satisfied for a task or activity to be considered complete or "done." Exit criteria ensure that the desired outcomes have been achieved and that the deliverables meet the required quality standards before moving on to the next phase or handing over the product. They provide a clear indication of when a task can be considered finished and ready for review or deployment. A sketch of an automated exit criteria check follows.
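As a minimal sketch, assuming illustrative thresholds (98% execution, 95% pass rate, zero open critical defects) that are not prescribed by the text, exit criteria can be evaluated mechanically:

# Sketch of an automated exit criteria check; all thresholds are
# illustrative assumptions, not prescribed values.
def exit_criteria_met(executed: int, planned: int,
                      passed: int, open_critical_defects: int) -> bool:
    execution_ratio = executed / planned if planned else 0.0
    pass_ratio = passed / executed if executed else 0.0
    return (execution_ratio >= 0.98      # nearly all planned tests executed
            and pass_ratio >= 0.95       # high pass rate among executed tests
            and open_critical_defects == 0)  # no critical defects left open

# The activity is "done" only when every condition holds.
print(exit_criteria_met(executed=196, planned=200,
                        passed=190, open_critical_defects=0))  # True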
II.4 TEST EXECUTION SCHEDULE
A “test execution schedule” is a document or plan
that outlines the timeline and sequence for executing
test cases during the software testing phase of a
project.
KEY ELEMENTS INCLUDED IN A TEST EXECUTION SCHEDULE

Test Case Execution Order: The schedule defines the order in which test cases will be executed. This order may be based on test case priorities, dependencies, or other criteria (a small ordering sketch follows this list).

Test Case Assignment: It specifies which test cases will be executed by which testers or test teams. Testers may be assigned specific areas of the application or particular test types.

Start and End Dates: The schedule includes start and end dates for each phase or iteration of test case execution. This helps project stakeholders understand the duration of the testing process.

Execution Cycles: Test execution schedules often consist of multiple cycles, where test cases are executed, issues are addressed, and tests are repeated. The schedule outlines the timing and objectives of each cycle.

Test Data and Environment Considerations: It identifies any specific test data or environmental configurations required for successful test execution. This ensures that the necessary resources are available when needed.

Resource Allocation: The schedule may detail the allocation of resources, including the number of testers, machines, and devices, for executing the test cases.

Testing Milestones: Major testing milestones, such as the completion of a test cycle, integration testing, or user acceptance testing, may be indicated on the schedule.

Reporting and Communication: The schedule specifies when and how test results will be reported to project stakeholders, including management, developers, and other relevant parties.

Regression Testing: If applicable, the schedule outlines when regression testing will occur and which test cases will be included in regression test cycles.

Parallel Testing: In some cases, the schedule may account for parallel testing, where multiple test teams execute test cases concurrently.

Integration with Development: It may indicate points in time when testing aligns with the development process, such as integration testing following development iterations.
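The execution-order element above can be sketched as a small scheduling routine: given hypothetical test case IDs, priorities, and dependencies, it produces an order that respects dependencies and runs the highest-priority ready test first.

# Sketch: derive a test case execution order from priorities and
# dependencies (Kahn's algorithm). IDs, priorities, and dependencies
# are hypothetical.
import heapq

def execution_order(depends_on, priority):
    remaining = {tc: set(deps) for tc, deps in depends_on.items()}
    # Heap of (priority, test case) for cases whose prerequisites are done;
    # lower number = higher priority.
    ready = [(priority[tc], tc) for tc, deps in remaining.items() if not deps]
    heapq.heapify(ready)
    order = []
    while ready:
        _, tc = heapq.heappop(ready)
        order.append(tc)
        for other, deps in remaining.items():
            if tc in deps:
                deps.discard(tc)
                if not deps:  # all prerequisites of 'other' are now done
                    heapq.heappush(ready, (priority[other], other))
    return order

priority = {"TC-LOGIN": 1, "TC-CART": 2, "TC-CHECKOUT": 1, "TC-REPORT": 3}
depends_on = {"TC-LOGIN": set(), "TC-CART": {"TC-LOGIN"},
              "TC-CHECKOUT": {"TC-CART"}, "TC-REPORT": set()}
print(execution_order(depends_on, priority))
# ['TC-LOGIN', 'TC-CART', 'TC-CHECKOUT', 'TC-REPORT']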
II.5 FACTORS INFLUENCING THE TEST EFFORT

Test effort estimation predicts how much work is needed to test a product, which helps ensure that testing achieves its goals. Four main groups of factors affect how much effort is required for testing:
PRODUCT CHARACTERISTICS
•The risks associated with the product
•The quality of the test basis
•The size of the product
•The complexity of the product domain
•The requirements for quality characteristics (e.g., security, reliability)
•The required level of detail for test documentation
•Requirements for legal and regulatory compliance

DEVELOPMENT PROCESS CHARACTERISTICS
•The stability and maturity of the organization
•The development model in use
•The test approach
•The tools used
•The test process
•Time pressure

PEOPLE CHARACTERISTICS
•Skills and experience of the people involved: this includes domain knowledge (experience with similar projects and products) and general testing expertise
•Team cohesion and leadership: a well-coordinated team with strong leadership can be more efficient

TEST RESULTS
•The number and severity of defects found: more defects, especially critical ones, may necessitate additional testing
•The amount of rework required: fixing defects often requires re-testing, adding to the total effort

II.6 TEST ESTIMATION TECHNIQUES

There are a number of estimation techniques used to determine the effort required for adequate testing. Two of the most commonly used techniques are:
•The metrics-based technique: estimating the test effort based on metrics of former similar projects, or based on typical values (a small sketch follows this list)
•The expert-based technique: estimating the test effort based on the experience of the owners of the testing tasks or by experts
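A minimal sketch of the metrics-based technique, assuming hypothetical historical figures: the effort per test case observed on former similar projects is extrapolated to the new project.

# Sketch of the metrics-based estimation technique; all figures are
# hypothetical historical data from former similar projects.
past_projects = [
    {"test_cases": 400, "effort_hours": 600},
    {"test_cases": 250, "effort_hours": 350},
    {"test_cases": 300, "effort_hours": 480},
]

total_cases = sum(p["test_cases"] for p in past_projects)
total_hours = sum(p["effort_hours"] for p in past_projects)
hours_per_case = total_hours / total_cases  # typical value derived from metrics

new_project_cases = 520
estimate = new_project_cases * hours_per_case
print(f"{hours_per_case:.2f} h/case -> estimated test effort: {estimate:.0f} h")
# 1.51 h/case -> estimated test effort: 783 h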
III. TEST MONITORING AND CONTROL

The purpose of test monitoring is to gather information and provide feedback and visibility about test activities. The information acquired can be used to measure progress against the plan or, in Agile-style projects, to check whether targets were met.
III.1 METRICS USED IN TESTING

Metrics can be collected during and at the end of test activities in order to assess:
•Progress against the planned schedule and budget
•Current quality of the test object
•Adequacy of the test approach
•Effectiveness of the test activities with respect to the objectives

Common test metrics include:
•Percentage of planned work done in test case preparation (or percentage of planned test cases implemented)
•Percentage of planned work done in test environment preparation
•Test case execution (e.g., number of test cases run/not run, test cases passed/failed, and/or test conditions passed/failed)
•Defect information (e.g., defect density, defects found and fixed, failure rate, and confirmation test results); a small computation sketch follows this list
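As a small sketch, the execution and defect metrics above can be computed from raw counts; the counts and the 40 KLOC size are illustrative assumptions.

# Sketch: computing common test metrics from raw counts (illustrative data).
results = {"passed": 180, "failed": 12, "blocked": 3, "not_run": 5}
defects_found = 25
kloc = 40  # size of the test object, thousands of lines of code (assumed)

executed = results["passed"] + results["failed"] + results["blocked"]
planned = executed + results["not_run"]

execution_pct = 100 * executed / planned
pass_pct = 100 * results["passed"] / executed
defect_density = defects_found / kloc  # defects per KLOC

print(f"test case execution: {execution_pct:.1f}% of planned")
print(f"pass rate: {pass_pct:.1f}% of executed")
print(f"defect density: {defect_density:.2f} defects/KLOC")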
III.2 PURPOSES, CONTENTS, AND AUDIENCES FOR TEST REPORTS

During test monitoring and control, the test manager regularly issues test progress reports for stakeholders. In addition to content common to test progress reports and test summary reports, typical test progress reports may also include:
•The status of the test activities and progress against the test plan
•Factors impeding progress
•Testing planned for the next reporting period
•The quality of the test object

When exit criteria are reached, the test manager issues the test summary report. This report provides a summary of the testing performed, based on the latest test progress report and any other relevant information.
Typical test summary reports may include:
The contents of a test report vary depending on the project, the organizational requirements, and the software development lifecycle. More complex projects may require longer, more thorough, and more detailed test reports, whereas Agile-style projects may need only a short daily meeting. Test reports must also be relevant to their audience: a report for senior management should reflect cost, failures, and successes, while a report for testers can go into more detail about where things are failing (a small tailoring sketch follows).
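A sketch of that audience tailoring, assuming a hypothetical status record: the same underlying data is rendered once for management and once for testers.

# Sketch: one set of monitoring data rendered for two audiences.
# All status values are hypothetical.
status = {
    "plan_progress": "14 of 16 planned test sessions complete",
    "quality": "2 critical defects open",
    "impediments": ["staging environment unavailable for one day"],
    "failing_areas": {"checkout": 5, "search": 1},
}

def management_summary(s):
    # Management view: progress and product quality, no low-level detail.
    return f"{s['plan_progress']}; {s['quality']}."

def tester_detail(s):
    # Tester view: where things are failing, plus impediments.
    lines = [f"- {area}: {n} failing tests" for area, n in s["failing_areas"].items()]
    lines += [f"- impediment: {i}" for i in s["impediments"]]
    return "\n".join(lines)

print(management_summary(status))
print(tester_detail(status))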
IV. CONFIGURATION MANAGEMENT

To properly support testing, configuration management may involve ensuring the following:
1. All test items are uniquely identified, version controlled, tracked for changes, and related to each other
2. All items of testware are uniquely identified, version controlled, tracked for changes, and related to each other and to versions of the test item(s), so that traceability can be maintained throughout the test process
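A minimal sketch of these points, assuming hypothetical identifiers: each piece of testware carries a unique ID, a version, and a reference to the versioned test item it relates to, so results remain traceable.

# Sketch: uniquely identified, version-controlled testware related to a
# versioned test item (all identifiers are hypothetical).
from dataclasses import dataclass

@dataclass(frozen=True)
class TestwareItem:
    item_id: str     # unique identifier
    version: str     # version under configuration control
    test_item: str   # the test item (and its version) this relates to

suite = [
    TestwareItem("TC-LOGIN-001", "1.3", "auth-service v2.4.1"),
    TestwareItem("TD-USERS-CSV", "1.1", "auth-service v2.4.1"),
]

# Any test result can be tied back to exact versions of both the testware
# and the test item, preserving traceability through the test process.
for item in suite:
    print(f"{item.item_id} v{item.version} -> {item.test_item}")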
V. RISKS AND TESTING

V.1 PRODUCT AND PROJECT RISKS
Product risk involves the possibility that a work product (e.g., a
specification, component, system, or test) may fail to satisfy the legitimate
needs of its users and/or stakeholders. When the product risks are
associated with specific quality characteristics of a product (e.g., functional
suitability, reliability, performance efficiency, usability, security,
compatibility, maintainability, and portability), product risks are also called
quality risks.
Project risks involve situations that, should they occur, may have a negative effect on a project's ability to achieve its objectives. Examples of project risks include:
•Project issues:
•Delays may occur in delivery, task completion, or satisfaction of exit
criteria or definition of done
•Inaccurate estimates, reallocation of funds to higher priority projects, or general cost cutting across the organization may result in inadequate funding
•Late changes may result in substantial re-work
•Organizational issues:
•Skills, training, and staff may not be sufficient
•Personnel issues may cause conflict and problems
•Users, business staff, or subject matter experts may not be available due
to conflicting business priorities
• Technical issues:
•Requirements may not be defined well enough
•The requirements may not be met, given existing constraints
•The test environment may not be ready on time
•Political issues:
•Testers may not communicate their needs and/or the test results
adequately
•Developers and/or testers may fail to follow up on information found in
testing and reviews (e.g., not improving development and testing practices)
•There may be an improper attitude toward, or expectations of, testing
(e.g., not appreciating the value of finding defects during testing).
•Supplier issues:
•A third party may fail to deliver a necessary product or service, or go
bankrupt
•Contractual issues may cause problems to the project
Project risks may affect both development activities and test activities. In
some cases, project managers are responsible for handling all project risks,
but it is not unusual for test managers to have responsibility for test-related
project risks.
V.2 RISK-BASED TESTING AND PRODUCT QUALITY
Risk-based testing is a strategic approach that utilizes
thorough analysis to identify, assess, and mitigate potential
risks to a product's quality and performance. It involves
evaluating the likelihood and impact of various risks, then
tailoring testing efforts accordingly to minimize the chances of
adverse events occurring.
SEVERAL KEY STEPS:
•Product Risk Analysis: This entails identifying potential risks to the product, assessing their likelihood and
impact on the project's success.
•Test Planning and Execution: The insights gained from risk analysis guide decisions on test techniques, levels,
and types of testing required. This ensures that testing efforts are focused on areas with the highest risk.
•Prioritization of Testing: Critical defects are targeted early on through prioritized testing, aiming to uncover and address them as soon as possible (a prioritization sketch follows this list).
•Risk Mitigation: Besides testing, other activities may be employed to reduce risk, such as providing training to
inexperienced designers or implementing additional security measures.
•Continuous Risk Management: Risk analysis is an ongoing process, with regular re-evaluation to adapt to
changing circumstances and emerging risks.
•Testing not only helps identify existing risks but may also uncover new ones. It provides valuable insights into
which risks should be addressed and helps lower uncertainty surrounding potential risks. For instance, during
testing, a software application may reveal vulnerabilities to data breaches, prompting the team to prioritize
security measures to mitigate this risk.
•In essence, risk-based testing ensures that resources are allocated efficiently, focusing on areas of highest risk
to enhance product quality and minimize the likelihood of failure.
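A minimal sketch of the product risk analysis and prioritization steps, assuming illustrative risks and 1-5 likelihood and impact scores: risk exposure (likelihood x impact) orders the testing.

# Sketch of risk-based prioritization: exposure = likelihood x impact on
# 1-5 scales. Risk items and scores are illustrative.
risks = [
    {"risk": "payment data breach",  "likelihood": 2, "impact": 5},
    {"risk": "slow search results",  "likelihood": 4, "impact": 2},
    {"risk": "report layout glitch", "likelihood": 3, "impact": 1},
]

for r in risks:
    r["exposure"] = r["likelihood"] * r["impact"]

# Test the highest-exposure risks first and most thoroughly.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f"exposure {r['exposure']:>2}: {r['risk']}")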
VI. DEFECT MANAGEMENT
•Defect Management is a critical component of the
software testing process, aimed at identifying and
resolving issues, or defects, found during testing.
Defects are logged and tracked from discovery to
resolution, which could include correction, deferral,
or acceptance as a product limitation.
KEY ELEMENTS OF DEFECT MANAGEMENT INCLUDE:

1. Defect Logging: Defects are logged based on the context of the component or system being tested, the test level, and the software development lifecycle model.
2. Defect Management Process: Organizations establish a defect management process with a defined workflow and rules for classification, agreed upon by all stakeholders involved (a workflow sketch follows this list).
3. Minimizing False Positives: It is important to differentiate between actual defects and false positives to avoid unnecessary reporting and resource wastage.
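A minimal sketch of such a workflow, with states and transitions chosen for illustration rather than taken from any standard: the process only permits transitions that stakeholders have agreed on.

# Sketch of an agreed defect workflow: allowed state transitions only.
# States and transitions are illustrative, not prescribed.
WORKFLOW = {
    "new":      {"assigned", "rejected"},  # "rejected" covers false positives
    "assigned": {"fixed", "deferred"},
    "fixed":    {"closed", "reopened"},    # "closed" after confirmation testing
    "reopened": {"assigned"},
    "deferred": {"assigned"},
    "rejected": set(),
    "closed":   set(),
}

def transition(current: str, target: str) -> str:
    if target not in WORKFLOW[current]:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

state = "new"
for step in ("assigned", "fixed", "closed"):
    state = transition(state, step)
print(state)  # closed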
DEFECT REPORTS
Typical defect reports have the following objectives:
•Provide developers and other parties with information about any adverse
event that occurred, to enable them to identify specific effects, to isolate the
problem with a minimal reproducing test, and to correct the potential defect(s),
as needed or to otherwise resolve the problem
•Provide test managers a means of tracking the quality of the work product
and the impact on the testing (e.g., if a lot of defects are reported, the testers
will have spent a lot of time reporting them instead of running tests, and there
will be more confirmation testing needed)
•Provide ideas for development and test process improvement
A defect report filed during dynamic testing typically includes:
•An identifier
•A title and a short summary of the defect being reported
•Date of the defect report, issuing organization, and author
•Identification of the test item (configuration item being tested) and environment
•The development lifecycle phase(s) in which the defect was observed
•A description of the defect to enable reproduction and resolution, including logs, database
dumps, screenshots, or recordings (if found during test execution)
•Expected and actual results
•Scope or degree of impact (severity) of the defect on the interests of stakeholder(s)
•Urgency/priority to fix
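As a sketch, the fields above map naturally onto a structured record; the field names and the sample report below are illustrative, not a prescribed format.

# Sketch: a defect report record covering the fields listed above.
# Field names and sample values are illustrative.
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    identifier: str
    title: str
    date: str
    author: str
    test_item: str        # configuration item being tested
    environment: str
    phase_observed: str   # lifecycle phase where the defect was observed
    description: str      # enough detail to reproduce and resolve
    expected_result: str
    actual_result: str
    severity: str         # degree of impact on stakeholders
    priority: str         # urgency to fix
    attachments: list = field(default_factory=list)  # logs, screenshots, dumps

report = DefectReport(
    identifier="DEF-1042",
    title="Checkout rejects valid postal codes",
    date="2024-05-03",
    author="QA team",
    test_item="webshop v3.2.0",
    environment="staging, Chrome 124",
    phase_observed="system testing",
    description="Enter postal code 'A1B 2C3' on checkout; form blocks submission.",
    expected_result="Order is accepted",
    actual_result="Validation error: 'invalid postal code'",
    severity="major",
    priority="high",
)
print(report.identifier, report.severity, report.priority)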
GROUP 5
NAMES
25011 Niyigena Angelique
25130 Ruzibiza Kelia
24035 Ishimwe Fleury Belami
24560 Ashimwe Aurore
23663 Iradukunda Divine
24061 Karagire Vincent
24757 Manzi Theogene
24345 Habinka Raissa
23956 Muhire David
25131 Mutesa Kenny Elvis
24282 Ndatimana Eric
23952 Nshizirungu Butera Kennedy
24581 Ayoub Mahamat Abakar
24098 Mugisha Thomas
23265 Ruhinda Benjamin
24768 Ndacyayisenga Herve
24078 Dushimimana Irakiza Olivier
THANK YOU
GROUP 5