
SOFTWARE TESTING TECHNIQUES

TEST 01
TABLE OF CONTENTS

Independent Testing
Test Planning and Estimation
Test Monitoring and Control
Configuration Management
Risks and Testing
Defect Management
I. TEST ORGANIZATION

I.1 INDEPENDENT TESTING
•Independent testing refers to the process of evaluating
software or a system by a third-party entity that is not
directly involved in its development. The purpose of
independent testing is to ensure objectivity, impartiality,
and thoroughness in assessing the quality, functionality,
and performance of the software.

•Independent testing is very useful; however, it can never replace developers' familiarity with their own code, and developers can efficiently find many defects in the code they wrote.

DEGREES OF INDEPENDENCE IN TESTING

•No independent testers; the only form of testing available is developers testing their own code
•Independent developers or testers within the development teams or the project team; this could be developers testing their colleagues' products
•Independent test team or group within the organization, reporting to project management or executive management
•Independent testers from the business organization or user community, or with specializations in specific test types such as usability, security, performance, regulatory/compliance, or portability
•Independent testers external to the organization, either working on-site (in-house) or off-site (outsourcing)
POTENTIAL BENEFITS OF TEST INDEPENDENCE

•Independent testers are likely to recognize different kinds of failures compared to developers because of their different backgrounds, technical perspectives, and biases
•An independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system
•Independent testers of a vendor can report in an upright and objective manner about the system under test without (political) pressure from the company that hired them

POTENTIAL DRAWBACKS OF TEST INDEPENDENCE

•Isolation from the development team may lead to a lack of collaboration, delays in providing feedback to the development team, or an adversarial relationship with the development team
•Developers may lose a sense of responsibility for quality
•Independent testers may be seen as a bottleneck
•Independent testers may lack some important information (e.g., about the test object)
I.2 TASKS OF A TEST MANAGER AND TESTER

TEST MANAGER

•A test manager is a key role in software development organizations, responsible for overseeing the planning, execution, and monitoring of testing activities throughout the software development lifecycle.

Typical test manager tasks may include:
•Write and update the test plan(s)
•Coordinate the test plan(s) with project managers, product owners, and others
•Develop or review a test policy and test strategy for the organization

TESTER

•Depending on the risks related to the product and the project, and the software development lifecycle model selected, different people may take over the role of tester at different test levels.

Typical tester tasks may include:
•Review and contribute to test plans
•Identify and document test conditions, and capture traceability between test cases, test conditions, and the test basis (see the sketch below)
•Design, set up, and verify test environment(s), often coordinating with system administration and network management
•Design and implement test cases and test procedures
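As a rough illustration of the traceability task, the links between requirements, test conditions, and test cases can be kept as a simple mapping. The Python sketch below is a minimal example; all requirement and test IDs are hypothetical, not a prescribed format.

# Minimal sketch of requirements-to-test traceability (all IDs hypothetical).
from collections import defaultdict

# Map each test basis item (e.g., a requirement) to its test conditions,
# and each test condition to the test cases that cover it.
requirement_to_conditions = {
    "REQ-001": ["TC-LOGIN-VALID", "TC-LOGIN-INVALID"],
    "REQ-002": ["TC-PASSWORD-RESET"],
}
condition_to_cases = {
    "TC-LOGIN-VALID": ["CASE-101", "CASE-102"],
    "TC-LOGIN-INVALID": ["CASE-103"],
    "TC-PASSWORD-RESET": [],  # not yet covered by any test case
}

def uncovered_requirements():
    """Return requirements whose conditions lack any covering test case."""
    gaps = defaultdict(list)
    for req, conditions in requirement_to_conditions.items():
        for cond in conditions:
            if not condition_to_cases.get(cond):
                gaps[req].append(cond)
    return dict(gaps)

print(uncovered_requirements())  # {'REQ-002': ['TC-PASSWORD-RESET']}

A mapping like this makes coverage gaps visible as soon as conditions are identified, before any test is executed.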
II. TEST PLANNING AND ESTIMATION

II.1 PURPOSE AND CONTENT OF A TEST PLAN
•A test plan is a comprehensive document that
outlines the approach, scope, resources, and
schedule for testing activities within a software
development project.

•Test planning evolves with project progress, allowing for more detailed inclusion as information becomes available. It is a continuous process throughout the product's lifecycle, with feedback from test activities informing adjustments to planning.

TEST PLANNING ACTIVITIES
Test planning activities may include the following and some of these may be
documented in a test plan:

•Determining the scope, objectives, and risks of testing
•Defining the overall approach of testing
•Budgeting for the test activities
•Integrating and coordinating the test activities into the software lifecycle activities
•Making decisions about what to test, the people and other resources required to
perform the various test activities, and how test activities will be carried out
•Scheduling of test analysis, design, implementation, execution, and evaluation
activities, either on particular dates (e.g., in sequential development) or in the
context of each iteration (e.g., in iterative development)
•Selecting metrics for test monitoring and control
•Determining the level of detail and structure for test documentation (e.g., by
providing templates or example documents)
The content of test plans varies, and can extend beyond the topics identified above.
II.2 TEST STRATEGY AND TEST APPROACH
A test strategy provides a generalized description of
the test process, usually at the product or
organizational level. Common types of test strategies
include:

•Analytical: This type of test strategy is based on an analysis of some factor (e.g., requirement or risk). Risk-based testing is an example of an analytical approach, where tests are designed and prioritized based on the level of risk.

•Model-Based: In this type of test strategy, tests are designed based on some model of some required aspect of the product, such as a function, a business process, an internal structure, or a non-functional characteristic (e.g., reliability). Examples of such models include business process models, state models, and reliability growth models.

•Methodical: This type of test strategy relies on making systematic use of some predefined set of tests or test conditions, such as a taxonomy of common or likely types of failures, a list of important quality characteristics, or company-wide look-and-feel standards for mobile apps or web pages.

•Process-compliant (or standard-compliant): This type of test strategy involves analyzing, designing, and implementing tests based on external rules and standards, such as those specified by industry-specific standards, by process documentation, or by other rules imposed on or by the organization.

•Reactive: In this type of test strategy, testing is reactive to the component or system being tested, and the events occurring during test execution, rather than being pre-planned (as the preceding strategies are). Tests are designed and implemented, and may immediately be executed in response to knowledge gained from prior test results. Exploratory testing is a common technique employed in reactive strategies.

•Regression-averse: This type of test strategy is motivated by a desire to avoid regression of existing capabilities. This test strategy includes reuse of existing testware (especially test cases and test data), extensive automation of regression tests, and standard test suites.

•Directed (or consultative): This type of test strategy is driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts, who may be outside the test team or outside the organization itself.

An appropriate test strategy is often created by combining several of these types of test strategies. For example, risk-based testing (an analytical strategy) can be combined with exploratory testing (a reactive strategy); they complement each other and may achieve more effective testing when used together.
II.3 ENTRY CRITERIA AND EXIT CRITERIA (DEFINITION OF READY AND DEFINITION OF DONE)
•Entry Criteria define the conditions or requirements that must be met
before a task or activity can begin. These criteria ensure that the necessary
prerequisites are in place to start the work effectively and efficiently. Entry
Criteria help prevent the waste of resources and minimize the risk of failure
by ensuring that tasks are undertaken only when the project is adequately
prepared.

•Exit Criteria define the conditions or standards that must be satisfied for a task or activity to be considered complete or "done." Exit Criteria ensure that the desired outcomes have been achieved and that the deliverables meet the required quality standards before moving on to the next phase or handing over the product. They provide a clear indication of when a task can be considered finished and ready for review or deployment.

ENTRY CRITERIA

Typical entry criteria include:
•Availability of testable requirements, user stories, and/or models (e.g., when following a model-based testing strategy)
•Availability of test items that have met the exit criteria for any prior test levels
•Availability of test environment
•Availability of necessary test tools
•Availability of test data and other necessary resources

EXIT CRITERIA

Typical exit criteria include:
•Execution of planned tests
•Achievement of defined coverage (e.g., requirements, user stories, acceptance criteria)
•Limited number of unresolved defects
•Satisfactory levels of reliability, performance efficiency, usability, security, etc.

Even without exit criteria being satisfied, it is also common for test activities to be curtailed due to the budget being expended, the scheduled time being completed, and/or pressure to bring the product to market. It can be acceptable to end testing under such circumstances, if the project stakeholders and business owners have reviewed and accepted the risk to go live without further testing.
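Exit criteria like these are often evaluated mechanically at the end of a test cycle. The following Python sketch is a minimal illustration; the thresholds and metric names are assumptions, not standard values.

# Minimal sketch of an automated exit-criteria check.
# All thresholds and metric names are illustrative assumptions.

def exit_criteria_met(metrics: dict) -> bool:
    """Return True if every exit criterion is satisfied."""
    checks = {
        "all planned tests executed": metrics["tests_run"] >= metrics["tests_planned"],
        "requirement coverage >= 95%": metrics["requirement_coverage"] >= 0.95,
        "no open critical defects": metrics["open_critical_defects"] == 0,
    }
    for name, passed in checks.items():
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
    return all(checks.values())

metrics = {
    "tests_planned": 120,
    "tests_run": 120,
    "requirement_coverage": 0.97,
    "open_critical_defects": 1,
}
print("Exit criteria met:", exit_criteria_met(metrics))  # False: one critical defect open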
II.4 TEST EXECUTION SCHEDULE
A “test execution schedule” is a document or plan
that outlines the timeline and sequence for executing
test cases during the software testing phase of a
project.

KEY ELEMENTS INCLUDED IN A TEST EXECUTION SCHEDULE

Test Case Execution Order: The schedule defines the order in which test cases will be executed. This order may be based on test case priorities, dependencies, or other criteria.

Test Case Assignment: It specifies which test cases will be executed by which testers or test teams. Testers may be assigned specific areas of the application or particular test types.

Start and End Dates: The schedule includes start and end dates for each phase or iteration of test case execution. This helps project stakeholders understand the duration of the testing process.

Execution Cycles: Test execution schedules often consist of multiple cycles, where test cases are executed, issues are addressed, and tests are repeated. The schedule outlines the timing and objectives of each cycle.

Test Data and Environment Considerations: It identifies any specific test data or environmental configurations required for successful test execution. This ensures that the necessary resources are available when needed.

Resource Allocation: The schedule may detail the allocation of resources, including the number of testers, machines, and devices, for executing the test cases.

Testing Milestones: Major testing milestones, such as the completion of a test cycle, integration testing, or user acceptance testing, may be indicated on the schedule.

Reporting and Communication: The schedule specifies when and how test results will be reported to project stakeholders, including management, developers, and other relevant parties.

Regression Testing: If applicable, the schedule outlines when regression testing will occur and which test cases will be included in regression test cycles.

Parallel Testing: In some cases, the schedule may account for parallel testing, where multiple test teams execute test cases concurrently.

Integration with Development: It may indicate points in time when testing aligns with the development process, such as integration testing following development iterations.
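To illustrate how an execution order can be derived from priorities and dependencies, here is a minimal Python sketch; the test case IDs, priority values, and dependency relationships are invented for illustration.

# Minimal sketch: derive a test case execution order from priorities and
# dependencies (all test case IDs and priorities are illustrative assumptions).
import heapq

# Lower number = higher priority; dependencies must run before the dependent case.
priority = {"TC-1": 1, "TC-2": 3, "TC-3": 2, "TC-4": 1}
depends_on = {"TC-2": ["TC-1"], "TC-4": ["TC-1", "TC-3"]}

def execution_order(priority, depends_on):
    """Topological sort over dependencies that breaks ties by priority."""
    indegree = {tc: 0 for tc in priority}
    dependents = {tc: [] for tc in priority}
    for tc, deps in depends_on.items():
        for dep in deps:
            indegree[tc] += 1
            dependents[dep].append(tc)
    ready = [(priority[tc], tc) for tc, d in indegree.items() if d == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, tc = heapq.heappop(ready)
        order.append(tc)
        for nxt in dependents[tc]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                heapq.heappush(ready, (priority[nxt], nxt))
    return order

print(execution_order(priority, depends_on))  # ['TC-1', 'TC-3', 'TC-4', 'TC-2']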
II.5 FACTORS INFLUENCING THE TEST EFFORT
Test effort estimation is predicting how much work is
needed to test a product. This helps ensure the
testing achieves its goals. There are four main
factors that affect how much effort is required for
testing:

•The product itself
•The development process
•The people involved
•The results of the tests

PRODUCT CHARACTERISTICS

•The risks associated with the product
•The quality of the test basis
•The size of the product
•The complexity of the product domain
•The requirements for quality characteristics (e.g., security, reliability)
•The required level of detail for test documentation
•Requirements for legal and regulatory compliance

DEVELOPMENT PROCESS CHARACTERISTICS

•The stability and maturity of the organization
•The development model in use
•The test approach
•The tools used
•The test process
•Time pressure

PEOPLE CHARACTERISTICS

•Skills and experience of the people involved: This includes domain knowledge (experience with similar projects and products) and general testing expertise.
•Team cohesion and leadership: A well-coordinated team with strong leadership can be more efficient.

TEST RESULTS

•The number and severity of defects found: More defects, especially critical ones, may necessitate additional testing.
•The amount of rework required: Fixing defects often requires re-testing, adding to the total effort.

II.6 TEST ESTIMATION TECHNIQUES

There are a number of estimation techniques used to determine the effort required for adequate testing. Two of the most commonly used techniques are:

•The metrics-based technique: estimating the test effort based on metrics of former similar projects, or based on typical values
•The expert-based technique: estimating the test effort based on the experience of the owners of the testing tasks or by experts
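As a rough illustration of the metrics-based technique, test effort can be extrapolated from the effort per test case observed on former similar projects. The figures in this Python sketch are invented for illustration.

# Minimal sketch of metrics-based test effort estimation.
# All historical figures below are invented for illustration.

past_projects = [
    {"test_cases": 400, "effort_person_hours": 1200},
    {"test_cases": 250, "effort_person_hours": 800},
    {"test_cases": 600, "effort_person_hours": 1900},
]

# Average effort per test case observed on former similar projects.
hours_per_case = (sum(p["effort_person_hours"] for p in past_projects)
                  / sum(p["test_cases"] for p in past_projects))

planned_test_cases = 350
estimate = planned_test_cases * hours_per_case
print(f"Estimated test effort: {estimate:.0f} person-hours")  # ~1092

An expert-based estimate would instead be gathered from the people who will own the testing tasks, then compared against a metrics-based figure like this one as a sanity check.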
III. TEST MONITORING AND CONTROL

The purpose of test monitoring is to gather information and provide feedback and visibility about test activities. The information acquired can be used to measure progress or, in agile-style projects, to see whether iteration targets were hit.

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered. Examples of test control actions include:

•Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
•Changing the test schedule due to availability or unavailability of a test environment or other resources
•Re-evaluating whether a test item meets an entry or exit criterion due to rework

III.1 METRICS USED IN TESTING

Metrics can be collected during and at the end of test activities in order to assess:

•Progress against the planned schedule and budget
•Current quality of the test object
•Adequacy of the test approach
•Effectiveness of the test activities with respect to the objectives

Common test metrics include:

•Percentage of planned work done in test case preparation (or percentage of planned test cases implemented)
•Percentage of planned work done in test environment preparation
•Test case execution (e.g., number of test cases run/not run, test cases passed/failed, and/or test conditions passed/failed)
•Defect information (e.g., defect density, defects found and fixed, failure rate, and confirmation test results)
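Several of these metrics reduce to simple ratios. The Python sketch below computes a few of them; all counts are invented, and defect density is shown per KLOC as one common convention.

# Minimal sketch computing common test metrics.
# All counts are invented for illustration.

tests_planned, tests_run, tests_passed = 200, 180, 162
defects_found, size_kloc = 45, 12.5  # defect density per KLOC is one common form

execution_progress = tests_run / tests_planned  # share of planned tests executed
pass_rate = tests_passed / tests_run            # share of executed tests that passed
defect_density = defects_found / size_kloc      # defects per thousand lines of code

print(f"Execution progress: {execution_progress:.0%}")            # 90%
print(f"Pass rate:          {pass_rate:.0%}")                      # 90%
print(f"Defect density:     {defect_density:.1f} defects/KLOC")    # 3.6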
III.2 PURPOSES, CONTENTS, AND AUDIENCES FOR TEST REPORTS

The purpose of test reporting is to summarize and communicate test activity information, both during and at the end of a test activity (e.g., a test level). The test report prepared during a test activity may be referred to as a test progress report, while a test report prepared at the end of a test activity may be referred to as a test summary report.

During test monitoring and control, the test manager regularly issues test progress reports for stakeholders. In addition to content common to test progress reports and test summary reports, typical test progress reports may also include:

•The status of the test activities and progress against the test plan
•Factors impeding progress
•Testing planned for the next reporting period
•The quality of the test object
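A progress report with the content above can be assembled directly from monitoring data. The following Python sketch is a minimal illustration; every field name and value is invented.

# Minimal sketch assembling a test progress report; all content is illustrative.
progress = {
    "period": "2024-05-13 to 2024-05-17",
    "status_vs_plan": "3 days behind schedule on integration tests",
    "impediments": ["staging environment down for 2 days"],
    "planned_next": ["complete integration suite", "start performance tests"],
    "test_object_quality": "2 open critical defects, trend improving",
}

def render_progress_report(p):
    """Render monitoring data as a plain-text progress report."""
    lines = [f"TEST PROGRESS REPORT ({p['period']})",
             f"Status vs plan: {p['status_vs_plan']}",
             "Factors impeding progress:"]
    lines += [f"  - {item}" for item in p["impediments"]]
    lines.append("Planned for next period:")
    lines += [f"  - {task}" for task in p["planned_next"]]
    lines.append(f"Quality of test object: {p['test_object_quality']}")
    return "\n".join(lines)

print(render_progress_report(progress))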
When exit criteria are reached, the test manager issues the test summary report.
This report provides a summary of the testing performed, based on the latest test
progress report and any other relevant information.

Typical test summary reports may include:

•Summary of testing performed
•Information on what occurred during a test period
•Deviations from plan, including deviations in schedule, duration, or effort of test activities
•Status of testing and product quality with respect to the exit criteria or definition of done
•Factors that have blocked or continue to block progress

The contents of a test report will vary depending on the project, the organizational requirements, and the software development lifecycle. More complex projects may require longer, more thorough, and more detailed test reports, whereas agile-style projects may only require a short daily meeting. It is also important that test reports are tailored to their audience: for senior management, for example, a report should reflect cost and the overall failures and successes, while for testers it can give more detail on where things are failing.
IV. CONFIGURATION MANAGEMENT

The purpose of configuration management is to establish and maintain the integrity of the component or system, the testware, and their relationships to one another through the project and product lifecycle.

To properly support testing, configuration management may involve ensuring the following:

1. All test items are uniquely identified, version controlled, tracked for changes, and related to each other

2. All items of testware are uniquely identified, version controlled, tracked for changes, related to each other, and related to versions of the test item(s) so that traceability can be maintained throughout the test process

3. All identified documents and software items are referenced unambiguously in test documentation

During test planning, configuration management procedures and infrastructure (tools) should be identified and implemented.
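In practice, this often means recording a versioned baseline that ties a test item to the exact testware used against it. The Python sketch below is a minimal illustration; all identifiers and version strings are assumptions.

# Minimal sketch of configuration management records for testing.
# Identifiers and version strings are illustrative assumptions.

baseline = {
    "test_item": {"name": "payment-service", "version": "2.4.1"},
    "testware": [
        {"id": "TS-REGRESSION", "version": "1.9.0"},    # test suite
        {"id": "TD-CUSTOMERS",  "version": "1.3.2"},    # test data set
        {"id": "ENV-STAGING",   "version": "2024.05"},  # environment definition
    ],
}

def report_baseline(b):
    """Print the uniquely identified, versioned items for traceability."""
    item = b["test_item"]
    print(f"Test item: {item['name']} v{item['version']}")
    for tw in b["testware"]:
        print(f"  uses {tw['id']} v{tw['version']}")

report_baseline(baseline)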

V. RISKS AND TESTING

A risk is any potential event or circumstance that could have a negative impact on the objectives, goals, or success of a project, initiative, or organization. Risks can arise from various sources, including internal processes, external factors, uncertainties, or changes in the environment. They can affect different aspects of a project, such as cost, schedule, quality, scope, and resources.

The level of risk is determined by the likelihood of the event and the impact (the harm) from that event.

V.1 PRODUCT AND PROJECT RISKS
Product risk involves the possibility that a work product (e.g., a
specification, component, system, or test) may fail to satisfy the legitimate
needs of its users and/or stakeholders. When the product risks are
associated with specific quality characteristics of a product (e.g., functional
suitability, reliability, performance efficiency, usability, security,
compatibility, maintainability, and portability), product risks are also called
quality risks.

Examples of product risks include:
•Software might not perform its intended functions according to the
specification
•Software might not perform its intended functions according to user,
customer, and/or stakeholder needs
•A system architecture may not adequately support some non-functional
requirement(s)
•A particular computation may be performed incorrectly in some
circumstances
Project risk involves situations that, should they occur, may have a negative effect on a project's ability to achieve its objectives. Examples of project risks include:
•Project issues:
•Delays may occur in delivery, task completion, or satisfaction of exit
criteria or definition of done
•Inaccurate estimates, reallocation of funds to higher priority projects, or general cost cutting across the organization may result in inadequate funding
•Late changes may result in substantial re-work

•Organizational issues:
•Skills, training, and staff may not be sufficient
•Personnel issues may cause conflict and problems
•Users, business staff, or subject matter experts may not be available due
to conflicting business priorities

• Technical issues:
•Requirements may not be defined well enough
•The requirements may not be met, given existing constraints
•The test environment may not be ready on time

•Political issues:
•Testers may not communicate their needs and/or the test results
adequately
•Developers and/or testers may fail to follow up on information found in
testing and reviews (e.g., not improving development and testing practices)
•There may be an improper attitude toward, or expectations of, testing
(e.g., not appreciating the value of finding defects during testing).

•Supplier issues:
•A third party may fail to deliver a necessary product or service, or go
bankrupt
•Contractual issues may cause problems to the project
Project risks may affect both development activities and test activities. In
some cases, project managers are responsible for handling all project risks,
but it is not unusual for test managers to have responsibility for test-related
project risks.

V.2 RISK-BASED TESTING AND PRODUCT QUALITY
Risk-based testing is a strategic approach that utilizes
thorough analysis to identify, assess, and mitigate potential
risks to a product's quality and performance. It involves
evaluating the likelihood and impact of various risks, then
tailoring testing efforts accordingly to minimize the chances of
adverse events occurring.

SEVERAL KEY STEPS:

•Product Risk Analysis: This entails identifying potential risks to the product, assessing their likelihood and
impact on the project's success.
•Test Planning and Execution: The insights gained from risk analysis guide decisions on test techniques, levels,
and types of testing required. This ensures that testing efforts are focused on areas with the highest risk.
•Prioritization of Testing: Critical defects are targeted early on through prioritized testing, aiming to uncover and
address them as soon as possible.
•Risk Mitigation: Besides testing, other activities may be employed to reduce risk, such as providing training to
inexperienced designers or implementing additional security measures.
•Continuous Risk Management: Risk analysis is an ongoing process, with regular re-evaluation to adapt to
changing circumstances and emerging risks.

•Testing not only helps identify existing risks but may also uncover new ones. It provides valuable insights into
which risks should be addressed and helps lower uncertainty surrounding potential risks. For instance, during
testing, a software application may reveal vulnerabilities to data breaches, prompting the team to prioritize
security measures to mitigate this risk.
•In essence, risk-based testing ensures that resources are allocated efficiently, focusing on areas of highest risk
to enhance product quality and minimize the likelihood of failure.
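As a simple illustration of product risk analysis feeding test prioritization, the level of risk can be scored as likelihood multiplied by impact and used to order the test work. The risk items and the 1-5 ratings in this Python sketch are invented.

# Minimal sketch of risk-based test prioritization.
# Risk items and likelihood/impact ratings (1-5) are invented for illustration.

risks = [
    {"area": "payment processing",  "likelihood": 4, "impact": 5},
    {"area": "report formatting",   "likelihood": 3, "impact": 2},
    {"area": "user authentication", "likelihood": 2, "impact": 5},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]  # level of risk = likelihood x impact

# Test the highest-risk areas first, and most thoroughly.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{r['area']:20s} risk score {r['score']}")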

VI. DEFECT MANAGEMENT
•Defect Management is a critical component of the
software testing process, aimed at identifying and
resolving issues, or defects, found during testing.
Defects are logged and tracked from discovery to
resolution, which could include correction, deferral,
or acceptance as a product limitation.

KEY ELEMENTS OF DEFECT MANAGEMENT INCLUDE:

1. Defect Logging: Defects are logged based on the context of the component or system being tested, the test level, and the software development lifecycle model.

2. Defect Investigation: Once logged, defects are investigated to understand the root cause and impact on the system.

3. Defect Resolution: Defects are tracked to resolution, which may involve correcting the defect, deferring it to a future release, or accepting it as a product limitation.

4. Defect Management Process: Organizations establish a defect management process with a defined workflow and rules for classification, agreed upon by all stakeholders involved.

5. Minimizing False Positives: It is important to differentiate between actual defects and false positives to avoid unnecessary reporting and resource wastage.

6. Defect Reporting: Defect reports provide detailed information about the defect, including a summary, description, expected and actual results, and impact severity.

7. Defect Impact: Defect reports help track the quality of the work product and identify areas for process improvement.

8. Defect Management Tools: These tools automate the assignment of identifiers, state updates, and other aspects of the defect management process.
DEFECT REPORTS
Typical defect reports have the following objectives:
•Provide developers and other parties with information about any adverse
event that occurred, to enable them to identify specific effects, to isolate the
problem with a minimal reproducing test, and to correct the potential defect(s),
as needed or to otherwise resolve the problem
•Provide test managers with a means of tracking the quality of the work product and the impact on the testing (e.g., if a lot of defects are reported, the testers will have spent a lot of time reporting them instead of running tests, and there will be more confirmation testing needed)
•Provide ideas for development and test process improvement

A defect report filed during dynamic testing typically includes:

•An identifier
•A title and a short summary of the defect being reported
•Date of the defect report, issuing organization, and author
•Identification of the test item (configuration item being tested) and environment
•The development lifecycle phase(s) in which the defect was observed
•A description of the defect to enable reproduction and resolution, including logs, database
dumps, screenshots, or recordings (if found during test execution)
•Expected and actual results
•Scope or degree of impact (severity) of the defect on the interests of stakeholder(s)
•Urgency/priority to fix
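A defect report with these fields maps naturally onto a simple record type. The Python sketch below is illustrative only; the field names are assumptions, not the schema of any particular defect management tool.

# Minimal sketch of a defect report record; field names are illustrative,
# not the schema of any particular defect management tool.
from dataclasses import dataclass
from datetime import date

@dataclass
class DefectReport:
    identifier: str
    title: str
    summary: str
    report_date: date
    author: str
    test_item: str        # configuration item being tested
    environment: str
    lifecycle_phase: str  # phase in which the defect was observed
    description: str      # steps/logs enabling reproduction and resolution
    expected_result: str
    actual_result: str
    severity: str         # scope/degree of impact on stakeholders
    priority: str         # urgency to fix

report = DefectReport(
    identifier="DEF-0042",
    title="Login fails for valid credentials",
    summary="Valid users are rejected after password rotation",
    report_date=date(2024, 5, 17),
    author="tester@example.org",
    test_item="auth-service v1.8.2",
    environment="staging",
    lifecycle_phase="system testing",
    description="1. Rotate password. 2. Log in with new password. See attached log.",
    expected_result="User is logged in",
    actual_result="HTTP 401 returned",
    severity="high",
    priority="urgent",
)
print(report.identifier, "-", report.title)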

GROUP 5
NAMES
25011 Niyigena Angelique
25130 Ruzibiza Kelia
24035 Ishimwe Fleury Belami
24560 Ashimwe Aurore
23663 Iradukunda Divine
24061 Karagire Vincent
24757 Manzi Theogene
24345 Habinka Raissa
23956 Muhire David
25131 Mutesa Kenny Elvis
24282 Ndatimana Eric
23952 Nshizirungu Butera Kennedy
24581 Ayoub Mahamat Abakar
24098 Mugisha Thomas
23265 Ruhinda Benjamin
24768 Ndacyayisenga Herve
24078 Dushimimana Irakiza Olivier
THANK YOU

GROUP 5