ISTQB CTFL 4.0 Notes - Umama Sumlin Tasnuva

The document provides an in-depth overview of software testing, emphasizing its purpose in ensuring quality and reducing operational risks. It outlines various testing types, objectives, and principles, while distinguishing between testing and debugging. Additionally, it discusses the impact of different software development lifecycles on testing practices and highlights the importance of collaboration and independence in testing roles.

CHAPTER 1

1.1 What is Testing?

1 Purpose of Software Testing:

• Ensures software quality and reduces risks of failure in operation.


• Prevents problems like financial loss, reputational damage, or even physical harm.

2 What Software Testing Involves:

• Not just running tests but also planning, analyzing, and verifying test objects (software work products).
• Includes verification (meeting requirements) and validation (fulfilling users' needs).

3 Types of Testing:

• Dynamic Testing: Running the software to find defects.


• Static Testing: No software execution; includes reviews and static analysis.

4 Misconceptions About Testing:

• It's not just running tests; testing involves both technical and strategic elements.
• It’s not solely about finding defects but also ensuring the software serves its intended purpose.

1.1.1 Test Objectives

Objectives depend on the test level, risks, SDLC model, and business needs (e.g., competition, corporate
goals, time to market).

The typical test objectives are:

• Evaluating work products such as requirements, user stories, designs, and code

• Causing failures and finding defects

• Ensuring required coverage of a test object

• Reducing the risk level of inadequate software quality

• Verifying whether specified requirements have been fulfilled

• Verifying that a test object complies with contractual, legal, and regulatory requirements

• Providing information to stakeholders to allow them to make informed decisions

• Building confidence in the quality of the test object

• Validating whether the test object is complete and works as expected by the stakeholders
1.1.2 Testing and Debugging

Key Difference:

• Testing: Finds defects by triggering failures (dynamic testing) or identifying defects directly (static
testing).

• Debugging: Diagnoses and fixes the defects causing failures.

Dynamic Testing & Debugging Process:

• Triggering Failures: Testing causes failures due to defects.

• Debugging involves:

o Reproducing the failure.

o Diagnosing the defect.

o Fixing the defect.

Post-Debugging Testing:

• Confirmation Testing: Ensures the fix resolved the issue (often done by the tester who found it).

• Regression Testing: Verifies the fix doesn’t cause new issues elsewhere.

Static Testing:

• Directly identifies defects without triggering failures.

• Debugging in this case simply removes the defect—no need for failure reproduction or diagnosis.

1.2 Why is Testing Necessary?

Testing, as a form of quality control, helps in achieving the agreed-upon test objectives within the set
scope, time, quality, and budget constraints.

1.2.1 Testing’s Contribution to Success

• Purpose: Detects defects cost-effectively; indirectly boosts software quality by enabling defect fixes.

• Evaluation: Assesses test object quality throughout the SDLC to support decisions like release approvals.

• User Representation: Simulates user needs and requirements, avoiding the cost of direct user involvement.

• Compliance: Often mandated by legal, contractual, or regulatory standards.


1.2.2 Testing and Quality Assurance (QA)

Testing vs. QA:

• Testing:
o Product-oriented and corrective, focuses on achieving quality through defect detection.
o A form of quality control, including activities like simulation, prototyping, and formal methods.
• QA (Quality Assurance):
o Process-oriented and preventive, aims at ensuring quality by improving and adhering to
processes.
o Applies to both development and testing, emphasizing that good processes lead to good products.
o Key Difference: Testing (as quality control) detects defects so they can be fixed; QA improves processes to prevent defects.

1.2.3 Errors, Defects, Failures, and Root Causes

Errors: Human mistakes caused by stress, complexity, fatigue, or inadequate training.

Defects:

• Found in work products like code, documentation, or test scripts.

• Undetected defects propagate and lead to defective outputs later in the lifecycle.

Failures:

• Occur when defective code is executed, causing incorrect or unintended behavior.

• Can also result from environmental factors (e.g., radiation or electromagnetic interference).

Root Cause:

• The underlying reason behind a problem (e.g., process flaws causing errors).

• Addressing root causes helps prevent similar defects or failures.

1.3 Testing Principles

Testing shows the presence, not the absence of defects


Testing can only prove that defects exist; it can’t guarantee that all defects are gone. Even if tests pass,
bugs might still exist.

Exhaustive testing is impossible


You can’t test every possible input, condition, or path—there are too many combinations. Focus on the
most important scenarios. [EP and BVA support this principle; see the sketch below.]
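As an illustration (a minimal sketch, not from the syllabus), assume a hypothetical eligibility check that accepts ages 18 to 65 inclusive. EP reduces the infinite input space to three partitions (below 18, 18 to 65, above 65), and BVA picks the values on either side of each boundary. In Java with JUnit 5:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical component under test: valid ages are 18..65 inclusive.
    class AgeValidator {
        static boolean isEligible(int age) { return age >= 18 && age <= 65; }
    }

    class AgeValidatorTest {
        // BVA (2-value): test each boundary and its nearest neighbor.
        @Test void rejectsJustBelowLowerBoundary() { assertFalse(AgeValidator.isEligible(17)); }
        @Test void acceptsLowerBoundary()          { assertTrue(AgeValidator.isEligible(18)); }
        @Test void acceptsUpperBoundary()          { assertTrue(AgeValidator.isEligible(65)); }
        @Test void rejectsJustAboveUpperBoundary() { assertFalse(AgeValidator.isEligible(66)); }
    }

Four tests cover all three partitions and both boundaries, instead of attempting all possible ages.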

Early testing saves time and money


Finding and fixing defects early (e.g., during requirement analysis) prevents costly changes later in the
development process.
Defects cluster together
Most defects tend to appear in a few specific areas of the software. These "hot spots" need extra
attention during testing.

Tests wear out


Running the same tests repeatedly will eventually stop finding new defects. Regularly update and
improve your tests.

Testing is context-dependent
Testing approaches vary based on the type of software (e.g., medical vs. gaming). The context determines
the strategy.

Absence-of-defects fallacy
Even if the software has no defects, it may still fail to meet user expectations or business needs. Quality
is more than just defect-free code.

1.4 Test Activities, Testware and Test Roles


1.4.1 Test Activities and Tasks

Test Planning
Define test objectives and decide the best approach to meet these goals within constraints (time,
budget, etc.).

• Focuses on defining the test strategy, scope, schedule, and objectives.

• Entry and exit criteria are set.

• High-level decisions about the types of testing (e.g., functional, non-functional) and resources
needed are made, but specific test techniques are not determined.

Test Monitoring and Control

• Monitoring: Check if testing is on track by comparing actual progress to the plan.

• Control: Take corrective actions to meet objectives if deviations occur.

Test Analysis (What to test)

• Review requirements and identify what features to test.

• Define and prioritize test conditions based on risks.

• Assess testability and use techniques to ensure measurable coverage.

• At this stage, testers identify what needs to be tested but do not determine specific methods for
designing test cases.
Test Design (How to test)

• In this phase, specific test design techniques, such as EP and BVA, are selected to design detailed
test cases for the identified test conditions.

• Specify test inputs, test data requirements, and expected results.

• Plan the test environment, tools, and infrastructure.

Test Implementation

• Create and organize testware (test data, scripts, etc.).

• Arrange tests into test procedures, which are often assembled into test suites.

• Create manual and automated test scripts

• Prioritize test procedures within a test execution schedule (i.e., decide which to run first).

• Build and ensure the test environment is set up correctly.

Test Execution

• Run tests per the test execution schedule, manually or automatically.

• Log test results and compare actual vs. expected outcomes.

• Analyze and report defects based on observed failures.

Test Completion

• Raise change requests or product backlog items for unresolved defects.

• Archive useful testware or hand it over to the relevant teams, and shut down the test environment.

• Analyze test activities to identify lessons learned and improvements for the future.

• Share a test completion report with stakeholders.

1.4.2 Test Process in Context

Testing is integrated into development processes and shaped by various factors:

• Stakeholders: Their needs, expectations, and cooperation level matter.

• Team members: Skills, availability, and training influence the process.

• Business domain: Criticality, risks, and market/legal needs impact testing.

• Technical factors: Software type, architecture, and tools are essential.

• Constraints: Scope, budget, time, and resources define the approach.

• Organizational factors: Policies and structure affect testing.


• SDLC: Methods and practices guide test execution.

• Tools: Availability and usability of tools shape automation and efficiency.


Impact: These factors affect strategy, coverage, automation, and reporting.

1.4.3 Testware

Testware includes all test-related work products created during testing. Proper management ensures
consistency. Examples:

1. Test Planning: Test plans, schedules, risk registers, entry/exit criteria.

2. Test Monitoring & Control: Progress reports and control directives.

3. Test Analysis: Test conditions (e.g., acceptance criteria) and defect reports.

4. Test Design: Test cases, test charters, coverage items, test data requirements, and environment
requirements.

5. Test Implementation: Test suites, test procedures, test scripts, and test data, test execution
schedule.

6. Test Execution: Test logs and defect reports.

7. Test Completion: test completion reports, lessons learned, change requests or product backlog
items, and archived testware.

1.4.4 Traceability between the Test Basis and Testware

Purpose: Ensure connection between test basis (e.g., requirements) and test outputs (e.g., cases,
results).

• Benefits:

o Verifies requirements coverage with test cases.

o Evaluates residual risks using test results.

o Facilitates impact analysis, audits, and progress evaluation.

o Helps communicate testing status and quality to stakeholders.

• Value: Supports product quality and project goals.


1.4.5 Roles in Testing

Two Principal Roles:

1. Test Management Role:

o Oversees test planning, test monitoring and control, and test completion

o Adapts based on context (e.g., Agile teams may share tasks).

2. Testing Role:

o Handles test analysis, test design, test implementation and test execution.

Key Points:

• Roles may overlap; one person can handle both tasks.

• Context (project, product, team) shapes role responsibilities.

1.5 Essential Skills and Good Practices in Testing


1.5.1 Generic Skills Required for Testing

Key skills that enhance testing effectiveness:

• Testing knowledge: Use test techniques to improve quality.

• Attention to detail: Be thorough, careful, and curious to spot hard-to-find defects.

• Communication skills: Collaborate effectively, convey findings clearly, and handle criticism
constructively.

• Analytical and critical thinking: Think logically and creatively to address challenges.

• Technical expertise: Use tools to improve efficiency.

• Domain knowledge: Understand business needs and end-user perspectives.

Challenges: Testers may face blame or criticism for reporting defects, as it may be seen as negative
feedback. Effective communication helps reduce resistance and ensures testing is viewed as a
constructive process.

1.5.2 Whole Team Approach

• Concept: The entire team shares responsibility for quality, fostering collaboration and synergy.
• Key Practices:
o Anyone with the required skills can perform testing tasks.
o Co-located (or virtual) teams improve communication and teamwork.
o Testers work closely with developers, business representatives, and stakeholders.
o Testers help create acceptance tests and share testing knowledge.
Benefits: Enhanced collaboration, shared accountability, and improved quality.
Limitations: Not ideal in contexts requiring high independence, such as safety-critical systems.

1.5.3 Independence of Testing

• Definition: Testing by individuals not directly involved in creating the product to reduce bias.

• Levels of Independence:

o Low: Testing by the author of the work.

o Moderate: Testing by peers within the team.

o High: Testing by a separate team within the organization.

o Very High: Testing by external testers outside the organization.

Advantages:

• Independent testers bring fresh perspectives and can identify different types of defects.

• They can challenge assumptions and improve the product.

Drawbacks:

• Risk of poor communication or isolation between testers and developers.

• Developers may feel less accountable for quality.

• Independent testers may be seen as slowing down the process.

Best Practice: Use multiple levels of independence to balance collaboration and impartiality.
CHAPTER 2

2.1 Testing in the Context of a Software Development Lifecycle

2.1.1 Explain the impact of the chosen software development lifecycle on testing

The Software Development Lifecycle (SDLC) influences several aspects of testing:

• Scope & timing: When and what to test (e.g., test levels, test types).
• Documentation: Detailed in sequential models; lightweight in iterative/agile models.
• Techniques: Static/dynamic tests in iterative models; automated tests in agile.
• Roles: More collaborative in agile; isolated in sequential models.

Sequential models: Testers focus on reviews, analysis, and design early, with dynamic testing occurring in later
phases.
Iterative/agile models: Testing happens in every cycle, emphasizing fast feedback and regression testing. Agile
relies on lightweight documentation and experience-based techniques.

Testing in the sequential, iterative, and incremental SDLC models differs based on when and how it is
conducted:

1. Sequential SDLC: Testing is a dedicated phase after development is complete for the entire system.
It focuses on ensuring the final product meets requirements after all coding is done (e.g., in the Waterfall
model).
2. Iterative SDLC: Testing is conducted at the end of each iteration, focusing on refining and
improving existing functionality through repeated cycles of analysis, design, coding, and testing.
3. Incremental SDLC: Testing is also done at the end of each iteration, but the focus is on verifying
and integrating new increments with previously developed ones to build the system progressively.

Key Difference:

While testing occurs per iteration in both iterative and incremental models, iterative testing prioritizes
refinement of existing components, and incremental testing prioritizes verification of new functionality and
its integration into the growing system.

In sequential SDLC, testing is performed only after the full system is developed, making it distinct from the
iterative and incremental approaches.


2.1.2 Recall good testing practices that apply to all software development lifecycles

Key practices for effective testing, regardless of the SDLC:

• Link every development activity to a corresponding test activity for quality control.

• Use different test levels with specific objectives to avoid redundancy.

• Start test analysis and design early to detect defects sooner (principle of early testing).

• Testers should review work products (e.g., requirements) early to support a shift-left approach.
2.1.3 Recall the examples of test-first approaches to development

Development approaches that use testing to guide coding:

• Test-Driven Development (TDD): Write test cases first, then code to satisfy those tests (see the sketch below).

• Acceptance Test-Driven Development (ATDD): Derive tests from acceptance criteria.

• Behavior-Driven Development (BDD): Use simple, stakeholder-friendly test descriptions (e.g., Given/When/Then). These descriptions should then be translated automatically into executable tests.

Advantages: Encourages early testing, ensures high-quality code, and supports automation.
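A minimal TDD sketch (hypothetical names, Java with JUnit 5): the test below is written first, fails because Calculator does not exist yet, and then drives the simplest implementation that makes it pass.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Step 1 (red): write the failing test before any production code exists.
    class CalculatorTest {
        @Test
        void addsTwoNumbers() {
            assertEquals(5, new Calculator().add(2, 3));
        }
    }

    // Step 2 (green): write just enough code to make the test pass.
    // Step 3 (refactor): improve the code while the test keeps passing.
    class Calculator {
        int add(int a, int b) { return a + b; }
    }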

2.1.4 Summarize how DevOps might have an impact on testing

DevOps integrates development, testing, and operations for faster, higher-quality releases. Key testing
aspects:

• Benefits:

o Fast feedback on code quality.

o CI promotes shift-left in testing by encouraging developers to submit high-quality code
accompanied by component tests and static analysis.

o Continuous Integration (CI) & Continuous Delivery (CD) enable automated testing.

o Minimized regression risk due to extensive automation.

o Better non-functional testing visibility (e.g., reliability, performance).

o Automation through a pipeline reduces the need for repetitive manual testing.

• Challenges:

o Requires well-defined pipelines and tools (e.g., CI/CD).

o Test automation setup is resource-intensive.

Manual testing still plays a role, especially from the user perspective.

2.1.5 Explain shift left

Testing earlier in the SDLC to find defects sooner and reduce costs later. Practices:

• Review specifications early to identify ambiguities or defects.

• Write test cases before coding and have the code run in a test harness during code
implementation.

• Use CI/CD for automated testing and feedback loops.


• Complete static analysis of source code before dynamic testing.

• Perform non-functional tests at the component level if feasible.

Considerations: Shift-left requires stakeholder buy-in and additional training upfront.

2.1.6 Explain how retrospectives can be used as a mechanism for process improvement

Retrospectives evaluate project successes, challenges, and improvements. They are usually conducted at
the end of a project or iteration, or whenever needed. Participants can include testers, developers, product
owners, architects, etc. The results of a retrospective should be recorded and are normally part of the test
completion report. Benefits for testing:

• Better test effectiveness and process efficiency.

• Improved test documentation and testware quality.

• Enhanced collaboration between developers and testers.

• Opportunity to address issues in requirements and processes.

Goal: Promote team learning and continuous improvement.

2.2 Test Levels and Test Types

Test levels organize test activities by software phase, focusing on the system at different stages (e.g.,
components, entire systems).

Test types target specific quality attributes (e.g., functionality, performance) and apply across all test
levels.

2.2.1 Distinguish the different test levels

Component Testing

• Focus: Individual units or components.

• Tools: Unit test frameworks (e.g., JUnit).

• Who: Typically done by developers.

• Support: May require test harnesses (see the sketch below).
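A minimal sketch (hypothetical names) of a component test: the component depends on an external payment gateway, so a stub stands in for it, acting as a simple test harness that lets the component be tested in isolation.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Dependency of the component under test.
    interface PaymentGateway { boolean charge(double amount); }

    // Component under test.
    class OrderService {
        private final PaymentGateway gateway;
        OrderService(PaymentGateway gateway) { this.gateway = gateway; }
        boolean placeOrder(double amount) { return amount > 0 && gateway.charge(amount); }
    }

    class OrderServiceTest {
        @Test
        void placesOrderWhenPaymentSucceeds() {
            // Stub replaces the real gateway so the component runs in isolation.
            PaymentGateway alwaysApproves = amount -> true;
            assertTrue(new OrderService(alwaysApproves).placeOrder(10.0));
        }
    }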

Component Integration Testing

• Focus: Interaction between integrated components.

• Strategies:

o Bottom-up: Test lower-level components first.


o Top-down: Test higher-level components first.

o Big-bang: Test all components together.

System Testing

• Focus: Complete system behavior and capabilities.

• Scope: Functional and non-functional (e.g., usability).

• Environment: Simulated or representative.

• Who: Often done by independent testers.

System Integration Testing

• Focus: Interactions with external systems or services.

• Requirement: Operational-like test environments.

Acceptance Testing

• Focus: Validating readiness for deployment.

• Types:

o User Acceptance Testing (UAT): Business needs validation.

o Operational Testing: Operational support validation.

o Contractual Testing and Regulatory testing.

o Alpha testing (done by developers/testers) and beta testing (done by users): Early user feedback.

2.2.2 Distinguish the different test types

Functional Testing

• Focus: "What" the system does.

• Goals: Functional completeness, correctness, and appropriateness.

Non-Functional Testing

• Focus: "How well" the system performs.

• Attributes:

o Performance (e.g., speed, responsiveness).

o Security (e.g., vulnerabilities).

o Usability (e.g., user experience).

o Reliability, Maintainability, Portability, etc.


Black-Box Testing

• Basis: External specifications.

• Goal: Validate system behavior against requirements.

White-Box Testing

• Basis: The system’s implementation or internal structure (e.g., code, architecture, workflows, and
data flows).

• Goal: Ensure structural coverage.

2.2.3 Distinguish confirmation testing from regression testing

Confirmation Testing

• Confirms a defect fix.

• Scope:

o Retest previously failing tests.

o Add new tests for changes.

Regression Testing

• Ensures no new defects from changes.

• Adverse consequences could affect the same component where the change was made, other
components in the same system, or even other connected systems.

• This testing can be related to the environment as well, not just the test object.

• Impact analysis determines test scope for regression as it shows where the impact can be higher.

• Ideal for automation due to repetition.

2.3 Maintenance Testing

• Triggered by:

o Modifications: Enhancements, bug fixes.

o Upgrades: Platform migrations or environment changes.

o Retirement: Data archiving or system decommissioning.

• Activities:

o Validate changes.
o Confirm no regressions.

o Test new and existing functionality.

• Scope depends on:

o The risk of the change, the size of the existing system, and the size of the change.

*Test Harness - a test harness is a collection of software and test data used by developers to unit test
software modules during development.
CHAPTER 3

• Static analysis can detect unreachable or dead code because it can analyze control flow to see if
certain code sections are never executed, such as code after a return statement in a function.
• Static analysis cannot determine if the value stored in a variable is correct because it doesn’t
execute the code and lacks context for whether a variable’s value is logically appropriate.
• Static analysis can detect dead code and undeclared variables. It cannot examine encrypted code,
and it cannot detect runtime issues such as memory leaks, which require dynamic analysis. (See the sketch below.)
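A minimal illustrative sketch (hypothetical Java) of what a static analyzer can flag without ever running the code:

    class StaticAnalysisExample {
        static int max(int a, int b) {
            int unused = 0;    // unused variable: flagged by static analysis
            if (a >= b) {
                return a;
            } else if (a < b) {
                return b;
            }
            return -1;         // dead code: the two branches above cover every case,
                               // so control flow analysis shows this line never executes
        }
    }

Whether max() returns the logically correct value can only be checked by executing it, i.e., by dynamic testing.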

3.1 Static Testing Basics

Unlike dynamic testing, static testing doesn’t execute the software.

It evaluates work products like code, specifications, or architecture using:

• Manual Reviews: Peer reviews, walkthroughs, or inspections.

• Tools: Static analysis tools for automated checks.

Goals:

• Improve quality.

• Detect defects early (e.g., inconsistencies, omissions).

• Assess non-functional qualities (e.g., readability, completeness, maintainability, security).

3.1.1 Recognize types of work products that can be examined by static testing

• Suitable work products include (any product that can be read and has a structure to check
against):

o Requirements, designs, source code, test cases, test plans, etc.

• Prerequisites: Work products must have a structure (e.g., formal syntax).

• Exclusions:

o Items difficult to interpret manually. (encrypted code)

o Proprietary third-party executables.

3.1.2 Explain the value of static testing

Early Defect Detection:

• Catch issues early in the SDLC, reducing cost and effort later.
• Detect defects missed by dynamic testing (e.g., unreachable code, design patterns not
implemented as desired, defects in non-executable work products).

Improves Work Product Quality:

• Ensures requirements match user needs.

• Enhances understanding among stakeholders, fostering better collaboration.

Cost Savings:

• Reviews might seem expensive but save on fixing downstream defects.

Efficiency:

• Finds defects like coding errors (e.g., unused variables, code complexity) quickly.

3.1.3 Compare and contrast static testing and dynamic testing

Comparison:

Static Testing                                        Dynamic Testing
----------------------------------------------------  ----------------------------------------------------
No execution needed.                                  Requires code execution.
Finds defects directly (e.g., unused variables).      Causes failures that are traced back to defects.
Covers non-executable items (e.g., documents).        Covers executable systems only.
Evaluates non-functional aspects (e.g.,               Focuses on execution-dependent aspects (e.g.,
maintainability).                                     performance).

Examples of Defects Caught via Static Testing (cheaper and easier to find this way):

• Requirements: Inconsistencies, omissions, ambiguities.

• Design: Poor modularization, inefficient database structures.

• Code: undeclared variables, variables with undefined values, unreachable or duplicate code,
deviations from coding standards.

• Interfaces: Parameter mismatches (type or order of parameters).

• Security: Vulnerabilities like buffer overflows.

• Gaps or inaccuracies in test basis coverage: missing tests for an acceptance criterion
3.2 Feedback and Review Process

3.2.1 Identify the benefits of early and frequent stakeholder feedback

Why It Matters:

• Early feedback prevents costly rework and project failure.

• Ensures the product aligns with stakeholders’ current vision.

Key Advantages:

• Reduces misunderstandings about requirements.

• Focuses development efforts on features with the most stakeholder value.

3.2.2 Summarize the activities of the review process

The ISO/IEC 20246 standard outlines a generic review framework:

1. Planning: Define review scope, objectives, timeline, and criteria (e.g., standards to follow).

2. Initiation: Prepare participants, ensure access to documents, and clarify roles.

3. Individual Review: Reviewers independently identify issues using methods like checklist-based
or scenario-based reviewing.

4. Communication & Analysis: Discuss findings, assign ownership, and decide follow-ups in a
meeting.

5. Fixing & Reporting: Log defects (for every defect, a defect report is created to track corrective
actions); once the exit criteria are met, the work product can be accepted and the review outcomes are reported.

3.2.3 Recall which responsibilities are assigned to the principal roles when performing reviews

Manager: Allocates resources (staff and time) and decides what to review.

Author: Creates and fixes the work product.

Moderator: Facilitates effective meetings and ensures a safe, respectful review environment.

Scribe: Records findings and decisions.

Reviewer: Examines the work product for defects.

Review Leader: takes overall responsibility for the review such as deciding who will be involved, and
organizing when and where the review will take place.
3.2.4 Compare and contrast the different review types

Different review types serve specific objectives based on project needs:

1. Informal Reviews:

o Flexible, quick, and undocumented.

o Focus: Identify anomalies.

2. Walkthroughs:

o Led by the author; reviewers may or may not prepare individually.

o Focus: Gain consensus, generate ideas, evaluate quality, educate reviewers, motivate the
author to improve, and detect anomalies.

3. Technical Reviews:

o Led by a moderator; involves technical expert reviewers.

o Focus: gain consensus and make decisions regarding a technical problem, but also to
detect anomalies, evaluate quality and build confidence in the work product, generate new
ideas, and motivate and enable authors to improve.

4. Inspections:

o Most formal; involves metrics collection. [author cannot act as the review leader or scribe]

o Focus: Detect the maximum number of anomalies, ensure high quality, and improve
processes.

3.2.5 Recall the factors that contribute to a successful review

1. Set clear objectives and exit criteria.

2. Choose the right review type for the context.

3. Divide work into smaller chunks to avoid reviewer fatigue.

4. Ensure stakeholder support and management buy-in.

5. Allocate adequate time for preparation and training.

6. Provide constructive feedback to foster process improvement.

7. Make reviews part of the organization’s culture and learning practices.


CHAPTER 4

4.1 Test Techniques Overview


4.1.1 Distinguish black-box test techniques, white-box test techniques and experience-based test
techniques

• White-box testing cannot identify gaps in requirements, as it focuses only on the structure of the
test object, not on the requirements specification.
• In branch testing, the coverage items are branches, so coverage is the proportion of branches
exercised by the tests (see the sketch below).
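A minimal branch-testing sketch (hypothetical Java): one decision produces two branches, so there are two coverage items.

    class Discount {
        static double apply(double price, boolean isMember) {
            if (isMember) {          // branch 1: decision outcome true
                return price * 0.9;
            }
            return price;            // branch 2: decision outcome false
        }
    }

Two tests, apply(100, true) and apply(100, false), exercise both branches, giving 2/2 = 100% branch coverage.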

Purpose:

• Help testers determine what to test and how to test.

• Create a small yet effective set of test cases systematically.

Key Classifications:

1. Black-Box Techniques:

o Based on external behavior; do not consider internal code structure.

o Test cases remain useful even if internal implementation changes.

2. White-Box Techniques:

o Focus on internal structure and processing.

o Require design or implementation to be ready before test case creation.

3. Experience-Based Techniques:

o Rely on the tester's expertise and intuition.

o Effective for uncovering defects missed by formal techniques.


4.2 Black-box testing (Specification Based)
4.3 White-box testing (Structure Based)
4.4 Experience Based Test Techniques

4.4.1. Error Guessing

• What It Is:

o Predict errors based on past knowledge, typical developer mistakes, and known software
failures.

• Common Defect Areas:

o Input issues (e.g., invalid data), output errors, logic gaps, and interface mismatches.

• Fault Attacks:

o Use predefined error lists to design tests targeting specific defects.

o Fault attacks use a checklist of known defects and failures, whereas checklist-based
testing uses a checklist of test conditions.

4.4.2. Exploratory Testing

• What It Is:

o Design, execute, and evaluate tests simultaneously while exploring the software.

• Session-Based Approach:

o Work within a set time, guided by objectives (test charters).

o Debriefing follows each session to document findings and guide further testing.

• When to Use:

o Insufficient documentation or high time pressure.

o Complement to formal techniques, leveraging tester creativity and domain knowledge.

4.4.3. Checklist-Based Testing

• What It Is:

o Use checklists derived from experience, user needs, or defect history to guide testing.

• Benefits:

o Provides consistency and guidelines when detailed test cases are unavailable.

• Maintenance:

o Update regularly to stay relevant, but avoid excessive length.


4.5 Collaboration-based Test Techniques

4.5.1. Collaborative User Story Writing

• Reviews are not a technique for collaborative user story writing; collaboration happens through conversation (one of the 3Cs below).

• Definition:
o A collaborative method to create concise user stories that detail features and their
business value.
o 3Cs:

▪ Card – the medium describing a user story (e.g., an index card, an entry in an
electronic board)
▪ Conversation – explains how the software will be used (can be documented or
verbal)
▪ Confirmation – the acceptance criteria
• Good User Story Traits:

o INVEST: Independent, Negotiable, Valuable, Estimable, Small, Testable.

4.5.2. Acceptance Criteria

• Purpose:

o Define the scope and test conditions of a user story.

• Formats:

o Scenario-oriented (Given/When/Then format used in BDD).

o Rule-oriented (e.g., a bullet-point verification list, or a tabulated mapping of inputs to outputs); see the example below.
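A minimal example (a hypothetical "cash withdrawal" user story) showing both formats:

Scenario-oriented (Given/When/Then, as used in BDD):
    Given the account balance is 100
    When the user withdraws 30
    Then 30 is dispensed and the balance becomes 70

Rule-oriented (verification list):
    - A withdrawal may not exceed the current balance
    - The balance is reduced by exactly the withdrawn amount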

4.5.3. Acceptance Test-Driven Development (ATDD)

• What It Is:

o Write test cases before implementing user stories.

• Steps:

1. Conduct a workshop to finalize acceptance criteria.

2. Write tests based on these criteria.

3. Start with positive tests, followed by negative and non-functional tests.

• Benefits:

o Tests double as executable requirements when automated.


CHAPTER 5

5.1 Test Planning

5.1.1 Exemplify the purpose and content of a test plan

What is a Test Plan?

• Documents the objectives, resources, and processes for testing.

• Ensures activities align with goals and policies.

• Acts as a communication tool among stakeholders.

Purpose:

• Define means and schedules to meet test objectives.

• Anticipate risks, schedules, and other challenges.

Key Elements:

• Test scope, objectives, and assumptions.

• Roles, responsibilities, and communication plans.

• Test approach (techniques, entry/exit criteria, etc.)

• Risks, budget, and schedule.

5.1.2 Recognize how a tester adds value to iteration and release planning

Release Planning:

• Long-term view for the entire product release.

• Define backlog and refine user stories.

• Testers contribute by:

o Writing testable user stories.

o Analyzing risks and estimating efforts.

o Planning the overall test approach.

Iteration Planning:

• Short-term focus for a single iteration.

• Testers contribute by:

o Checking user story testability.

o Breaking stories into tasks (especially testing).


o Estimating and planning test activities.

5.1.3 Compare and contrast entry criteria and exit criteria

Entry criteria and exit criteria should be defined for each test level and will differ based on the test
objectives.

Entry Criteria:

• Conditions required before testing begins. Examples:

o Resources and tools are ready.

o Test data and test cases are prepared.

o Initial quality checks (e.g., smoke tests) are passed.

Exit Criteria:

• Define when testing can end. Examples:

o Coverage and defect density meet goals.

o All planned tests are executed.

o Defects are reported.

o And binary “yes/no” criteria (e.g., planned tests have been executed, static testing has
been performed, all defects found are reported, all regression tests are automated).

o Running out of time or budget can also be an exit criterion.

o Even if none of the exit criteria are met, testing can be ended at any time if the
stakeholders accept the risks of going live without further testing.

In Agile:

• Definition of Done = Exit criteria for completion.

• Definition of Ready = Entry criteria for starting tasks.

5.1.4 Use estimation techniques to calculate the required test effort

• Estimation Based on Ratios:


o Uses historical ratios from similar projects (e.g., development-to-test effort) to estimate
effort.
o Example: If the previous project had a 3:2 development-to-test ratio, and development for
the new project is 600 days, testing is estimated at 400 days.
• Extrapolation:
o Collects data early in the current project and uses it to forecast remaining effort, often with
a mathematical model.
o Example: In an iterative development model, the team averages test effort from past
iterations to estimate the next one.
• Wideband Delphi:
o Experts individually estimate effort, then review and adjust until they reach consensus.
o Example: Used in Agile as "Planning Poker," where experts pick effort estimates from
numbered cards.
• Three-Point Estimation:
o Experts provide optimistic (a), most likely (m), and pessimistic (b) estimates, and a weighted
average is calculated (see the worked sketch after this list).
o Example: In the most popular version of this technique, the estimate is calculated as E = (a
+ 4*m + b) / 6. The advantage of this technique is that it allows the experts to calculate the
measurement error: SD = (b – a) / 6. For example, if the estimates (in person-hours) are:
a=6, m=9 and b=18, then the final estimation is 10±2 person-hours (i.e., between 8 and 12
person-hours), because E = (6 + 4*9 + 18) / 6 = 10 and SD = (18 – 6) / 6 = 2.
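A worked sketch in Java reproducing the example above:

    // Three-point estimation with the values from the example (person-hours).
    class ThreePointEstimate {
        public static void main(String[] args) {
            double a = 6, m = 9, b = 18;        // optimistic, most likely, pessimistic
            double e  = (a + 4 * m + b) / 6;    // weighted average -> 10.0
            double sd = (b - a) / 6;            // measurement error -> 2.0
            System.out.printf("E = %.0f +/- %.0f person-hours%n", e, sd);
        }
    }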

5.1.5 Apply test case prioritization

• Purpose:

o Arrange test cases in an execution schedule based on priority.

• Strategies:

1. Risk-Based Prioritization:

▪ Focus on high-risk areas first.

2. Coverage-Based Prioritization:

▪ Run test cases so as to maximize coverage (e.g., statement or functionality coverage).

3. Requirements-Based Prioritization:

▪ Prioritize critical requirements defined by stakeholders.

• Dependencies:

o Consider dependencies between test cases/features.

• Resource Constraints:

o Schedule based on tool or environment availability.


5.1.6 Recall the concepts of the test pyramid

Definition:

• A model showing test types and their granularity:

o Bottom Layer: Unit Tests – These are small, isolated tests that check individual pieces of
code. They are fast and inexpensive, making them the base of the pyramid.

o Middle Layer: Service/Integration Tests – These tests check the interactions between
components or services. They are more complex and slower than unit tests, but fewer are
needed.

o Top Layer: End-to-End (UI) Tests – These high-level tests validate complete workflows,
mimicking the user's experience. They are slower, more expensive, and usually fewer in
number.

This pyramid shape helps ensure efficient and balanced test automation, with the largest volume of tests
being unit tests, fewer integration tests, and the smallest number of UI tests.

        _______________
       |  End-to-End   |  <- High-level tests, slow and few
       |_______________|
      ___________________
     |    Integration    |  <- Medium complexity, slower
     |___________________|
    _______________________
   |         Unit          |  <- Small, isolated tests, fast and many
   |_______________________|

Principles:

• Lower layers should have more tests for fast execution.

• Higher layers focus on broader functionality with fewer tests.


5.1.7 Summarize the testing quadrants and their relationships with test levels and test types

What It Is:

• The Testing Quadrants model organizes testing activities in Agile development into four
quadrants.

• Groups tests based on purpose:

o Technology-facing vs. Business-facing.

o Support the team vs. Critique the product.

The four quadrants are:

Q1: Technology-facing, Support the Team

• Purpose: Help development by focusing on technical aspects.

• Types: Component tests, component integration tests.

• Characteristics: Automated and part of the CI process.

Q2: Business-facing, Support the Team

• Purpose: Ensure the product meets business requirements.

• Types: Functional tests, user story tests, user experience prototypes, API testing, and simulations.

• Characteristics: Check acceptance criteria, can be manual or automated.

Q3: Business-facing, Critique the Product

• Purpose: Test from the user perspective to improve usability and quality.

• Types: Exploratory testing, usability testing, user acceptance testing (UAT).

• Characteristics: User-oriented, often manual.

Q4: Technology-facing, Critique the Product

• Purpose: Validate non-functional requirements.

• Types: Smoke tests, performance tests, security tests, other non-functional tests (except
usability).

• Characteristics: Often automated, assess robustness and performance.


5.2 Risk Management

Purpose:

• Increase the likelihood of achieving objectives.

• Improve product quality and build stakeholder confidence.

Key Activities:

1. Risk Analysis: risk identification and risk assessment

2. Risk Control: risk mitigation and risk monitoring

Risk-Based Testing:

• Test activities are prioritized and managed based on risks.

5.2.1 Identify risk level by using risk likelihood and risk impact

What is Risk?

• A potential event or condition causing negative effects.

Risk Attributes:

1. Likelihood: Probability of occurrence (0 < likelihood < 1).

2. Impact: the consequences of this occurrence.

Risk Level:

• Calculation: Risk level = Risk impact * Risk Likelihood

• A combination of likelihood and impact.

• Higher levels require more urgent attention.

5.2.2 Distinguish between project risks and product risks

Project Risks:

• Affect project management and delivery.

• Examples:

o Organizational: Delays, inaccurate estimation, cost cutting.

o People: Skill shortages, conflicts, communication problems, staff shortages.

o Technical: Scope creep, inadequate tools.

o Supplier: Delivery issues, third-party failures.


Product Risks:

• Impact product quality and user experience.

• Examples:

o Missing/wrong functionality, runtime errors.

o Poor performance, security issues, bad UX.

• Consequences:

o User dissatisfaction, revenue loss, reputational damage, criminal penalties, and help desk overload.

5.2.3 Explain how product risk analysis may influence thoroughness and test scope

Goal:

• Identify and minimize risks to ensure product quality.

Steps:

1. Risk Identification:

o Use tools like brainstorming, workshops, interviews.

2. Risk Assessment:

o Categorize, prioritize, and propose mitigation actions.

o Approaches:

▪ Quantitative: Calculate risk level (likelihood × impact).

▪ Qualitative: Use risk matrices to evaluate levels (see the example matrix below).
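An illustrative qualitative risk matrix (an assumption for illustration, not prescribed by the syllabus), mapping likelihood and impact to a risk level:

                          Impact: Low    Impact: Medium    Impact: High
    Likelihood: High      Medium         High              High
    Likelihood: Medium    Low            Medium            High
    Likelihood: Low       Low            Low               Medium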

Influence on Testing:

• Define test scope, levels, and techniques.

• Prioritize critical tests to find major defects early.

• Estimate test effort and propose additional risk reduction methods.

5.2.4 Explain what measures can be taken in response to analyzed product risks

Purpose:

• Reduce identified risks through targeted actions.


Components:

1. Risk Mitigation: Implement strategies to lower risk levels.

2. Risk Monitoring:

o Check if mitigation actions are effective.

o Identify new risks.

Mitigation Strategies:

• Assign skilled testers for specific risk areas.

• Use independent testing and reviews.

• Select appropriate test types and techniques.

• Conduct dynamic testing, including regression tests.

• Perform reviews and static analysis

Risk Response Options:

• Mitigation: Reduce impact via testing or other actions.

• Acceptance: Acknowledge and proceed without action.

• Transfer: Delegate risk (e.g., to insurance).

• Contingency Plans: Prepare backup solutions.

5.3 Test Monitoring, Test Control and Test Completion

Test Monitoring

• Purpose: Collect data to track test progress and verify if exit criteria are met.

• Examples of What It Tracks:

o Risk coverage.

o Progress against schedules.

o Achievement of acceptance criteria.

Test Control

• Purpose: Use monitoring data to guide corrective actions.

• Examples of Control Actions:

o Reprioritize tests when risks materialize.

o Reassess entry/exit criteria.


o Adjust schedules for delays.

o Add resources where needed.

Test Completion

• Purpose: Collect and document outcomes for future learning and reference.

• When It Happens:

o After milestones (e.g., project completion, iteration end, software release).

5.3.1 Recall metrics used for testing

Purpose of Metrics: Measure progress, quality, and effectiveness.

• Common Metrics:

1. Project Progress: Task completion, resource usage, test effort.

2. Test Progress: Test cases executed, pass/fail rates, test environment preparation progress, test execution time.

3. Product Quality: availability, response time, mean time to failure.

4. Defect Metrics: Number of defects, their severity, and fix rates.

5. Coverage: Code or requirements coverage.

6. Cost: Testing costs and quality-related expenses.

5.3.2 Summarize the purposes, content, and audiences for test reports

Types:

1. Progress Reports (ongoing): generated by the team during test monitoring and test control to keep
stakeholders informed:

o Period summary, deviations from the plan, obstacles.

o Includes metrics and updated risks.

o Testing planned for the next period

2. Completion Reports (end of phase): generated during test completion, when a project, test
level, or test type is complete and, ideally, when its exit criteria have been met:

o Summarizes test activities, quality evaluation, and unmet risks.

o Deviations from the test plan (e.g., differences from the planned test schedule, duration,
and effort).
o Test metrics based on test progress reports

o Unmitigated risks, defects not fixed

o Highlights lessons learned.

Test progress reporting to others in the same team is often frequent and informal, while test completion
reporting follows a set template and occurs only once.

5.3.3 Exemplify how to communicate the status of testing

Methods:

• Verbal updates, dashboards, online docs, or formal reports, email, chat.

• Choose based on team structure, e.g., distributed teams may need more formal methods.

5.4 Configuration Management

Purpose:

• Manage and track testing work products (e.g., test cases, logs, results).

• Maintain traceability and control changes.

Complex Items: For complex configuration items like test environments, CM tracks components,
versions, and their relationships. Approved configurations become a baseline, modifiable only through
formal change control.

Baseline Reversion: CM allows reverting to previous baselines to reproduce earlier test outcomes if
necessary.

Support for Testing: CM uniquely identifies, version-controls, and tracks all configuration items,
maintaining traceability across test processes and ensuring all test items and documentation are clearly
referenced.

Integration with DevOps: Automated CM is typically part of DevOps pipelines, supporting continuous
integration, delivery, and deployment.

5.5 Defect Management

Purpose: Systematically handle defects from discovery to closure, including rules for their classification.

Workflow:

1. Log the anomaly.


2. Analyze and classify (e.g., severity, priority).

3. Decide actions (fix, defer, etc.).

4. Close the defect after resolution.

Defect Reports:

• Key Elements:

o Identifier, title, date, author, environment details.

o Identification of the test object and test environment

o Context of the defect (e.g., test case being run, test activity being performed, SDLC phase,
and other relevant information such as the test technique, checklist or test data being
used)

o Steps to reproduce, expected vs. actual results.

o Severity, priority, and status (e.g., open, closed, duplicate).

Tools: Automate elements like unique IDs and logs for streamlined reporting.

Examples of Use: Lessons from defects can improve processes and quality.
CHAPTER 6

6.1 Tool Support for Testing

Tools help streamline and enhance various testing activities:

1. Test Management Tools:

o Manage SDLC, requirements, tests, and defects efficiently.

2. Static Testing Tools:

o Assist with reviews and code analysis.

3. Test Design & Implementation Tools:

o Help generate test cases, data, and procedures.

4. Test Execution & Coverage Tools:

o Enable automated test execution and coverage measurement.

5. Non-Functional Testing Tools:

o Perform tasks like performance or load testing that are hard to do manually.

6. DevOps Tools:

o Automate CI/CD pipelines, build processes, and workflow tracking.

7. Collaboration Tools:

o Enhance team communication and coordination.

8. Scalability Tools:

o Use virtual machines or containers for standardized environments.

9. Other Tools:

o Even general tools like spreadsheets can support testing tasks.

6.2 Benefits and Risks of Test Automation

Benefits:

1. Efficiency Gains:

o Saves time on repetitive tasks (e.g., regression tests, data re-entry).

2. Consistency & Accuracy:

o Reduces human errors and ensures repeatability.

3. Objective Measurements:
o Provides metrics like coverage and execution stats.

4. Faster Feedback:

o Early defect detection and shorter execution times.

5. More Tester Focus:

o Frees up testers for designing better tests.

Risks:

1. Over-Reliance on Tools:

o Ignoring manual testing or critical thinking.

2. Unrealistic Expectations:

o Assuming tools are a one-stop solution.

3. High Maintenance Costs:

o Updating scripts and adapting to changes.

4. Vendor Dependency:

o Risks if a tool is discontinued or poorly supported.

5. Compatibility Issues:

o Mismatch with development platforms or standards.

6. Open Source Challenges:

o Abandonment or frequent update needs.

7. Inappropriate Tool Selection:

o Choosing tools that don’t meet regulatory or safety requirements.
