The document provides a comprehensive study guide on various testing techniques including Boundary Value Analysis (BVA), Equivalence Partitioning (EP), Decision Tables, State Transition Testing, Use Case Testing, and the Defect Life Cycle. Each section includes definitions, examples, test cases, and practical applications to illustrate the concepts. Additionally, it covers effective test case writing using BVA and EP, defect logging in JIRA, and designing end-to-end test cases for an E-commerce Shopping Cart feature.

Unit 3: Testing Techniques - Complete Study Guide

Question 1: What is Boundary Value Analysis (BVA)? Explain with examples and test cases.

Solution:
Boundary Value Analysis is a black-box testing technique that focuses on testing the boundary values of
input domains. It's based on the observation that errors tend to occur at the boundaries of input domains
rather than in the center.

Key Principles:
Test minimum boundary value

Test maximum boundary value


Test values just inside boundaries
Test values just outside boundaries

Example 1: Age Input Field (18-65)


Scenario: Testing an age input field that accepts ages between 18 and 65.

Test Cases:

1. TC001: Age = 17 (Below minimum) - Expected: Error message


2. TC002: Age = 18 (Minimum boundary) - Expected: Accept
3. TC003: Age = 19 (Just above minimum) - Expected: Accept

4. TC004: Age = 64 (Just below maximum) - Expected: Accept


5. TC005: Age = 65 (Maximum boundary) - Expected: Accept

6. TC006: Age = 66 (Above maximum) - Expected: Error message

Boundary Values Test Design:


Min-1 | Min | Min+1 | Max-1 | Max | Max+1
17 | 18 | 19 | 64 | 65 | 66
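The six boundary cases above can be sketched directly in Python. Here `validate_age` is a hypothetical implementation of the 18-65 rule, written only to make the test design concrete, not code from any real system:

```python
def validate_age(age):
    """Accept ages in the inclusive 18-65 range (hypothetical rule under test)."""
    return 18 <= age <= 65

# Min-1, Min, Min+1, Max-1, Max, Max+1 from the boundary table above
cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
for age, expected in cases.items():
    assert validate_age(age) == expected, f"age {age}: got {validate_age(age)}"
```

Note that four of the six cases sit within one unit of a boundary; that is where off-by-one mistakes (`<` instead of `<=`) would be caught.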

Example 2: File Upload Size (1MB - 10MB)


Scenario: Testing file upload functionality with size limits.

Test Cases:
1. TC007: File size = 0.9MB - Expected: Error "File too small"
2. TC008: File size = 1.0MB - Expected: Upload successful

3. TC009: File size = 1.1MB - Expected: Upload successful


4. TC010: File size = 9.9MB - Expected: Upload successful

5. TC011: File size = 10.0MB - Expected: Upload successful


6. TC012: File size = 10.1MB - Expected: Error "File too large"

Question 2: Explain Equivalence Partitioning (EP) with practical examples and test case design.

Solution:
Equivalence Partitioning divides input data into equivalent classes where all members of each class are
expected to be processed similarly by the system.

Types of Equivalence Classes:


Valid Equivalence Classes: Contain valid inputs

Invalid Equivalence Classes: Contain invalid inputs

Example 1: Student Grade System (0-100)


Scenario: Testing a grading system where grades are classified as:

A: 90-100

B: 80-89

C: 70-79

D: 60-69

F: 0-59

Equivalence Classes:

Valid Classes: [0-59], [60-69], [70-79], [80-89], [90-100]

Invalid Classes: [<0], [>100], [Non-numeric]

Test Cases:

1. TC013: Grade = 45 (F grade) - Expected: Grade F

2. TC014: Grade = 65 (D grade) - Expected: Grade D

3. TC015: Grade = 75 (C grade) - Expected: Grade C


4. TC016: Grade = 85 (B grade) - Expected: Grade B

5. TC017: Grade = 95 (A grade) - Expected: Grade A

6. TC018: Grade = -5 (Invalid) - Expected: Error message

7. TC019: Grade = 105 (Invalid) - Expected: Error message

8. TC020: Grade = "ABC" (Invalid) - Expected: Error message
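A minimal sketch of the grading logic, assuming a simple threshold implementation (`grade` is a hypothetical function, not from any real system). One representative value from each valid class exercises every partition:

```python
def grade(score):
    """Classify a 0-100 score into a letter grade; raise ValueError otherwise."""
    if not isinstance(score, (int, float)) or isinstance(score, bool):
        raise ValueError("score must be numeric")
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"

# One representative per valid class (TC013-TC017)
assert [grade(s) for s in (45, 65, 75, 85, 95)] == ["F", "D", "C", "B", "A"]
```

The invalid classes (TC018-TC020) are covered by the two `ValueError` branches: out-of-range numbers and non-numeric input.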

Example 2: Email Validation


Scenario: Testing email field validation.

Equivalence Classes:

Valid: Contains @ symbol, valid domain, proper format

Invalid: Missing @, invalid domain, special characters

Test Cases:

1. TC021: Email = "user@example.com" - Expected: Valid

2. TC022: Email = "userexample.com" - Expected: Invalid (missing @)

3. TC023: Email = "user@" - Expected: Invalid (incomplete domain)

4. TC024: Email = "@example.com" - Expected: Invalid (missing username)
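These classes can be probed with a deliberately simplified structural check. `looks_like_email` below is a sketch for illustrating the partitions only; real email validation (RFC 5322) is far more involved:

```python
def looks_like_email(value):
    """Structural check only: one '@', nonempty user part, dotted domain."""
    user, at, domain = value.partition("@")
    return (bool(at) and bool(user)
            and "@" not in domain            # reject a second '@'
            and "." in domain                # require a dotted domain
            and not domain.startswith(".")
            and not domain.endswith("."))

assert looks_like_email("user@example.com")      # valid class
assert not looks_like_email("userexample.com")   # missing @
assert not looks_like_email("user@")             # incomplete domain
assert not looks_like_email("@example.com")      # missing username
```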

Question 3: What are Decision Tables? Create decision tables with examples
and test cases.

Solution:
Decision tables are a systematic way to represent complex business logic by showing all possible
combinations of conditions and their corresponding actions.

Components:
Conditions: Input conditions or factors

Actions: Resulting actions or outputs

Rules: Combinations of conditions leading to specific actions

Example 1: Online Shopping Discount System


Business Rules:

Customer type: Premium/Regular


Order amount: >$100 or ≤$100

Promo code: Valid/Invalid

Condition      Rule 1   Rule 2   Rule 3   Rule 4   Rule 5   Rule 6   Rule 7   Rule 8
Customer Type  Premium  Premium  Premium  Premium  Regular  Regular  Regular  Regular
Order Amount   >$100    >$100    ≤$100    ≤$100    >$100    >$100    ≤$100    ≤$100
Promo Code     Valid    Invalid  Valid    Invalid  Valid    Invalid  Valid    Invalid

Actions
20% Discount   ✓
15% Discount            ✓        ✓        ✓
10% Discount                                       ✓        ✓        ✓
No Discount                                                                   ✓

Test Cases:

1. TC025: Premium, $150, Valid Code - Expected: 20% discount

2. TC026: Premium, $150, Invalid Code - Expected: 15% discount

3. TC027: Regular, $50, Valid Code - Expected: 10% discount


4. TC028: Regular, $50, Invalid Code - Expected: No discount
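The eight rules collapse naturally into code. `discount` below is a hypothetical implementation of the table, returning the discount percentage for each rule:

```python
def discount(customer, amount, promo_valid):
    """Return the discount percentage per the 8-rule table above."""
    if customer == "Premium":
        # Rule 1: all conditions favorable; Rules 2-4: any other Premium combo
        return 20 if amount > 100 and promo_valid else 15
    # Regular customers: Rule 8 gets nothing, Rules 5-7 get 10%
    if amount <= 100 and not promo_valid:
        return 0
    return 10

# TC025-TC028
assert discount("Premium", 150, True) == 20
assert discount("Premium", 150, False) == 15
assert discount("Regular", 50, True) == 10
assert discount("Regular", 50, False) == 0
```

Writing the table as code also makes the redundancy visible: the eight rules reduce to four distinct outcomes, which is why only four test cases are needed for full rule coverage.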

Example 2: ATM Withdrawal System


Conditions: Account Balance, PIN Status, Card Status

Condition           Rule 1  Rule 2  Rule 3  Rule 4  Rule 5  Rule 6
Sufficient Balance  Yes     Yes     Yes     No      No      No
Valid PIN           Yes     Yes     No      Yes     No      No
Valid Card          Yes     No      Yes     Yes     Yes     No

Actions
Dispense Cash       ✓
Block Card                  ✓       ✓               ✓       ✓
Show Error                  ✓       ✓       ✓       ✓       ✓

Question 4: Explain State Transition Testing with examples and test case
design.

Solution:
State Transition Testing is used to test the behavior of a system that transitions through different states
based on events or inputs.

Components:
States: Different conditions the system can be in

Events: Triggers that cause state changes


Transitions: Movement from one state to another

Actions: What happens during transitions

Example 1: User Login System

States: Logged Out → Entering Credentials → Logged In → Locked Out

State Transition Diagram:


[Logged Out] --enter credentials--> [Entering Credentials]
[Entering Credentials] --valid login--> [Logged In]
[Entering Credentials] --invalid login (< 3 attempts)--> [Logged Out]
[Entering Credentials] --invalid login (3 attempts)--> [Locked Out]
[Logged In] --logout--> [Logged Out]
[Locked Out] --admin unlock--> [Logged Out]

Test Cases:

1. TC029: From "Logged Out" with valid credentials → Expected: "Logged In"

2. TC030: From "Logged Out" with invalid credentials (1st attempt) → Expected: "Logged Out"

3. TC031: From "Logged Out" with invalid credentials (3rd attempt) → Expected: "Locked Out"

4. TC032: From "Logged In" click logout → Expected: "Logged Out"

5. TC033: From "Locked Out" admin unlock → Expected: "Logged Out"
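The transition diagram maps directly onto a lookup table keyed by (state, event). The event names below are informal labels for the triggers in the diagram, not identifiers from any real system:

```python
# Transition table for the login example: (state, event) -> next state.
TRANSITIONS = {
    ("Logged Out", "enter credentials"):           "Entering Credentials",
    ("Entering Credentials", "valid login"):       "Logged In",
    ("Entering Credentials", "invalid login"):     "Logged Out",
    ("Entering Credentials", "invalid login 3rd"): "Locked Out",
    ("Logged In", "logout"):                       "Logged Out",
    ("Locked Out", "admin unlock"):                "Logged Out",
}

def next_state(state, event):
    """Look up the next state; unknown pairs are invalid transitions."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"no transition for {event!r} in state {state!r}")
    return TRANSITIONS[(state, event)]

assert next_state("Logged In", "logout") == "Logged Out"
assert next_state("Locked Out", "admin unlock") == "Logged Out"
```

A useful side effect of this representation: every key in the table is one valid-transition test case, and every absent (state, event) pair is a candidate negative test.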

Example 2: Order Processing System


States: Order Placed → Payment Processing → Shipped → Delivered → Cancelled

State Transition Table:


Current State | Event | Next State
Order Placed | Payment Success | Payment Processing
Order Placed | Cancel Order | Cancelled
Payment Processing| Ship Order | Shipped
Shipped | Deliver Order | Delivered
Any State | Cancel Order | Cancelled

Test Cases:

1. TC034: Order Placed + Payment Success → Expected: Payment Processing

2. TC035: Payment Processing + Ship Order → Expected: Shipped


3. TC036: Shipped + Deliver Order → Expected: Delivered

4. TC037: Any State + Cancel Order → Expected: Cancelled

Question 5: What is Use Case Testing? Explain with examples and detailed test
cases.

Solution:
Use Case Testing is a black-box testing technique that uses use cases to design test cases. It focuses on
testing the system's functionality from an end-user perspective.

Components:
Actor: User or system that interacts with the system
Use Case: Specific functionality or feature

Preconditions: What must be true before the use case

Main Flow: Normal sequence of steps


Alternative Flows: Variations from main flow

Postconditions: System state after use case completion

Example 1: E-commerce Shopping Cart


Use Case: Add Item to Cart

Actors: Customer
Preconditions: Customer is on product page, product is available
Main Flow:
1. Customer selects product

2. Customer specifies quantity

3. Customer clicks "Add to Cart"

4. System adds item to cart

5. System displays confirmation message

Alternative Flows:

A1: Product out of stock

A2: Invalid quantity entered


A3: Customer not logged in

Test Cases:

1. TC038: Main Flow - Add available product with valid quantity


Steps: Select product → Enter quantity "2" → Click "Add to Cart"

Expected: Item added, confirmation shown

2. TC039: Alternative Flow A1 - Product out of stock


Steps: Select out-of-stock product → Click "Add to Cart"
Expected: Error "Product unavailable"

3. TC040: Alternative Flow A2 - Invalid quantity


Steps: Select product → Enter quantity "0" → Click "Add to Cart"

Expected: Error "Invalid quantity"

Example 2: Bank Account Transfer


Use Case: Transfer Money Between Accounts

Test Cases:

1. TC041: Successful transfer with sufficient balance

2. TC042: Transfer with insufficient balance


3. TC043: Transfer to invalid account number

4. TC044: Transfer with network timeout

Question 6: Explain Defect Life Cycle with detailed phases and examples.

Solution:
The Defect Life Cycle describes the various stages a defect goes through from discovery to closure.

Defect Life Cycle Phases:

New → Assigned → Open → Fixed → Test → Verified → Closed


↓ ↓ ↓ ↓ ↓ ↓ ↓
Rejected Deferred Reopen Reopen Reopen Reopen Reopen

Phase Details:
1. New: Defect reported by tester

2. Assigned: Defect assigned to developer

3. Open: Developer starts working on defect

4. Fixed: Developer fixes the defect


5. Test: Defect sent back to tester for verification

6. Verified: Tester confirms fix is working


7. Closed: Defect is closed

8. Rejected: Defect is not valid


9. Deferred: Fix postponed to future release
10. Reopen: Defect still exists after fix
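The lifecycle can be sketched as an allowed-transition map. The exact edges (for instance, where Reopen leads next) vary by team and tool; the map below is one plausible reading of the phases above, not a standard:

```python
# Allowed status transitions, as a dict of current status -> permitted next statuses.
ALLOWED = {
    "New":      {"Assigned", "Rejected", "Deferred"},
    "Assigned": {"Open", "Deferred"},
    "Open":     {"Fixed"},
    "Fixed":    {"Test"},
    "Test":     {"Verified", "Reopen"},
    "Verified": {"Closed"},
    "Reopen":   {"Assigned"},
}

def move(status, new_status):
    """Change a defect's status, rejecting transitions not in the map."""
    if new_status not in ALLOWED.get(status, set()):
        raise ValueError(f"cannot move from {status} to {new_status}")
    return new_status

# The happy path: New through Closed
status = "New"
for step in ("Assigned", "Open", "Fixed", "Test", "Verified", "Closed"):
    status = move(status, step)
assert status == "Closed"
```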

Example 1: Login Page Defect


Defect: "Login button not working on mobile devices"

Life Cycle Journey:

Day 1: Tester finds issue → Status: New


Day 2: Assigned to Mobile Developer → Status: Assigned

Day 3: Developer starts investigation → Status: Open


Day 5: Developer implements fix → Status: Fixed

Day 6: Sent to tester for verification → Status: Test

Day 7: Tester verifies fix works → Status: Verified

Day 7: Defect marked complete → Status: Closed

Example 2: Payment Processing Defect


Defect: "Payment fails for amounts over $1000"
Life Cycle with Reopen:

Initial fix attempted but didn't work → Reopen

Second fix applied → Fixed → Test → Verified → Closed

Question 7: How do you write effective test cases using BVA and EP? Provide
detailed examples.

Solution:
Combining BVA and EP creates comprehensive test coverage by testing both boundary conditions and
representative values from each equivalence class.

Step-by-Step Process:
1. Identify input domains

2. Determine equivalence classes

3. Identify boundary values

4. Design test cases covering both techniques

Example 1: Password Validation System


Requirements:

Length: 8-20 characters

Must contain: uppercase, lowercase, number, special character

Equivalence Classes:

Valid: 8-20 chars with all required elements

Invalid: <8 chars, >20 chars, missing required elements

Boundary Values: 7, 8, 9, 19, 20, 21 characters

Combined Test Cases:

1. TC045: Length=7, all elements - Expected: Invalid (too short)

2. TC046: Length=8, all elements - Expected: Valid (min boundary)

3. TC047: Length=9, all elements - Expected: Valid (min+1)

4. TC048: Length=19, all elements - Expected: Valid (max-1)

5. TC049: Length=20, all elements - Expected: Valid (max boundary)


6. TC050: Length=21, all elements - Expected: Invalid (too long)
7. TC051: Length=10, no uppercase - Expected: Invalid (missing element)

8. TC052: Length=10, no numbers - Expected: Invalid (missing element)
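The combined rules can be checked with a short validator sketch. `valid_password` is hypothetical, and `string.punctuation` stands in for whatever "special character" set the real requirement defines:

```python
import string

def valid_password(pw):
    """8-20 chars with at least one upper, lower, digit, and special char."""
    if not 8 <= len(pw) <= 20:
        return False
    return (any(c.isupper() for c in pw)
            and any(c.islower() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(c in string.punctuation for c in pw))

assert valid_password("Aa1!Aa1!")                    # length 8: min boundary (TC046)
assert not valid_password("Aa1!Aa1")                 # length 7: too short (TC045)
assert not valid_password("Aa1!Aa1!Aa1!Aa1!Aa1!x")   # length 21: too long (TC050)
assert not valid_password("aa1!aa1!aa")              # no uppercase (TC051)
```

The length check carries the BVA cases; the four `any(...)` checks carry the EP cases for missing character classes.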

Example 2: Discount Calculation System


Requirements:

Order amount: $1-$10000

Discount rates:
$1-$100: 5%

$101-$500: 10%

$501-$10000: 15%

Combined Test Cases:

1. TC053: Amount=$0.99 - Expected: Invalid (below minimum)


2. TC054: Amount=$1.00 - Expected: 5% discount (boundary)

3. TC055: Amount=$50 - Expected: 5% discount (equivalence)


4. TC056: Amount=$100 - Expected: 5% discount (boundary)

5. TC057: Amount=$101 - Expected: 10% discount (boundary)


6. TC058: Amount=$500 - Expected: 10% discount (boundary)

7. TC059: Amount=$501 - Expected: 15% discount (boundary)


8. TC060: Amount=$10000 - Expected: 15% discount (boundary)

9. TC061: Amount=$10001 - Expected: Invalid (above maximum)
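As a sketch, the tiers map onto a simple chain of range checks (`discount_rate` is hypothetical; the boundaries follow TC053-TC061):

```python
def discount_rate(amount):
    """Return the discount rate for a $1-$10000 order, or raise if out of range."""
    if amount < 1 or amount > 10000:
        raise ValueError("amount out of range")
    if amount <= 100:
        return 0.05
    if amount <= 500:
        return 0.10
    return 0.15

# Boundary values on each side of every tier edge
assert discount_rate(1) == 0.05
assert discount_rate(100) == 0.05
assert discount_rate(101) == 0.10
assert discount_rate(500) == 0.10
assert discount_rate(501) == 0.15
assert discount_rate(10000) == 0.15
```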

Question 8: How do you log defects in JIRA? Explain the process with
examples.

Solution:
JIRA is a popular defect tracking tool that helps manage the entire defect lifecycle from creation to
resolution.

JIRA Defect Fields:


Project: Project name
Issue Type: Bug, Story, Task, etc.
Summary: Brief description
Priority: Highest, High, Medium, Low, Lowest

Severity: Critical, Major, Minor, Trivial


Component: Module/feature affected

Environment: Test environment details

Description: Detailed steps and expected vs actual results


Attachments: Screenshots, logs, videos

Example 1: Login Functionality Defect


JIRA Defect Entry:

Project: E-commerce Website


Issue Type: Bug
Summary: Login fails with valid credentials on Chrome browser
Priority: High
Severity: Major
Component: Authentication
Environment: Windows 10, Chrome 91.0.4472.124

Description:
Steps to Reproduce:
1. Navigate to login page
2. Enter valid username: "testuser@example.com"
3. Enter valid password: "ValidPass123"
4. Click Login button

Expected Result: User should be logged in successfully


Actual Result: Error message "Invalid credentials" appears

Additional Information:
- Issue occurs only on Chrome browser
- Works fine on Firefox and Safari
- Tested on multiple user accounts

Attachments:
- login_error_screenshot.png
- browser_console_log.txt

Example 2: Shopping Cart Defect


JIRA Fields:

Summary: Items removed from cart when page refreshed

Priority: Medium
Severity: Minor

Steps: Add items → Refresh page → Items disappear

Environment: Mobile Safari, iOS 14.6

Question 9: Design end-to-end test cases for E-commerce Shopping Cart feature.

Solution:
End-to-end testing verifies the complete user journey from start to finish, ensuring all integrated
components work together.

E-commerce Cart User Journey:


1. Browse products

2. Add items to cart


3. View cart
4. Update quantities

5. Apply coupons

6. Proceed to checkout

7. Complete payment
8. Order confirmation

Detailed Test Scenarios:

Test Scenario 1: Happy Path - Complete Purchase


TC062: End-to-end successful purchase flow
Preconditions: User has account, products are available, payment method is set up

Test Steps:

1. Login to application
2. Search for "Laptop"

3. Select product from search results


4. Add product to cart (quantity: 2)
5. View cart contents

6. Apply discount coupon "SAVE10"

7. Proceed to checkout

8. Enter shipping address


9. Select payment method (Credit Card)
10. Complete payment

11. Verify order confirmation

Expected Results:

Each step completes successfully


Cart shows correct items and quantities

Discount applied correctly

Payment processed
Order confirmation email received

Order appears in user's order history

Test Scenario 2: Cart Management


TC063: Add, update, and remove items from cart

Test Steps:

1. Add 3 different products to cart


2. Verify cart shows 3 items

3. Update quantity of first item from 1 to 3


4. Remove second item completely

5. Verify cart totals update correctly

Test Scenario 3: Error Handling


TC064: Handle out-of-stock scenarios

Test Steps:

1. Add product to cart


2. Product becomes out-of-stock
3. Attempt to proceed to checkout
4. Verify appropriate error handling

Question 10: What is Bug Tracking? Explain the process with examples and
metrics.

Solution:
Bug tracking is the process of logging, monitoring, and managing defects throughout their lifecycle to
ensure systematic resolution.

Bug Tracking Process:


1. Detection: Finding the defect
2. Logging: Recording defect details
3. Assignment: Assigning to appropriate team member

4. Resolution: Fixing the defect


5. Verification: Confirming the fix

6. Closure: Marking defect as resolved

Key Metrics:
Defect Density: Defects per module/KLOC
Defect Removal Efficiency: % of defects found before release

Defect Age: Time from detection to closure


Defect Severity Distribution: Critical/Major/Minor/Trivial ratios

Example Bug Tracking Workflow:


Bug: Shopping cart total calculation incorrect

Tracking Details:

Bug ID: CART-001

Reporter: Test Engineer


Assignee: Development Team Lead

Priority: High
Status History:
Day 1: New
Day 2: Assigned

Day 3: In Progress

Day 5: Resolved
Day 6: Verified

Day 6: Closed

Metrics Example:

Detection Time: 2 hours of testing

Resolution Time: 3 days


Total Age: 6 days

Severity: Major

Question 11: Compare and contrast different testing techniques with examples.

Solution:

Comparison Matrix:
Technique         Type       Focus                When to Use         Strengths                Weaknesses
BVA               Black-box  Boundaries           Numeric inputs      Efficient bug detection  Limited to boundary errors
EP                Black-box  Equivalence classes  Input validation    Reduces test cases       May miss boundary issues
Decision Table    Black-box  Business logic       Complex rules       Complete coverage        Can become complex
State Transition  Black-box  State changes        Workflow testing    Models real behavior     Requires state knowledge
Use Case          Black-box  User scenarios       End-to-end testing  User-focused             Time-consuming

Example Comparison: Testing Age Input Field


Using BVA:

Test ages: 17, 18, 19, 64, 65, 66

Focus: Boundary values


Using EP:

Test representatives: 10 (child), 25 (adult), 70 (senior), -5 (invalid)

Focus: Each category

Using Decision Table:

Consider age ranges, user types, and permissions


Focus: Business rules

When to Use Which Technique:


BVA: Numeric inputs, ranges, limits
EP: Input validation, categories
Decision Tables: Complex business logic

State Transition: Workflow applications


Use Case: End-user functionality

Question 12: How do you ensure comprehensive test coverage using multiple
testing techniques?

Solution:

Test Coverage Strategy:


1. Requirement Analysis: Identify all testable requirements
2. Technique Selection: Choose appropriate techniques

3. Test Case Design: Create comprehensive test suite

4. Coverage Measurement: Track coverage metrics


5. Gap Analysis: Identify and fill coverage gaps

Example: Login System Comprehensive Testing


Requirements Coverage:

Username validation (EP + BVA)


Password validation (EP + BVA)

Login attempts (State Transition)

User roles (Decision Table)


Complete login flow (Use Case)

Test Case Distribution:

1. BVA Test Cases (20%): Username length, password length boundaries

2. EP Test Cases (30%): Valid/invalid credentials, user types

3. Decision Table (20%): Role-based access combinations


4. State Transition (15%): Login state changes
5. Use Case (15%): End-to-end login scenarios

Coverage Metrics:

Requirement Coverage: 100% (all requirements tested)

Code Coverage: 85% (lines of code executed)


Path Coverage: 90% (execution paths covered)

Condition Coverage: 95% (decision points tested)

Coverage Example:

Login System - Total Requirements: 10


├── Functional Requirements: 7
│ ├── Covered by EP: 4
│ ├── Covered by BVA: 2
│ └── Covered by Use Case: 1
├── Non-functional Requirements: 2
│ └── Covered by Performance Tests: 2
└── Interface Requirements: 1
└── Covered by UI Tests: 1

Coverage Percentage: 100%

Question 13: Explain integration of QA practices in SDLC with examples.

Solution:

QA Integration in SDLC Phases:

1. Requirements Phase
QA Activities:
Requirements review

Testability analysis

Acceptance criteria definition

Example: E-commerce project requirements review

Requirement: "User should be able to add items to cart"


QA Input: "Define maximum cart size, behavior when limit exceeded"
Testable Criteria: "Cart can hold maximum 50 items, error message for exceeding limit"

2. Design Phase
QA Activities:

Test strategy creation

Test plan development


Test case design

Example: Shopping cart design review

Design: Cart uses session storage


QA Consideration: "What happens if session expires? How is data persisted?"

Test Strategy: Include session timeout scenarios

3. Development Phase
QA Activities:

Unit test review


Static code analysis

Test environment setup

Example: Code review for payment module

Developer Code: Payment processing function

QA Review: Check error handling, input validation, logging

4. Testing Phase
QA Activities:

Test execution
Defect reporting
Test result analysis

Example: Comprehensive cart testing

Functional Testing: Add/remove items, calculations

Integration Testing: Cart + Payment + Inventory


System Testing: End-to-end user scenarios

5. Deployment Phase
QA Activities:

Production testing

Monitoring setup

Go-live support

Example: Production deployment verification

Smoke Testing: Core functionality verification

Performance Monitoring: Response time tracking


Error Monitoring: Exception tracking setup

Question 14: Design test cases for complex business scenarios using decision
tables.

Solution:

Complex Scenario: Insurance Premium Calculation


Business Rules:

Age categories: <25, 25-40, >40

Vehicle types: Car, Motorcycle, Truck


Driving history: Clean, Minor violations, Major violations
Coverage types: Basic, Standard, Premium

Decision Table Design:


Rule                     1      2           3      4      5           6      7      8           9      10     11

Conditions
Age Category             <25    <25         <25    25-40  25-40       25-40  >40    >40         >40    Any    Any
Vehicle Type             Car    Motorcycle  Truck  Car    Motorcycle  Truck  Car    Motorcycle  Truck  Any    Any
Driving History          Clean  Clean       Clean  Clean  Clean       Clean  Clean  Clean       Clean  Minor  Major
Coverage Type            Basic  Basic       Basic  Basic  Basic       Basic  Basic  Basic       Basic  Any    Any

Actions
Premium Rate             $800   $1200       $1500  $600   $1000       $1300  $500   $800        $1100  +20%   +50%
Discount Eligible        No     No          No     Yes    Yes         Yes    Yes    Yes         Yes    No     No
Additional Verification  Yes    Yes         Yes    No     No          No     No     No          No     Yes    Yes

Test Cases from Decision Table:


TC065: Young driver with car and clean record

Input: Age=22, Vehicle=Car, History=Clean, Coverage=Basic

Expected: Premium=$800, No discount, Additional verification required

TC066: Middle-aged driver with motorcycle

Input: Age=35, Vehicle=Motorcycle, History=Clean, Coverage=Basic


Expected: Premium=$1000, Discount eligible, No additional verification

TC067: Senior driver with major violations

Input: Age=45, Vehicle=Car, History=Major, Coverage=Standard

Expected: Base premium + 50% surcharge, No discount

Complex Scenario 2: Online Banking Transaction Approval


Business Rules Matrix:

Transaction amount ranges


User authentication levels
Time of transaction

Geographical location
Account standing

This creates a comprehensive decision table with 32+ combinations requiring systematic test case design.

Question 15: How do you measure the effectiveness of different testing techniques? Provide metrics and examples.

Solution:

Key Effectiveness Metrics:

1. Defect Detection Efficiency


Formula: (Defects found by technique / Total defects) × 100

Example Measurement:

Project: E-commerce Application


Total Defects Found: 100

Technique Effectiveness:
├── Boundary Value Analysis: 25 defects (25%)
├── Equivalence Partitioning: 20 defects (20%)
├── Decision Table Testing: 15 defects (15%)
├── State Transition: 12 defects (12%)
├── Use Case Testing: 18 defects (18%)
└── Ad-hoc Testing: 10 defects (10%)
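The percentages above follow directly from the raw counts and the formula, which a few lines make explicit:

```python
# Defect counts per technique, from the breakdown above
defects_by_technique = {
    "BVA": 25, "EP": 20, "Decision Table": 15,
    "State Transition": 12, "Use Case": 18, "Ad-hoc": 10,
}
total = sum(defects_by_technique.values())
# Detection efficiency = (defects found by technique / total defects) x 100
efficiency = {t: n / total * 100 for t, n in defects_by_technique.items()}

assert total == 100
assert efficiency["BVA"] == 25.0
assert efficiency["Use Case"] == 18.0
```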

2. Cost-Benefit Analysis
Metrics:

Time to create test cases


Time to execute test cases

Number of defects found


Cost per defect found

Example:
BVA vs EP Comparison:
┌────────────────┬─────────┬─────────┐
│ Metric │ BVA │ EP │
├────────────────┼─────────┼─────────┤
│ Test Cases │ 20 │ 15 │
│ Creation Time │ 4 hrs │ 3 hrs │
│ Execution Time │ 6 hrs │ 5 hrs │
│ Defects Found │ 12 │ 8 │
│ Cost per Defect│ $83.33 │ $62.50 │
└────────────────┴─────────┴─────────┘

3. Coverage Metrics
Types:

Requirements coverage

Code coverage

Path coverage
Condition coverage

Example Coverage Report:

Login Module Coverage:


├── Requirements Coverage: 95%
│ ├── BVA covered: 8/10 requirements
│ ├── EP covered: 9/10 requirements
│ └── Use Case covered: 6/10 requirements
├── Code Coverage: 88%
│ ├── Lines covered: 440/500
│ └── Branches covered: 35/40
└── Defect Coverage: 92%
├── Critical defects: 5/5 found
├── Major defects: 12/13 found
└── Minor defects: 8/10 found

4. Quality Metrics
Measurements:

Defect density per technique

False positive rate


Test case effectiveness ratio

Example Quality Analysis:

Technique Quality Comparison:


┌─────────────────┬──────────┬─────────────┬──────────────┐
│ Technique │ Defects │ False │ Effectiveness│
│ │ Found │ Positives │ Ratio │
├─────────────────┼──────────┼─────────────┼──────────────┤
│ BVA │ 25 │ 2 │ 92% │
│ EP │ 20 │ 1 │ 95% │
│ Decision Table │ 15 │ 0 │ 100% │
│ State Transition│ 12 │ 3 │ 75% │
│ Use Case │ 18 │ 1 │ 94% │
└─────────────────┴──────────┴─────────────┴──────────────┘

5. ROI Calculation
Formula: (Benefit - Cost) / Cost × 100

Example ROI Analysis:

Testing Technique ROI (6-month project):

BVA Implementation:
├── Cost: $5,000 (time + resources)
├── Benefit: $15,000 (defects prevented in production)
├── ROI: (15,000 - 5,000) / 5,000 × 100 = 200%

EP Implementation:
├── Cost: $3,500
├── Benefit: $12,000
├── ROI: (12,000 - 3,500) / 3,500 × 100 = 243%

Decision Table Testing:


├── Cost: $4,000
├── Benefit: $18,000
├── ROI: (18,000 - 4,000) / 4,000 × 100 = 350%
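The three ROI figures can be recomputed from the formula above:

```python
def roi_percent(cost, benefit):
    """ROI = (Benefit - Cost) / Cost x 100, as defined above."""
    return (benefit - cost) / cost * 100

assert roi_percent(5_000, 15_000) == 200.0       # BVA
assert round(roi_percent(3_500, 12_000)) == 243  # EP (242.86 rounded)
assert roi_percent(4_000, 18_000) == 350.0       # Decision Table
```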

6. Technique Comparison Matrix


Effectiveness Factor   BVA   EP      Decision Table  State Transition  Use Case
Defect Detection Rate  High  Medium  Very High       Medium            High
Implementation Cost    Low   Low     Medium          High              Medium
Maintenance Effort     Low   Low     Medium          Medium            High
Learning Curve         Easy  Easy    Medium          Hard              Medium
Automation Potential   High  High    Medium          Low               Medium
Scalability            High  High    Medium          Low               Medium

7. Practical Effectiveness Assessment


Case Study: Banking Application

6-Month Testing Results:

Total Project Defects: 150


Production Defects: 25 (83% caught in testing)

Technique Contribution:
├── BVA: 35 defects (23.3%)
│ ├── Input validation errors: 30
│ └── Boundary condition failures: 5
├── EP: 30 defects (20%)
│ ├── Invalid data handling: 25
│ └── Category-specific errors: 5
├── Decision Tables: 40 defects (26.7%)
│ ├── Business logic errors: 35
│ └── Rule combination issues: 5
├── State Transition: 25 defects (16.7%)
│ ├── Workflow issues: 20
│ └── State inconsistencies: 5
└── Use Case Testing: 20 defects (13.3%)
├── End-to-end flow issues: 15
└── Integration problems: 5

8. Recommendations for Technique Selection


High-ROI Combinations:

1. BVA + EP: For input validation (90% coverage with minimal effort)
2. Decision Tables + State Transition: For complex business logic
3. Use Case + BVA: For user-facing features
Project-Specific Recommendations:

E-commerce Platform:
├── Core Features: Use Case Testing (70%)
├── Payment Module: Decision Tables (60%)
├── Search/Filter: BVA + EP (80%)
└── User Management: State Transition (50%)

Banking System:
├── Transaction Processing: Decision Tables (80%)
├── Account Management: State Transition (70%)
├── Input Validation: BVA + EP (90%)
└── User Workflows: Use Case Testing (60%)

Comprehensive Test Case Templates

Template 1: BVA Test Case


Test Case ID: TC_BVA_001
Test Case Title: Age Input Boundary Testing
Technique Used: Boundary Value Analysis
Priority: High
Severity: Medium

Pre-conditions:
- Application is accessible
- Age input field is visible

Test Data:
- Valid Range: 18-65
- Test Values: 17, 18, 19, 64, 65, 66

Test Steps:
1. Navigate to age input field
2. Enter test value
3. Submit form
4. Observe result

Expected Results:
- 17: Error message "Age must be 18 or above"
- 18: Accepted
- 19: Accepted
- 64: Accepted
- 65: Accepted
- 66: Error message "Age must be 65 or below"

Post-conditions:
- System returns to initial state for next test

Template 2: EP Test Case


Test Case ID: TC_EP_001
Test Case Title: Email Format Validation
Technique Used: Equivalence Partitioning
Priority: High
Severity: Major

Equivalence Classes:
- Valid: Proper email format with @ and domain
- Invalid: Missing @, missing domain, invalid characters

Test Data Sets:


Valid Class: "user@example.com", "first.last@example.org"
Invalid Class: "userexample.com", "user@", "@example.com", "user@domain"

Test Steps:
1. Enter email from test data
2. Attempt to submit
3. Verify system response

Expected Results:
- Valid emails: Accepted and processed
- Invalid emails: Error message displayed

Template 3: Decision Table Test Case


Test Case ID: TC_DT_001
Test Case Title: Loan Approval Decision Testing
Technique Used: Decision Table
Priority: Critical
Severity: High

Decision Factors:
- Credit Score: Good/Average/Poor
- Income Level: High/Medium/Low
- Employment Status: Employed/Unemployed
- Loan Amount: Within/Exceeds limit

Test Scenarios: (Based on decision table rules)


Rule 1: Good credit + High income + Employed + Within limit = Approved
Rule 2: Poor credit + Low income + Unemployed + Exceeds limit = Rejected
[Continue for all rules...]

Expected Results:
Each rule combination should produce the correct approval/rejection decision

Template 4: State Transition Test Case

Test Case ID: TC_ST_001


Test Case Title: Order Status Transition Testing
Technique Used: State Transition
Priority: High
Severity: Major

State Transitions:
Placed → Processing → Shipped → Delivered
Placed → Cancelled

Test Scenarios:
1. Normal flow: Placed → Processing → Shipped → Delivered
2. Cancellation: Placed → Cancelled
3. Invalid transitions: Delivered → Processing (should fail)

Expected Results:
- Valid transitions: State changes successfully
- Invalid transitions: Error message, state unchanged
Template 5: Use Case Test Case

Test Case ID: TC_UC_001


Test Case Title: Complete Purchase Flow
Technique Used: Use Case Testing
Priority: Critical
Severity: High

Use Case: Complete Online Purchase


Actor: Customer
Goal: Successfully purchase items

Main Success Scenario:


1. Customer browses products
2. Customer adds items to cart
3. Customer proceeds to checkout
4. Customer enters payment details
5. System processes payment
6. System confirms order
7. Customer receives confirmation

Alternative Flows:
A1: Payment fails - return to payment page
A2: Item out of stock - remove from cart
A3: Session timeout - save cart, redirect to login

Test Steps: [Detailed steps for main and alternative flows]

Expected Results:
- Main flow: Order completed successfully
- Alternative flows: Appropriate error handling

Quality Assurance Best Practices

1. Test Case Design Principles


Completeness: Cover all requirements

Consistency: Use standard formats

Traceability: Link to requirements


Maintainability: Easy to update
Reusability: Can be used across projects
2. Defect Prevention Strategies
Early Testing: Start testing in requirements phase
Risk-Based Testing: Focus on high-risk areas

Continuous Integration: Automated testing in CI/CD

Code Reviews: Peer review before testing


Static Analysis: Automated code quality checks

3. Testing Tool Integration

Testing Ecosystem:
├── Test Management: JIRA, TestRail, Zephyr
├── Automation Tools: Selenium, Cypress, Playwright
├── Performance Tools: JMeter, LoadRunner
├── API Testing: Postman, REST Assured
├── CI/CD Integration: Jenkins, GitLab CI
└── Reporting: Allure, ExtentReports

4. Metrics and KPIs


Test Coverage: % of requirements tested
Defect Density: Defects per module/KLOC

Test Execution Rate: Tests executed per day


Defect Leakage: Production defects vs test defects

Customer Satisfaction: User-reported issues

5. Continuous Improvement
Retrospectives: Regular team reviews

Process Optimization: Streamline workflows


Tool Evaluation: Adopt better tools
Skill Development: Team training programs

Industry Benchmarking: Compare with standards

Conclusion
This comprehensive study guide covers all major testing techniques with:
✅ 15 detailed questions with solutions
✅ 2 practical examples per technique
✅ Detailed test cases (TC001-TC067+)
✅ Process workflows and diagrams
✅ JIRA integration examples
✅ E-commerce end-to-end scenarios
✅ Metrics and effectiveness measurements
✅ Best practices and templates
Key Takeaways:

1. Each testing technique has specific strengths and use cases


2. Combining techniques provides better coverage than using single techniques
3. Proper documentation and traceability are essential

4. Metrics help measure and improve testing effectiveness


5. Integration with development lifecycle improves overall quality

Exam Preparation Tips:

Practice writing test cases using each technique

Understand when to apply which technique


Memorize key formulas for metrics calculation
Practice drawing state transition diagrams

Be able to create decision tables for complex scenarios

Good luck with your exams!
