Testing: Instructor: Iqra Javed

The document provides an overview of software testing. It discusses the role of static and dynamic analysis in software quality assessment. It defines key terms like verification, validation, faults, errors, defects, and failures. It describes the objectives of testing, what constitutes a test case, expected outcomes, and challenges with complete testing. It also covers different testing levels, sources of information for test selection, and white-box vs black-box testing approaches.


Lecture 2

Testing
INSTRUCTOR: IQRA JAVED
Role of Testing

 Software quality assessment is divided into two categories:
 Static analysis
 It examines the code and reasons over all behaviors that might arise at run time
 Examples: code review, inspection, and algorithm analysis
 Dynamic analysis
 Actual program execution to expose possible program failures
 One observes some representative program behaviors and reaches conclusions about the quality of the system
Role of Testing Cont.

 Static and dynamic analysis are complementary in nature
 The focus is to combine the strengths of both approaches
Verification & Validation

 Verification
 Evaluation of a software system that helps determine whether the product of a given development phase satisfies the requirements established before the start of that phase
 Building the product correctly
 Validation
 Evaluation of a software system that helps determine whether the product meets its intended use
 Building the correct product
Failure, Error, Fault and Defect

 Fault
 A fault is the adjudged (declared) cause of an error
 Defect
 A synonym for fault
 Also known as a bug
 Error
 An error is a state of the system
 An error state could lead to a failure in the absence of any corrective action by the system
 Failure
 A failure is said to occur whenever the external behavior of a system does not conform to that prescribed in the system specification
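The fault/error/failure chain above can be sketched in code. This is a minimal, hypothetical example (the function `average` and its bug are not from the slides): a single fault in the source leads to an erroneous internal state, which surfaces as an externally visible failure.

```python
def average(xs):
    """Intended spec: return the arithmetic mean of the list xs."""
    total = 0
    # FAULT: the slice skips the last element (off-by-one mistake)
    for x in xs[:-1]:
        total += x          # ERROR: 'total' is now an incorrect internal state
    return total / len(xs)  # FAILURE: the external output deviates from the spec

# the spec says the average of [2, 4, 6] is 4, but the faulty code returns 2.0
print(average([2, 4, 6]))
```

Note that the fault exists whether or not the program runs; the error and the failure only appear once execution reaches the faulty statement.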
The Objectives of Testing

 It does work
 The objective is to show that a unit of code, or the system, works
 It does not work
 Once it works, the next objective is to find faults in the unit or system; the idea is to try to make the unit (or the system) fail
 Reduce the risk of failures
 Objective: bring the risk of failure down to an acceptable level by iteratively testing and removing faults
 Reduce the cost of testing
 The number of test cases is directly proportional to the cost
 Objective: produce low-risk software with a smaller number of test cases
What is a Test Case?

 A test case is a simple pair of <input, expected outcome>
 State-less systems:
 A compiler is a stateless system
 Computing the square root of nonnegative numbers is also a stateless task
 Stateless test cases are very simple
 The outcome depends solely on the current input
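A stateless test case really is just an <input, expected outcome> pair. A minimal sketch, using the slide's square-root example (the tolerance-based comparison is an implementation choice, not from the slides):

```python
import math

# each test case is a pair: (input, expected outcome)
test_cases = [
    (0.0, 0.0),
    (4.0, 2.0),
    (9.0, 3.0),
]

# the outcome depends solely on the current input, so the pairs
# can be checked independently and in any order
for x, expected in test_cases:
    actual = math.sqrt(x)
    assert math.isclose(actual, expected), f"failed for input {x}"
```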
What is a Test Case? Cont.

 State-oriented: an ATM is a state-oriented system
 In state-oriented systems, the program outcome depends both on the current state of the system and on the current input, so a test case may consist of a sequence of <input, expected outcome> pairs
 ATM example:
 <check balance, $500.00>,
 <withdraw, "amount?">,
 <$200.00, "$200.00">,
 <check balance, $300.00>
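The ATM sequence above can be executed against a toy model. The `Atm` class here is entirely hypothetical, just enough state to make the slide's four-step test case pass in order; a real ATM interface would differ.

```python
class Atm:
    """Minimal, hypothetical ATM model (illustrative only)."""

    def __init__(self, balance):
        self.balance = balance
        self.awaiting_amount = False

    def handle(self, request):
        if self.awaiting_amount:
            # previous input was "withdraw"; this input is the amount
            amount = float(request.strip("$"))
            self.balance -= amount
            self.awaiting_amount = False
            return f"${amount:.2f}"
        if request == "check balance":
            return f"${self.balance:.2f}"
        if request == "withdraw":
            self.awaiting_amount = True
            return "amount?"
        return "unknown request"

# one state-oriented test case: a SEQUENCE of <input, expected outcome> pairs
script = [
    ("check balance", "$500.00"),
    ("withdraw", "amount?"),
    ("$200.00", "$200.00"),
    ("check balance", "$300.00"),
]
atm = Atm(500.00)
for request, expected in script:
    assert atm.handle(request) == expected
```

The same input ("check balance") produces different outcomes at different points in the sequence, which is exactly why the pairs must be run in order.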
Expected Outcome

 An outcome of program execution may include:
 A value produced by the program
 A state change
 A sequence of values which must be interpreted together for the outcome to be valid
 A test oracle is a mechanism that verifies the correctness of program outputs; it
 Generates expected results for the test inputs
 Compares the expected results with the actual results of executing the program
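One common way to build a test oracle is to use a trusted reference routine to generate the expected results. A sketch, assuming a hypothetical integer-square-root function under test and Python's `math.isqrt` as the trusted reference:

```python
import math

def isqrt_under_test(n):
    """Hypothetical implementation under test: floor of the square root."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

# the oracle (1) generates the expected result for each test input
# and (2) compares it with the actual result of execution
for n in range(200):
    expected = math.isqrt(n)        # oracle: trusted reference result
    actual = isqrt_under_test(n)    # actual result of the program
    assert actual == expected, f"mismatch at n={n}"
```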
The Concept of Complete Testing

 Complete or exhaustive testing means "there are no undisclosed faults at the end of the test phase"
 Complete testing is near impossible for most systems:
 The domain of possible inputs of a program is too large
 Valid inputs
 Invalid inputs
 The design issues may be too complex to test completely, e.g. implicit design decisions: a programmer may use a global variable to control program execution
 It may not be possible to create all possible execution environments of the system, such as weather, temperature, altitude, and pressure
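A back-of-envelope calculation makes the "input domain is too large" point concrete. The throughput figure below is an assumption chosen for illustration:

```python
# exhaustively testing a function of just TWO 32-bit integer arguments
inputs_per_arg = 2 ** 32
total_cases = inputs_per_arg ** 2        # 2**64, about 1.8e19 test cases

tests_per_second = 10 ** 9               # assumption: a billion tests per second
seconds = total_cases / tests_per_second
years = seconds / (60 * 60 * 24 * 365)   # roughly 585 years

print(f"{total_cases:.2e} cases, about {years:.0f} years")
```

Even at this optimistic rate, exhaustive testing of a two-argument function would take centuries, before considering invalid inputs or environmental variation.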
The Central Issue in Testing

 Discovering all faults is desirable but near impossible
 Therefore, selecting a subset of the input domain that exercises a subset of the program's behavior can give the desired results
 Divide the input domain D into subsets D1 and D2
 Select the subset D1 of D to test program P
 It is possible that D1 exercises only a part P1 of P
Testing Activities

Different activities in the testing process
Testing Activities Cont.

 Identify the objective to be tested
 Select inputs
 Compute the expected outcome
 Set up the execution environment of the program
 Execute the program
 Analyze the test results
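The six activities above can be sketched as a minimal test harness. The function `add` and the chosen inputs are hypothetical placeholders for a real unit under test:

```python
# hypothetical unit under test
def add(a, b):
    return a + b

# 1. identify the objective: verify that add returns the arithmetic sum
# 2. select inputs
inputs = [(1, 2), (0, 0), (-1, 1)]
# 3. compute the expected outcomes
expected = [3, 0, 0]
# 4. set up the execution environment (nothing needed for a pure function)
# 5. execute the program
results = [add(a, b) for a, b in inputs]
# 6. analyze the test results
failures = [(i, r, e)
            for i, (r, e) in enumerate(zip(results, expected)) if r != e]
assert not failures, f"failing cases: {failures}"
```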
Testing Level

Development and testing phases in the V-model
Testing Level Cont.

 Unit testing
 Tests individual program units, such as procedures and methods, in isolation
 Integration testing
 Modules are assembled into larger subsystems and tested
 System testing
 Includes a wide spectrum of testing, such as functionality and load
 Acceptance testing
 Tests the customer's expectations of the system
 Two types of acceptance testing: UAT and BAT
 UAT (user acceptance testing): the system satisfies the contractual acceptance criteria
 BAT (business acceptance testing): the system will eventually pass the user acceptance test
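As an illustration of the lowest level, a unit test exercises one procedure in isolation. A sketch using Python's standard `unittest` module; the function `is_leap_year` is a hypothetical unit, not from the slides:

```python
import unittest

def is_leap_year(year):
    """Hypothetical unit under test."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class IsLeapYearTest(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundred_is_leap(self):
        self.assertTrue(is_leap_year(2000))

# run the unit tests programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(IsLeapYearTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The unit is tested with no other modules involved; integration and system testing would then exercise it in combination with the rest of the software.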
Testing Level Cont.

Regression testing at different software testing levels


Testing Level Cont.

 Regression testing is another level of testing that is performed throughout the life cycle of a system
 Regression testing is performed whenever a component of the system is modified
 New test cases are not designed
 Instead, existing tests are selected, prioritized, and executed
 The goal is to ensure that nothing is broken in the new version of the software
Source of Information for Test Selection

 Source code
 Input and output domains
 Operational profile
 Fault model
Source of Information for Test Selection Cont.

 Source code
 Requirement specifications describe the intended behaviour of the system, whereas the source code describes its actual behaviour
 Though designers produce a detailed design, the programmer may add details; for example, to sort an array the programmer can use iteration, recursion, etc.
 Therefore, test cases must be designed based on the program
Source of Information for Test Selection Cont.

 Input and output domains
 Some values in the input domain of a program have special meanings and hence must be treated separately, e.g.:
 The factorial of a nonnegative integer n is computed as follows:
 factorial(0) = 1;
 factorial(1) = 1;
 factorial(n) = n * factorial(n-1);
 A programmer may wrongly implement the factorial function as
 factorial(n) = 1 * 2 * ... * n;
 without considering the special case of n = 0
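The factorial example can be made executable. Below, `factorial` follows the definition on the slide, while `factorial_buggy` is one plausible faulty implementation of "1 * 2 * ... * n" that mishandles the empty product; a test case must include the special input n = 0 to expose it:

```python
def factorial(n):
    """Correct definition: factorial(0) = 1, factorial(n) = n * factorial(n-1)."""
    if n <= 1:
        return 1
    return n * factorial(n - 1)

def factorial_buggy(n):
    """Computes 1 * 2 * ... * n, but starts the product at n itself,
    so the special case n = 0 yields 0 instead of 1."""
    result = n
    for i in range(1, n):
        result *= i
    return result

# both versions agree for n >= 1 ...
for n in range(1, 7):
    assert factorial(n) == factorial_buggy(n)

# ... so only a test case with the special value n = 0 reveals the fault
print(factorial(0), factorial_buggy(0))  # 1 vs 0
```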
Source of Information for Test Selection Cont.

 Operational profile
 A quantitative characterization of how a system will be used
 Test engineers select test cases (inputs) using samples of system usage
 This testing helps to develop more reliable systems
 Test inputs are assigned a probability distribution, or profile, according to their occurrences in actual operation
 Often used to test web applications
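Selecting test inputs from an operational profile amounts to weighted sampling. A sketch with an invented usage profile (the operations and probabilities below are illustrative assumptions, not measured data):

```python
import random

random.seed(1)  # fixed seed so the sample is reproducible

# hypothetical operational profile: operation -> probability of occurrence
profile = {"check balance": 0.6, "withdraw": 0.3, "transfer": 0.1}

operations = list(profile)
weights = list(profile.values())

# draw 1000 test inputs according to their occurrence in actual operation
sample = random.choices(operations, weights=weights, k=1000)
counts = {op: sample.count(op) for op in operations}
print(counts)
```

The most frequently used operations receive the most test effort, which is why profile-based testing tends to improve the reliability users actually observe.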
Source of Information for Test Selection Cont.

 Fault model
 Previously encountered faults are an excellent source of information for designing new test cases
 The known faults are classified into different classes, such as initialization faults, logic faults, and interface faults, and stored in a repository
 There are three types of fault-based testing:
 Error guessing: the test engineer uses experience to guess faults and designs tests to expose them
 Fault seeding: known faults are seeded into the program to check the effectiveness of the test suite (also called fault injection)
 Mutation analysis: program statements are altered to check the effectiveness of the test suite (also called fault simulation)
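Mutation analysis can be sketched in a few lines: alter one program statement and check whether the existing test suite detects ("kills") the mutant. The function and mutation operator below are illustrative choices:

```python
def original(a, b):
    """Return the larger of a and b."""
    return a if a > b else b

def mutant(a, b):
    """Mutated copy: the relational operator '>' was altered to '<'."""
    return a if a < b else b

# an existing test suite: (arguments, expected outcome) pairs
suite = [((1, 2), 2), ((5, 3), 5)]

def passes(f):
    return all(f(*args) == expected for args, expected in suite)

assert passes(original)      # the original program passes the suite
assert not passes(mutant)    # the suite kills the mutant: it is effective here
```

A mutant that survives every test would point at a weakness in the suite, suggesting a new test case to add.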
White-box Testing

 White-box testing is also known as structural testing
 It examines the source code with a focus on:
 Control flow
 Data flow
 Control flow refers to the flow of control from one instruction to another
 Data flow refers to the propagation of values from one variable or constant to another variable
 It is applied to individual units of a program
 Software developers perform structural testing on the individual program units they write
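A tiny control-flow example: structural tests are chosen by looking at the code's branches, so a unit with one `if` needs at least two inputs to exercise both paths. The function is a hypothetical illustration:

```python
def classify(x):
    """Hypothetical unit: two branches in its control flow."""
    if x < 0:
        return "negative"       # branch 1: taken when x < 0
    return "non-negative"       # branch 2: taken otherwise

# structural test cases, selected from the CODE rather than the specification,
# so that every branch is executed at least once
assert classify(-5) == "negative"      # covers branch 1
assert classify(0) == "non-negative"   # covers branch 2
```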
Black-box Testing

 Black-box testing is also known as functional testing
 It examines the program as it is accessible from the outside
 It applies inputs to a program and observes the externally visible outcomes
 It is applied both to an entire program and to individual program units
 It is performed at the external interface level of a system
 It is typically conducted by a separate software quality assurance group
Test Planning and Design

 The purpose is to get ready and organized for test execution
 A test plan provides a:
 Framework
 A set of ideas, facts, or circumstances within which the tests will be conducted
 Scope
 The domain or extent of the test activities
 Details of the resources needed
 Effort required
 Schedule of activities
 Budget
Test Planning and Design Cont.

 Test objectives are identified from different sources
 Each test case is designed as a combination of modular test components called test steps
 Test steps are combined to create more complex tests
Monitoring and Measuring Test Execution

 Metrics for monitoring test execution
 Metrics for monitoring defects
 Test case effectiveness metrics
 Measure the "defect-revealing ability" of the test suite
 Use the metric to improve the test design process
 Test-effort effectiveness metrics
 Based on the number of defects found by the customers that were not found by the test engineers
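One simple way to quantify effort effectiveness is the share of total defects caught before release. The formula and the figures below are illustrative assumptions, not taken from the slides:

```python
# illustrative defect counts (assumed data)
defects_found_in_testing = 45
defects_found_by_customers = 5   # escaped the test effort

total_defects = defects_found_in_testing + defects_found_by_customers

# assumed metric: fraction of all known defects revealed by testing
test_effort_effectiveness = defects_found_in_testing / total_defects

print(f"test-effort effectiveness: {test_effort_effectiveness:.0%}")  # 90%
```

Tracking this ratio over releases shows whether changes to the test design process actually improve the defect-revealing ability of the effort.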
Test Tools and Automation

Benefits of test automation:
 Increased productivity of the testers
 Better coverage of regression testing
 Reduced durations of the testing phases
 Reduced cost of software maintenance
 Increased effectiveness of test cases

Prerequisites for successful automation:
 The test cases to be automated are well defined
 Test tools and an infrastructure are in place
 The test automation professionals have prior successful experience in automation
 An adequate budget has been allocated for the procurement of software tools
Test Team Organization and Management

Structure of test groups


Test Team Organization and Management Cont.

 Hiring and retaining test engineers is a challenging task
 The interview is the primary mechanism for evaluating applicants
 Interviewing is a skill that improves with practice
 To retain test engineers, management must recognize the importance of the testing effort on par with the development effort
Q&A
