
TCSS 360, Spring 2005 Lecture Notes

Testing

Relevant Reading:
Object-Oriented Software Engineering, Ch. 9
B. Bruegge, A. Dutoit
1

Case study: Scrabble moves

Let's think about code to validate moves on a Scrabble board.

Where can we start a move? Where can tiles be in relation to the starting tile? How do we compute scores for a move? How do we do word challenges?
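
As a starting point for discussion, here is a partial sketch of a move validator. The board representation (a 15x15 char grid with '\0' for empty squares), the class name, and the simplified rules are assumptions for illustration, not the course's solution:

```java
// Hypothetical sketch: a very partial check of where a move may start.
// Simplifications: real Scrabble only requires the first word to *cover* the
// center square, and later words only need to connect somewhere along their length.
public class MoveValidator {
    private static final int SIZE = 15;
    private static final int CENTER = 7;

    public boolean canStartAt(char[][] board, int row, int col) {
        if (row < 0 || row >= SIZE || col < 0 || col >= SIZE || board[row][col] != '\0') {
            return false;                           // off the board, or square already occupied
        }
        if (isBoardEmpty(board)) {
            return row == CENTER && col == CENTER;  // simplified: first move starts on the center star
        }
        return hasAdjacentTile(board, row, col);    // simplified: later moves start next to an existing tile
    }

    private boolean isBoardEmpty(char[][] board) {
        for (char[] r : board) {
            for (char c : r) {
                if (c != '\0') {
                    return false;
                }
            }
        }
        return true;
    }

    private boolean hasAdjacentTile(char[][] board, int row, int col) {
        int[][] deltas = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
        for (int[] d : deltas) {
            int r = row + d[0], c = col + d[1];
            if (r >= 0 && r < SIZE && c >= 0 && c < SIZE && board[r][c] != '\0') {
                return true;
            }
        }
        return false;
    }
}
```

Even this small fragment already raises several of the questions above, and (as the next slide argues) will almost certainly contain bugs of its own.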
2

Bugs

even if we correctly discover all cases for placing words on the Scrabble board, it is very likely that we'll have some bugs when we code it

bugs are inevitable in any complex software system
a bug can be very visible, or it can hide in your code until a much later date

we can hunt down the cause of a known bug using print statements or our IDE's debugger ... but how do we discover all of the bugs in our system, even those with low visibility?

ANSWER: testing and Quality Assurance practices


3

Testing
What is the overall goal of testing? What claims can we make when testing "passes" or "fails"? Can we prove that our code has no bugs?

testing: systematic attempt to reveal the presence of errors (to "falsify" the system)

accomplished by exercising defects in the system and revealing problems

failed test: an error was demonstrated
passed test: no error was found, so far

testing is not used to show the absence of errors in software
it does not directly reveal the actual bugs in the code
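
As a small, concrete illustration of "passed" vs. "failed", here is a hedged sketch using JUnit 5; the ScoreCalculator class and its letter values are invented for this example:

```java
// Hypothetical sketch: what "failed" and "passed" mean for a single JUnit 5 test.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class ScoreCalculator {
    // Deliberately minimal: only 'C' is worth 3 points, every other letter 1 (Scrabble: C=3, A=1, T=1).
    int score(String word) {
        int total = 0;
        for (char letter : word.toCharArray()) {
            total += (letter == 'C') ? 3 : 1;
        }
        return total;
    }
}

public class ScoreCalculatorTest {
    @Test
    public void testScoreOfCat() {
        // Failed test: if score("CAT") returns anything other than 5, an error is demonstrated.
        // Passed test: if it returns 5, no error was found, so far; other inputs might still reveal one.
        assertEquals(5, new ScoreCalculator().score("CAT"));
    }
}
```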
4

Difficulties of testing

testing is seen as a beginner's job, often assigned to the least experienced team members
testing is often done as an afterthought (if at all)
testing cannot be "conquered"; it is impossible to completely test a system

example: Space Shuttle Columbia launch, 1981: a programmer changed a 50 ms delay to 80 ms, leading to a 1/67 chance of launch failure
top-down testing was dismissed by the shuttle developers, because when an error is found it is impossible to fix without redesign or great expense
5

Software reliability
What is software reliability? How do we measure reliability?

reliability: how closely the system conforms to expected behavior

software reliability: the probability that a software system will not cause failure for a specified time under specified conditions

reliability is measured by uptime, MTTF (mean time to failure), and program crash data
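
As a worked example of the MTTF measure (the numbers below are made up): if a system runs 120, 250, 310, and 320 hours between successive failures, its observed MTTF is 1000 / 4 = 250 hours. A sketch of the computation:

```java
// Hypothetical sketch: estimating MTTF from observed failure-free run times.
public class ReliabilityStats {
    /** Mean time to failure: total operating hours divided by the number of failures. */
    public static double meanTimeToFailure(double[] hoursBetweenFailures) {
        double total = 0;
        for (double hours : hoursBetweenFailures) {
            total += hours;
        }
        return total / hoursBetweenFailures.length;
    }

    public static void main(String[] args) {
        // (120 + 250 + 310 + 320) / 4 = 250 hours
        System.out.println(meanTimeToFailure(new double[] {120, 250, 310, 320}));
    }
}
```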
6

Faults
What is the difference between a fault and an error? What are some kinds of faults?

error: incorrect software behavior

example: message box text said "Welcome null."
example: account name field is not set properly.

fault: mechanical or algorithmic cause of an error

a fault is not an error, but it can lead to one
we need requirements to specify desired behavior, and we need to see the system deviate from that behavior, to have a failure
7
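
A hedged sketch of the "Welcome null." example above; the class and field names are invented, but they show how an algorithmic fault (the name field is never set) produces the visible error:

```java
// Hypothetical sketch: a fault (accountName never initialized) causing an error
// (the user-visible text "Welcome null.").
public class WelcomeScreen {
    private String accountName;   // fault: the login code never sets this field

    public String welcomeMessage() {
        // error: with accountName still null, the user sees "Welcome null."
        return "Welcome " + accountName + ".";
    }

    public static void main(String[] args) {
        System.out.println(new WelcomeScreen().welcomeMessage());   // prints: Welcome null.
    }
}
```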


Some types of faults

algorithmic faults

design produces a poor algorithm
fail to implement the software to match the spec
subsystems don't communicate properly

mechanical faults

earthquake
virtual machine failure (why is this a "mechanical" fault?)


8

Quality control techniques


Any large system is bound to have faults. What are some quality control techniques we can use to deal with them?

fault avoidance: prevent errors by finding faults before the system is released
fault detection: find existing faults without recovering from the errors
fault tolerance: the system can recover from failures by itself
9

Fault avoidance techniques

development methodologies: use requirements and design to minimize introduction of faults

get clear requirements
minimize coupling

configuration management: don't allow changes to subsystem interfaces
verification: find faults in system execution

problems: not mature yet, assumes requirements are correct, assumes pre/postconditions are adequate

review: manual inspection of the system
walkthrough: the developer presents code to the team
inspection: the team looks at the code without the developer's guidance

shown effective at finding errors
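
To make the pre/postcondition idea under verification concrete, here is a hedged sketch using Java assertions; the Account class and its conditions are invented for illustration:

```java
// Hypothetical sketch: stating pre- and postconditions with Java assertions.
// Run with assertions enabled: java -ea ...
public class Account {
    private int balanceCents;

    public void withdraw(int amountCents) {
        assert amountCents > 0 : "precondition: amount must be positive";
        assert amountCents <= balanceCents : "precondition: cannot overdraw";
        int before = balanceCents;
        balanceCents -= amountCents;
        assert balanceCents == before - amountCents : "postcondition: balance reduced by exactly the amount";
    }
}
```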

10

Fault detection techniques

fault detection: find existing faults without recovering from the errors
debugging: move through program steps to reach the erroneous state
testing: tries to expose errors in a planned way (we are here)

a good test model has test cases and test data that identify errors
ideally, every possible input to a system should be tested, but this is prohibitively time-consuming
11

Kinds of testing
What is the difference between "unit" testing, "integration" testing, and so on? Why do we use many different kinds of tests?

unit testing: looks for errors in objects or subsystems
integration testing: finds errors when connecting subsystems together

system structure testing: integration testing of all parts of the system together

system testing: tests the entire system's behavior as a whole, with respect to scenarios and requirements

functional testing: tests whether the system meets its requirements
performance testing: tests nonfunctional requirements and design goals
acceptance / installation testing: done by the client
12

Types of testing (Fig. 9.2, p. 335)

13

Fault tolerance techniques

fault tolerance: recovery from failure by the system itself

modular redundancy: assumes failures usually occur at the subsystem level, and assigns more than one component to the same task

example: database transaction rollbacks

example: a RAID-1 hard disk array uses more than one hard disk to store the same data, so that in case one disk breaks down, the rest still contain the important data
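
A hedged sketch of the transaction-rollback example using plain JDBC; the connection, table, and SQL are invented for illustration:

```java
// Hypothetical sketch: fault tolerance via transaction rollback with JDBC.
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TransferExample {
    public static void transfer(Connection conn) throws SQLException {
        conn.setAutoCommit(false);      // group both updates into a single transaction
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
            stmt.executeUpdate("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
            conn.commit();              // both updates succeeded
        } catch (SQLException e) {
            conn.rollback();            // recover: undo the partial transfer
            throw e;
        }
    }
}
```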

14

Testing concepts
What is a test case? What is a failure? How are they related?

failure: a particular instance of a general error, caused by a fault
test case: a set of inputs and expected outputs used to cause failures
15

Test cases
What are the five elements of a well-written test case, according to the authors? (Hint: one of these is an "oracle." What is this?)

name: descriptive name of what is being tested
location: full path/URL to the test
input: arguments, commands, input files to use

entered by tester or test driver

oracle: expected output
log: actual output produced
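
Revisiting the ScoreCalculator test sketched earlier, its five elements might be spelled out like this (the paths and values are, again, invented):

```java
// Hypothetical sketch: one test case with its five elements called out as comments.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class ScoreCalculatorCaseTest {
    // name:     testScoreOfCat (descriptive name of what is being tested)
    // location: src/test/java/ScoreCalculatorCaseTest.java (where the test lives)
    // input:    the word "CAT", supplied by the test driver
    // oracle:   the expected output, 5 points (C=3, A=1, T=1)
    // log:      the actual output and pass/fail verdict recorded by the test runner
    @Test
    public void testScoreOfCat() {
        assertEquals(5, new ScoreCalculator().score("CAT"));
    }
}
```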


16

Black and white box testing


What is the difference between "black-box" and "white-box" testing?

black-box test: focuses on the input/output behavior of each component
white-box test: focuses on the internal states of objects

requires internal knowledge of the component to craft input data
example: knowing that the internal data structure for a spreadsheet program uses 256 rows and columns, choose a test case that tests 255 or 257, to test near that boundary
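
A hedged white-box sketch of the boundary example; Spreadsheet is an invented class, and 0-based column indices (0..255) are assumed:

```java
// Hypothetical sketch: a white-box boundary test that exploits knowledge of the
// internal 256-column limit.
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

class Spreadsheet {
    static final int MAX_COLUMNS = 256;             // internal limit known to the tester
    boolean isValidColumn(int col) {
        return col >= 0 && col < MAX_COLUMNS;
    }
}

public class SpreadsheetBoundaryTest {
    @Test
    public void testColumnBoundary() {
        Spreadsheet sheet = new Spreadsheet();
        assertTrue(sheet.isValidColumn(255));       // last valid column, just inside the limit
        assertFalse(sheet.isValidColumn(256));      // first invalid column
        assertFalse(sheet.isValidColumn(257));      // just past the boundary
    }
}
```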
17

Test stubs and drivers

test stub: partial implementation on which tested component depends

simulates the parts that are called by the tested component
Bridge pattern (Fig. 9-11, p. 342): isolate subsystems behind interfaces, to facilitate stubs

test driver: code that depends on the test case (it runs the test) and on the tested component

correction: a change made to fix faults (which can introduce new faults)

a correction can be a simple bug fix or a redesign; either is likely to introduce new bugs
problem tracking: documenting each failure and its remedy
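
A hedged sketch showing a stub and a driver together; Dictionary, StubDictionary, WordScorer, and the scoring rule are all invented for illustration:

```java
// Hypothetical sketch: a test stub and a test driver around an interface.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

interface Dictionary {                           // interface isolating the real dictionary subsystem
    boolean isWord(String word);
}

class StubDictionary implements Dictionary {     // test stub: simulates the real dictionary
    public boolean isWord(String word) {
        return word.equals("CAT");               // canned answer, no real lookup
    }
}

class WordScorer {                               // tested component: depends only on Dictionary
    private final Dictionary dictionary;
    WordScorer(Dictionary dictionary) { this.dictionary = dictionary; }
    int score(String word) {
        return dictionary.isWord(word) ? word.length() : 0;   // invented scoring rule
    }
}

public class WordScorerTest {                    // test driver: runs the test case against WordScorer
    @Test
    public void testValidWordScores() {
        assertEquals(3, new WordScorer(new StubDictionary()).score("CAT"));
    }
}
```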

18

Regression testing
What is regression testing, and why is it important?

regression testing: re-executing all prior tests after a code change

often done by scripts / automated testing
used to ensure that old, fixed bugs are still fixed

a new feature or a fix for one bug can cause a new bug or reintroduce an old bug

especially important in evolving object-oriented systems
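
A hedged sketch of a regression test, reusing the MoveValidator sketched on the Scrabble slide; the "old bug" it pins down (negative coordinates used to throw an exception) is invented for illustration:

```java
// Hypothetical sketch: a regression test that keeps a previously fixed bug fixed.
import static org.junit.jupiter.api.Assertions.assertFalse;
import org.junit.jupiter.api.Test;

public class MoveValidatorRegressionTest {
    @Test
    public void negativeCoordinatesAreRejectedNotCrashing() {
        char[][] emptyBoard = new char[15][15];   // all squares empty ('\0')
        // Before the (hypothetical) fix, this call threw ArrayIndexOutOfBoundsException;
        // re-running the test after every change ensures the old bug stays fixed.
        assertFalse(new MoveValidator().canStartAt(emptyBoard, -1, 0));
    }
}
```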


19

For next time...

in the next part of this reading, we'll learn about:

ways to conduct good unit tests that cover many representative cases of the program's usage, without covering every possible input
structures for doing integration between components in integration testing
useful types of system testing
how to plan and document a testing policy
something about sandwiches (?)
20
