
SWEN3165

Software Quality & The Principles of Testing

Mr. Matthew Ormsby

University of the West Indies


Mona, Kingston, Jamaica
Testing Terms
• Error: a human action that produces an incorrect result

• Fault: a manifestation of an error in software


– also known as a defect or bug
– if executed, a fault may cause a failure

• Failure: deviation of the software from its expected delivery or service


– a failure reveals the presence of a defect
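As an illustrative sketch (not part of the original slide), the hypothetical function below contains a fault introduced by a programmer's error; the fault only leads to a failure when an input actually executes the faulty line.

    def average(values):
        """Intended to return the arithmetic mean of a non-empty list."""
        total = 0
        for v in values:
            total += v
        # Fault (defect/bug): the result of a human error when the code was
        # written - dividing by len(values) - 1 instead of len(values).
        return total / (len(values) - 1)

    # Executing the fault with this input produces a failure: the observed
    # result (3.0) deviates from the expected result (2.0).
    print(average([1, 2, 3]))

    # The same fault causes a different failure for a one-element list:
    # a division by zero instead of the element itself.
    # print(average([5]))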
Testing Terms
• Reliability: the probability that software will not cause the failure of the system for
a specified time under specified conditions

– Can a system be fault-free? (zero faults, right first time)


– Can a software system be reliable but still have faults?
– Is a “fault-free” software application always reliable?
What Is Software Quality?
In the context of software engineering, software quality measures how well
the software is designed (quality of design), and how well the software conforms
to that design (quality of conformance). It is often described as the 'fitness for
purpose' of a piece of software.

Software quality includes activities related to both

– Process, and the


– Product
QA ≠ QC
Software Quality: Quality Assurance
• Quality assurance activities are work process oriented.

• They measure the process, identify deficiencies, and suggest improvements.

• The direct results of these activities are changes to the process.

• These changes can range from better compliance with the process to entirely
new processes.
Software Quality: Quality Assurance
• The output of quality control activities is often the input to quality assurance
activities.

• Audits are an example of a QA activity which looks at whether and how the
process is being followed. The end result may be suggested improvements or
better compliance with the process.
Software Quality: Quality Control
• Quality control activities are work product oriented.

• They measure the product, identify deficiencies, and suggest improvements.

• The direct results of these activities are changes to the product.

• These can range from single-line code changes to completely reworking a product from design.
Software Quality: Quality Control
• They evaluate the product, identify weaknesses and suggest improvements.

• Testing and reviews are examples of QC activities since they usually result in
changes to the product, not the process.

• QC activities are often the starting point for quality assurance (QA) activities.
Which is better: QA or QC?
Prevention is better than cure . . .

. . . but not everything can be prevented!

Prevention, Detection, Cure
Quality... it’s all about the End-User

• Does this software product work as advertised?
– Functionality Testing
– Performance Testing
– System Testing
– User Acceptance Testing

• Will the users be able to do their jobs using this product?
– Compatibility Testing
– Load Testing
– Stress Testing

• Can they bet their business on this software product?
– Reliability Testing
– Security Testing
– Scalability Testing
Software Quality
There are two main approaches to software quality:

– defect management
– quality attributes


Software Quality: Defect Management Approach
A software defect can be regarded as any failure to address end-user
requirements. Common defects include missed or misunderstood requirements
and errors in design, functional logic, data relationships, process timing, validity
checking, and coding.

The software defect management approach is based on counting and managing
defects. Defects are commonly categorized by severity, and the numbers in each
category are used for planning. More mature software development organizations
use tools, such as defect leakage matrices (for counting the numbers of defects
that pass through development phases prior to detection) and control charts, to
measure and improve development process capability.
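A minimal sketch of the counting side of this approach, using made-up defect records and severity categories (the identifiers and field layout are assumptions for illustration only):

    from collections import Counter

    # Hypothetical defect log: (identifier, severity, phase in which it was detected)
    defects = [
        ("D-101", "critical", "system test"),
        ("D-102", "minor", "code review"),
        ("D-103", "major", "system test"),
        ("D-104", "minor", "acceptance test"),
    ]

    # Count defects per severity category; the totals feed planning decisions.
    by_severity = Counter(severity for _, severity, _ in defects)
    print(by_severity)  # Counter({'minor': 2, 'critical': 1, 'major': 1})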
Software Quality: Quality Attributes Approach
This approach to software quality is best exemplified by fixed quality models, such
as ISO/IEC 25010:2011. This standard describes a hierarchy of eight quality
characteristics, each composed of sub-characteristics:
Why do faults occur in software?
• Software is written by human beings
– who know something, but not everything
– who have skills, but aren’t perfect
– who do make mistakes (errors)

• Under increasing pressure to deliver to strict deadlines


– no time to check but assumptions may be wrong
– systems may be incomplete
What do software faults cost?
• Huge sums
– Ariane 5 ($7 billion)
– Mariner space probe to Venus ($250m)
– American Airlines ($50m)

• Very little or nothing at all


– minor inconvenience
– no visible or physical detrimental impact

• Software is not “linear”:


– small input may have very large effect
Safety-critical systems
Software faults can cause death or injury
– radiation treatment kills patients (Therac-25)
– train driver killed (fault in train brake system)
– aircraft crashes (Airbus & Korean Airlines)
– bank system overdraft letters cause suicide
So why is testing necessary?

• because software is likely to have faults


• to learn about the reliability of the software
• to fill the time between delivery of the software and the release date
• to prove that the software has no faults
• because testing is included in the project plan
• because failures can be very expensive
• to avoid being sued by customers
• to stay in business
Should we just test everything?

Let’s discuss …
How much testing is enough?

• it’s never enough


• when you have done what you planned
• when your customer/user is happy
• when you have proved that the system works correctly
• when you are confident that the system works correctly
• it depends on the risks for your system
How much testing?

• It depends on RISK
• risk of missing important faults
• risk of incurring failure costs
• risk of releasing untested or under-tested software
• risk of losing credibility and market share
• risk of missing a market window
• risk of over-testing, ineffective testing
So little time, so much to test ...

• test time will always be limited

• use RISK to determine where to place emphasis:
– what to test first
– what to test most
– how thoroughly to test each item
– what not to test (this time)

• use RISK to:
– allocate the time available for testing by prioritising testing ...
Most important principle

Prioritise tests
so that,
whenever you stop testing,
you have done the best testing
in the time available.
Testing and quality

• Testing measures software quality

• Testing can find faults; when they are removed, software quality (and possibly
reliability) is improved

• What does testing test?


• system function, correctness of operation
• non-functional qualities: reliability, usability, maintainability, reusability, testability, etc.
Other factors that influence testing

• Contractual requirements

• Legal requirements

• Industry-specific requirements
• e.g. pharmaceutical industry (FDA), compiler standard tests, safety-critical or safety-related
such as railroad switching, air traffic control

It is difficult to determine
how much testing is enough
but it is not impossible
Fundamental Test Process
Test Planning - different levels

• Test Policy (company level)

• Test Strategy (company level)

• High Level Test Plan (project level; one for each project)

• Detailed Test Plan (test stage level, IEEE 829; one for each stage within a project, e.g. Component, System, etc.)
The Test Process

Planning (detailed level) → specification → execution → recording → check completion
Test Planning

• How the test strategy and project test plan apply to the software under test

• Document any exceptions to the test strategy


• e.g. only one test case design technique needed for this functional area because it is less
critical

• Other software needed for the tests, such as stubs and drivers, and
environment details (see the sketch below)

• Set test completion criteria
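A stub stands in for a component that the software under test calls; a driver calls the software under test and checks the outcome. The sketch below is purely illustrative, with invented names, and is not part of the original slides.

    # Component under test: decides whether an order can be shipped.
    def can_ship(order_id, stock_service):
        return stock_service.items_in_stock(order_id) > 0

    # Stub: replaces the real stock service with canned answers.
    class StockServiceStub:
        def items_in_stock(self, order_id):
            return 5 if order_id == "A1" else 0

    # Driver: exercises the component under test and checks the results.
    if __name__ == "__main__":
        stub = StockServiceStub()
        assert can_ship("A1", stub) is True
        assert can_ship("B2", stub) is False
        print("component checks passed")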


Test Specification

Planning (detailed level) → specification → execution → recording → check completion

Specification: identify conditions → design test cases → build tests
A good test case
• Effective: finds faults
• Exemplary: represents others
• Evolvable: easy to maintain
• Economic: cheap to use
Test Specification

Test specification can be broken down into three distinct tasks:

1. identify: determine ‘what’ is to be tested (identify test conditions) and prioritise

2. design: determine ‘how’ the ‘what’ is to be tested (i.e. design test cases)

3. build: implement the tests (data, scripts, etc.)


Task 1: identify conditions
(determine ‘what’ is to be tested and prioritise)
• list the conditions that we would like to test:
– use the test design techniques specified in the test plan
– there may be many conditions for each system function or attribute
– e.g.
• “life assurance for a winter sportsman”
• “number items ordered > 99”
• “date = 29-Feb-2004”

• prioritise the test conditions


– must ensure most important conditions are covered
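One possible way to record the identified conditions and their priorities, reusing the example conditions above (the identifiers and the high/medium/low scheme are assumptions for illustration):

    # Test conditions identified for an order-entry function; priorities
    # would normally come from risk analysis with the stakeholders.
    test_conditions = [
        {"id": "TC-01", "condition": "life assurance for a winter sportsman", "priority": "high"},
        {"id": "TC-02", "condition": "number of items ordered > 99", "priority": "high"},
        {"id": "TC-03", "condition": "date = 29-Feb-2004 (leap day)", "priority": "medium"},
    ]

    # Cover the most important conditions first.
    rank = {"high": 0, "medium": 1, "low": 2}
    for tc in sorted(test_conditions, key=lambda tc: rank[tc["priority"]]):
        print(tc["id"], tc["priority"], "-", tc["condition"])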
Selecting test conditions

(Graph of importance against time: the best set of test conditions covers the most important ones in the time available, not just the first set identified.)
Task 2: design test cases
(determine ‘how’ the ‘what’ is to be tested)
• design test input and test data
– each test exercises one or more test conditions

• determine expected results
– predict the outcome of each test case: what is output, what is changed and what is not changed

• design sets of tests
– different test sets for different objectives, such as regression, building confidence, and finding faults
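A sketch of what designed test cases might look like for two of the conditions above, with expected results predicted in advance (the identifiers and the behaviour at the 99-item limit are assumptions, not taken from the slides):

    # Each test case pairs concrete inputs with a predicted expected result.
    test_cases = [
        {"id": "T-10", "condition": "items ordered > 99", "input": {"items": 100},
         "expected": "order rejected with a 'quantity too large' message"},
        {"id": "T-11", "condition": "items ordered = 99 (boundary)", "input": {"items": 99},
         "expected": "order accepted"},
        {"id": "T-12", "condition": "leap-day date", "input": {"date": "29-Feb-2004"},
         "expected": "date accepted as valid"},
    ]

    for tc in test_cases:
        print(f"{tc['id']}: input {tc['input']} -> expect: {tc['expected']}")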
Designing test cases

(Graph of importance against time: design test cases for the most important test conditions first, leaving the least important until last.)
Task 3: build test cases
(implement the test cases)
• prepare test scripts
– the less system knowledge the tester has, the more detailed the scripts will have to be
– scripts for tools have to specify every detail

• prepare test data


– data that must exist in files and databases at the start of the tests

• prepare expected results


– should be defined before the test is executed
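As a sketch of the build step, the fragment below prepares test data and an expected result before execution; the order-handling function it targets is hypothetical and exists only to make the example self-contained:

    # Test data that must exist before the test runs (an in-memory stand-in
    # for records that would normally be loaded into a file or database).
    customers = {"C-001": {"name": "A. Tester", "credit_ok": True}}

    # Expected result, defined before the test is executed.
    expected = "order accepted"

    def place_order(customer_id, items):
        # Hypothetical software under test.
        if customers[customer_id]["credit_ok"] and items <= 99:
            return "order accepted"
        return "order rejected"

    actual = place_order("C-001", items=10)
    print("PASS" if actual == expected else f"FAIL: got {actual}")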
Test execution

Planning (detailed level) → specification → execution → recording → check completion
Execution
• Execute prescribed test cases
– most important ones first
– would not execute all test cases if
• testing only fault fixes
• too many faults found by early test cases
• time pressure
– can be performed manually or automated
Test recording

Planning (detailed level) → specification → execution → recording → check completion
Test recording 1
• The test record contains:
– identities and versions (unambiguously) of
• software under test
• test specifications

• Follow the plan


– mark off progress on test script
– document actual outcomes from the test
– capture any other ideas you have for new test cases
– note that these records are used to establish that all test activities have been carried out as
specified
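A minimal sketch of the kind of information such a record might hold; the field names and values are illustrative, not a prescribed format:

    test_record = {
        "software_under_test": "order-entry v2.3.1",  # unambiguous identity and version
        "test_specification": "OE-TS-004 rev B",
        "test_case": "T-10",
        "actual_outcome": "order accepted",           # logged as observed, not as hoped
        "expected_outcome": "order rejected with a 'quantity too large' message",
        "status": "discrepancy logged",
        "notes": "new test idea: try an order of 0 items",
    }
    for field, value in test_record.items():
        print(f"{field}: {value}")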
Test recording 2
• Compare actual outcome with expected outcome. Log discrepancies
accordingly:
– software fault
– test fault (e.g. expected results wrong)
– environment or version fault
– test run incorrectly
• Log coverage levels achieved (for measures specified as test completion criteria)
• After the fault has been fixed, repeat the required test activities (execute, design,
plan)
Check test completion

Planning (detailed level) → specification → execution → recording → check completion
Check test completion
• Test completion criteria were specified in the test plan
• If not met, need to repeat test activities, e.g. test specification to design more
tests

Coverage too low

check
specification execution recording
completion
Coverage
OK
Test completion criteria
• Completion or exit criteria apply to all levels of testing - to determine when to
stop
– coverage, using a measurement technique, e.g.
• branch coverage for unit testing
• user requirements
• most frequently used transactions
– faults found (e.g. versus expected)
– cost or time
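A sketch of how such criteria could be checked mechanically at the end of a test stage; the thresholds and measured values are invented for illustration:

    # Exit criteria agreed in the test plan (illustrative values).
    criteria = {"branch_coverage_pct": 80, "requirements_covered_pct": 100}

    # Measurements gathered from the test run so far.
    measured = {"branch_coverage_pct": 72, "requirements_covered_pct": 100}

    not_met = [name for name, target in criteria.items() if measured[name] < target]
    if not_met:
        print("completion criteria not met:", ", ".join(not_met))
        print("-> return to test specification and design more tests")
    else:
        print("completion criteria met: this test stage can stop")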
Comparison of tasks

• Planning and Specification: intellectual, one-off activities; they govern the quality of the tests
• Execution and Recording: clerical activities, repeated many times; good to automate
Psychology of Testing
Why test?
• build confidence
• prove that the software is correct
• demonstrate conformance to requirements
• find faults
• reduce costs
• show system meets user needs
• assess the software quality
Confidence

(Graph of confidence and faults found, plotted over time.)

No faults found = confidence?


Assessing software quality

(Matrix of test quality against software quality:)
• high-quality tests, low-quality software: many faults found
• high-quality tests, high-quality software: few faults found (“you think you are here”)
• low-quality tests, high-quality software: few faults found
• low-quality tests, low-quality software: few faults found (“you may be here”)
A traditional testing approach
• Show that the system:
– does what it should
– doesn't do what it shouldn't

Goal: show working


Success: system works

Fastest achievement: easy test cases

Result: faults left in


A better testing approach
• Show that the system:
– does what it shouldn't
– doesn't do what it should

Goal: find faults


Success: system fails

Fastest achievement: difficult test cases

Result: fewer faults left in


The testing paradox

Purpose of testing: to find faults


Finding faults destroys confidence
Purpose of testing: destroy confidence

Purpose of testing: build confidence

The best way to build confidence


is to try to destroy it
Who wants to be a tester?
• A destructive process
• Bring bad news (“your baby is ugly”)
• Under worst time pressure (at the end)
• Need to take a different view, a different mindset (“What if it isn’t?”, “What could
go wrong?”)
• How should fault information be communicated (to authors and managers?)
Testers have the right to:
– accurate information about progress and changes
– insight from developers about areas of the software
– delivered code tested to an agreed standard
– be regarded as a professional (no abuse!)
– find faults!
– challenge specifications and test plans
– have reported faults taken seriously (even those that are not easily reproducible)
– make predictions about future fault levels
– improve your own testing process
Testers have responsibility to:
– follow the test plans, scripts etc. as documented
– report faults objectively and factually (no abuse!)
– check tests are correct before reporting s/w faults
– remember it is the software, not the programmer, that you are testing
– assess risk objectively
– prioritise what you report
– communicate the truth
Independence
• Test your own work?
– find 30% - 50% of your own faults
– same assumptions and thought processes
– see what you meant or want to see, not what is there
– emotional attachment
• don’t want to find faults
• actively want NOT to find faults
Levels of independence
• None: tests designed by the person who wrote the software
• Tests designed by a different person
• Tests designed by someone from a different department or team (e.g. test team)
• Tests designed by someone from a different organisation (e.g. agency)
• Tests generated by a tool (low quality tests?)
Re-testing and regression testing
Re-testing after faults are fixed
• Run a test, it fails, fault reported
• New version of software with fault “fixed”
• Re-run the same test (i.e. re-test)
– must be exactly repeatable
– same environment, versions (except for the software which has been intentionally changed!)
– same inputs and preconditions
• If test now passes, fault has been fixed correctly - or has it?
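A sketch of an exactly repeatable re-test for a previously failed test; the bug identifier and the discount function are hypothetical:

    def discount(total):
        # Version 2 of the code, with reported fault "BUG-42" fixed: the 10%
        # discount now applies at exactly 100, not only above it.
        return total * 0.9 if total >= 100 else total

    # Re-test: repeat the same input and precondition that originally failed.
    def test_bug_42_discount_at_exact_boundary():
        assert discount(100) == 90.0

    test_bug_42_discount_at_exact_boundary()
    print("re-test passed: BUG-42 appears to be fixed")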
Re-testing (re-running failed tests)

• re-testing confirms that the reported fault is now fixed
• new faults introduced by the fault fix are not found during re-testing

Regression test

• looks for any unexpected side-effects of the change
• can’t guarantee to find them all
Regression testing 1
• a misnomer: really "anti-regression" or "progression" testing
• standard set of tests - regression test pack
• at any level (unit, integration, system, acceptance)
• well worth automating
• a developing asset but needs to be maintained
Regression testing 2
• Regression tests are performed
– after software changes, including faults fixed
– when the environment changes, even if application functionality stays the same
– for emergency fixes (possibly a subset)
• Regression test suites
– evolve over time
– are run often
– may become rather large
Regression testing 3
• Maintenance of the regression test pack
– eliminate repetitive tests (tests which test the same test condition)
– combine test cases (e.g. if they are always run together)
– select a different subset of the full regression suite to run each time a regression test is
needed
– eliminate tests which have not found a fault for a long time (e.g. old fault fix tests)
Regression testing and automation
• Test execution tools (e.g. capture replay) are regression testing tools - they re-
execute tests which have already been executed
• Once automated, regression tests can be run as often as desired (e.g. every
night)
• Automating tests is not trivial (it generally takes 2 to 10 times longer to automate a test than to run it manually)
• Don’t automate everything - plan what to automate first, only automate if
worthwhile
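As one common way to automate such a pack, the sketch below uses Python's standard unittest module; the two functions under test are invented stand-ins for real application code:

    import unittest

    # Hypothetical functions under test.
    def add_vat(amount):
        return round(amount * 1.15, 2)

    def order_total(items):
        return sum(items)

    class RegressionPack(unittest.TestCase):
        """Standard set of tests, re-run after every change (e.g. nightly)."""

        def test_vat_calculation_unchanged(self):
            self.assertEqual(add_vat(100.0), 115.0)

        def test_order_total_unchanged(self):
            self.assertEqual(order_total([1, 2, 3]), 6)

    if __name__ == "__main__":
        unittest.main()  # could be scheduled to run every night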
Expected Results
Expected results
• Should be predicted in advance as part of the test design process
– the ‘Oracle Assumption’: the correct outcome of a test can be predicted
• Why not just look at what the software does and assess it at the time?
– subconscious desire for the test to pass - less work to do, no incident report to write up
– it looks plausible, so it must be OK - less rigorous than calculating in advance and
comparing
A test: inputs and expected outputs

A Program:
    Read A
    IF (A = 8) THEN
      PRINT (“10”)
    ELSE
      PRINT (2*A)

Test inputs and expected outputs:
• input 3, expected output 6?
• input 8, expected output 10?

Source: Carsten Jorgensen, Delta, Denmark
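To make the point concrete, here is a sketch of the slide's program in Python, assuming the intended behaviour is to print double the input. Predicting the expected results in advance (6 for input 3, 16 for input 8) exposes the fault, whereas the plausible-looking output 10 might simply be accepted if judged only at execution time.

    def program(a):
        # Direct translation of the slide's program, including its fault.
        if a == 8:
            return 10
        return 2 * a

    # Oracle: expected results predicted in advance from the assumed rule "output 2*A".
    for test_input, expected in [(3, 6), (8, 16)]:
        actual = program(test_input)
        verdict = "pass" if actual == expected else "FAIL"
        print(f"input {test_input}: expected {expected}, actual {actual} -> {verdict}")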


Prioritising tests
• We can’t test everything
• There is never enough time to do all the testing you would like
• So what testing should you do?
Most important principle

Prioritise tests
so that,
whenever you stop testing,
you have done the best testing
in the time available.
How to prioritise?
• Possible ranking criteria (all risk based)
– test where a failure would be most severe
– test where failures would be most visible
– test where failures are most likely
– ask the customer to prioritise the requirements
– what is most critical to the customer’s business
– areas changed most often
– areas with most problems in the past
– most complex areas, or technically critical
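One simple way to turn such criteria into an ordering is a risk score per area, for example likelihood of failure multiplied by impact of failure; the areas and ratings below are invented for illustration:

    # Risk-based prioritisation: likelihood and impact rated 1 (low) to 5 (high).
    areas = [
        {"area": "payment processing", "likelihood": 3, "impact": 5},
        {"area": "report formatting", "likelihood": 4, "impact": 2},
        {"area": "password reset", "likelihood": 2, "impact": 4},
    ]

    for a in areas:
        a["risk"] = a["likelihood"] * a["impact"]

    # Test the highest-risk areas first and most thoroughly.
    for a in sorted(areas, key=lambda a: a["risk"], reverse=True):
        print(f"{a['area']}: risk score {a['risk']}")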
Key Points
• Testing is necessary because people make errors

• The test process: planning, specification, execution, recording, checking completion

• Independence & relationships are important in testing

• Re-test fixes; regression test for the unexpected

• Expected results should be defined from a specification in advance

• Prioritise to do the best testing in the time you have
