SWEN3165 Lecture 1
Software Quality: Quality Assurance
• The output of quality control activities is often the input to quality assurance
activities.
• Audits are an example of a QA activity: they look at whether and how the
process is being followed. The end result may be suggested improvements or
better compliance with the process.
• These changes can range from better compliance with the process to entirely
new processes.
Software Quality: Quality Control
• Quality control activities are work product oriented.
• Testing and reviews are examples of QC activities since they usually result in
changes to the product, not the process.
• QC activities are often the starting point for quality assurance (QA) activities.
Which is better: QA or QC?
Prevention is better than cure: QA aims to prevent faults from being
introduced (prevention), while QC detects faults so that they can be cured
(detection and cure).
Quality... it’s all about the End-User
• Does this software product work as advertised?
– Functionality Testing, Performance Testing, System Testing, User Acceptance Testing
• Will the users be able to do their jobs using this product?
– Compatibility Testing, Load Testing, Stress Testing
• Can they bet their business on this software product?
– Reliability Testing, Security Testing, Scalability Testing
Software Quality
There are two main approaches to software quality: quality assurance (QA)
and quality control (QC).
Let’s discuss …
How much testing is enough?
• It depends on RISK
• risk of missing important faults
• risk of incurring failure costs
• risk of releasing untested or under-tested software
• risk of losing credibility and market share
• risk of missing a market window
• risk of over-testing, ineffective testing
So little time, so much to test ...
Prioritise tests
so that,
whenever you stop testing,
you have done the best testing
in the time available.
Testing and quality
• Testing can find faults; when they are removed, software quality (and possibly
reliability) is improved
• Testing may also be driven by:
– Contractual requirements
– Legal requirements
– Industry-specific requirements
• e.g. pharmaceutical industry (FDA), compiler standard tests, safety-critical or safety-related
systems such as railroad switching and air traffic control
It is difficult to determine
how much testing is enough
but it is not impossible
Fundamental Test Process
Test Planning - different levels
• Test Policy (company level)
• Test Strategy
[Diagram: fundamental test process - planning → specification → execution →
recording → checking for test completion]
Test Planning
• How the test strategy and project test plan apply to the software under test
• Other software needed for the tests, such as stubs and drivers, and
environment details
Test specification tasks: identify test conditions, design test cases, build
test cases
A good test case is:
• Effective - finds faults
• Exemplary - represents others
• Evolvable - easy to maintain
• Economic - cheap to use
Test Specification
[Chart: rank test conditions by importance and build test cases for the most
important first, so that the best set is covered in the time available and only
the least important test conditions are left out]
Task 3: build test cases
(implement the test cases)
• prepare test scripts
– the less system knowledge the tester has, the more detailed the scripts will have to be
– scripts for tools have to specify every detail
Execution
• Execute prescribed test cases
– most important ones first
– would not execute all test cases if
• testing only fault fixes
• too many faults found by early test cases
• time pressure
– can be performed manually or automated
Test recording 1
• The test record contains:
– identities and versions (unambiguously) of
• software under test
• test specifications
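The record above can be sketched as a simple data structure. This is a minimal sketch; all field names and values are illustrative assumptions, not from a standard:

```python
# A test record must identify, unambiguously, the versions of the software
# under test and of the test specification that was executed, plus the outcome.
# Everything below is an illustrative example.
test_record = {
    "software_under_test": {"name": "billing-service", "version": "2.3.1"},
    "test_specification": {"id": "TS-017", "version": "1.4"},
    "executed": "2024-03-02T10:15:00",
    "outcome": "fail",
    "expected_result": "total = 16",
    "actual_result": "total = 10",
}

spec = test_record["test_specification"]
print(f'{spec["id"]} v{spec["version"]}: {test_record["outcome"]}')
# → TS-017 v1.4: fail
```

Recording versions, not just names, is what makes a failure reproducible later: the same test against a different build of the software is a different result.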
Check test completion
• Test completion criteria were specified in the test plan
• If not met, need to repeat test activities, e.g. test specification to design more
tests
Test completion criteria
• Completion or exit criteria apply to all levels of testing - to determine when to
stop
– coverage, using a measurement technique, e.g.
• branch coverage for unit testing
• user requirements
• most frequently used transactions
– faults found (e.g. versus expected)
– cost or time
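A branch-coverage completion criterion can be sketched as follows. This is a hand-instrumented toy, not a real coverage tool, and the unit under test is an illustrative assumption:

```python
# Completion criterion: stop unit testing only when every branch has been taken.
# The branch set is recorded by hand here; real tools instrument automatically.
hit = set()

def classify(n):
    """Unit under test, instrumented to record which branch executes."""
    if n < 0:
        hit.add("negative")
        return "negative"
    else:
        hit.add("non-negative")
        return "non-negative"

ALL_BRANCHES = {"negative", "non-negative"}

for test_input in [5, 0]:   # this suite never takes the 'negative' branch
    classify(test_input)

coverage = len(hit) / len(ALL_BRANCHES)
print(f"branch coverage: {coverage:.0%}")   # → branch coverage: 50%
```

With the criterion "100% branch coverage", this suite does not meet completion: test specification must be revisited to design a test with a negative input.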
Comparison of tasks
• Planning and Specification: intellectual, one-off activities - they govern the
quality of the tests
• Execution and Recording: clerical activities, repeated many times - good to
automate
Psychology of
Testing
Why test?
• build confidence
• prove that the software is correct
• demonstrate conformance to requirements
• find faults
• reduce costs
• show system meets user needs
• assess the software quality
[Chart: confidence grows over time as faults are found and fixed, but high
confidence does not guarantee high software quality - if the testing was weak,
"you may be here": confident, yet with low-quality software]
A traditional testing approach
• Show that the system:
– does what it should
– doesn't do what it shouldn't
[Diagram: a fault is found (✗) and fixed (✓); re-test to check the fix, and
regression test around it - but testing can’t guarantee to find all remaining
faults]
Regression testing 1
• the name is a misnomer: really "anti-regression" or "progression" testing
• standard set of tests - regression test pack
• at any level (unit, integration, system, acceptance)
• well worth automating
• a developing asset but needs to be maintained
Regression testing 2
• Regression tests are performed
– after software changes, including faults fixed
– when the environment changes, even if application functionality stays the same
– for emergency fixes (possibly a subset)
• Regression test suites
– evolve over time
– are run often
– may become rather large
Regression testing 3
• Maintenance of the regression test pack
– eliminate repetitive tests (tests which test the same test condition)
– combine test cases (e.g. if they are always run together)
– select a different subset of the full regression suite to run each time a regression test is
needed
– eliminate tests which have not found a fault for a long time (e.g. old fault fix tests)
Regression testing and automation
• Test execution tools (e.g. capture replay) are regression testing tools - they re-
execute tests which have already been executed
• Once automated, regression tests can be run as often as desired (e.g. every
night)
• Automating tests is not trivial (it generally takes 2 to 10 times longer to
automate a test than to run it manually)
• Don’t automate everything - plan what to automate first, only automate if
worthwhile
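An automated regression pack can be sketched like this. The function under test and the pack contents are illustrative assumptions; a real pack would use a test framework and grow over many releases:

```python
# Function under test (illustrative).
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# Regression pack: (inputs, expected output) pairs accumulated over time,
# including a test added when an earlier fault was fixed.
REGRESSION_PACK = [
    ((100.0, 10), 90.0),
    ((19.99, 0), 19.99),
    ((50.0, 100), 0.0),   # added after a fault fix: 100% discount once failed
]

def run_pack():
    """Re-execute every test; return the failures (empty list = all pass)."""
    return [(args, expected, apply_discount(*args))
            for args, expected in REGRESSION_PACK
            if apply_discount(*args) != expected]

failures = run_pack()
print("regression pack:", "PASS" if not failures else failures)
# → regression pack: PASS
```

Because expected outputs are stored with the inputs, the whole pack can be re-run unattended (e.g. every night) after any software or environment change.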
Expected results
• Should be predicted in advance as part of the test design process
– ‘Oracle Assumption’ assumes that correct outcome can be predicted.
• Why not just look at what the software does and assess it at the time?
– subconscious desire for the test to pass - less work to do, no incident report to write up
– it looks plausible, so it must be OK - less rigorous than calculating in advance and
comparing
A Program:
  Read A
  IF (A = 8) THEN
    PRINT (“10”)
  ELSE
    PRINT (2*A)

Test inputs and expected outputs:
  input 3 → expected output 6?
  input 8 → expected output 10?
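Transcribed into Python (an assumed translation of the slide’s pseudocode, taking the intended behaviour to be output = 2 × input), the example shows why expected outputs must be predicted from the specification in advance, not read off the running program:

```python
# The buggy program from the slide (assumed Python translation).
def program(a):
    if a == 8:
        return 10   # hard-coded fault: the spec implies 2 * 8 = 16
    return 2 * a

# Expected results predicted in advance from "output = 2 * input".
test_cases = [(3, 6), (8, 16)]

for inp, expected in test_cases:
    actual = program(inp)
    status = "PASS" if actual == expected else f"FAIL (got {actual})"
    print(f"input {inp}: expected {expected} -> {status}")
# → input 3: expected 6 -> PASS
# → input 8: expected 16 -> FAIL (got 10)
```

A tester who merely looks at the output sees “10” for input 8, which looks plausible, and the fault survives; the pre-computed expected result 16 exposes it immediately.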
Prioritise tests
so that,
whenever you stop testing,
you have done the best testing
in the time available.
How to prioritise?
• Possible ranking criteria (all risk based)
– test where a failure would be most severe
– test where failures would be most visible
– test where failures are most likely
– ask the customer to prioritise the requirements
– what is most critical to the customer’s business
– areas changed most often
– areas with most problems in the past
– most complex areas, or technically critical
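The ranking criteria above can be combined into a single risk score, e.g. likelihood × severity. A minimal sketch, where the test names and scores are illustrative assumptions:

```python
# Risk-based prioritisation: score each test by the likelihood and severity
# of the failure it targets, then run in descending risk order.
tests = [
    {"name": "payment rounding",  "likelihood": 2, "severity": 5},
    {"name": "login throttling",  "likelihood": 4, "severity": 4},
    {"name": "report formatting", "likelihood": 3, "severity": 1},
]

for t in tests:
    t["risk"] = t["likelihood"] * t["severity"]

# Whenever testing stops, the riskiest areas have already been covered.
for t in sorted(tests, key=lambda t: t["risk"], reverse=True):
    print(f'{t["risk"]:>2}  {t["name"]}')
# → 16  login throttling
# → 10  payment rounding
# →  3  report formatting
```

Real scores would come from the customer’s business priorities, change history, and past defect data rather than being assigned by the tester alone.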
Key Points
• Testing is necessary because people make errors