Software Engg Week 11 Quality + Testing

This document discusses software quality, including definitions of key terms such as error, fault, failure, and bug. It explains that human errors inevitably occur when writing complex code and that, on average, experienced programmers commit one error for every 10 lines of code. The document also discusses why software fails, for example because of wrong, missing, or impossible requirements, or because of faulty design or code. It provides definitions and explanations of quality, quality assurance, and related terms. Finally, it discusses the importance of software quality and its relationship to software engineering.


Software Quality Assurance

Lecture
Error, Bug, Fault & Failure
 Error
A human action that produces an incorrect result and thereby introduces a fault.
 Fault
A state of the software caused by an error.
 Failure
A deviation of the software from its expected result; a failure is an event.
 Bug
The presence of an error at the time of execution of the software.
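To make the distinction concrete, here is a minimal, hypothetical Python sketch (not from the slides): the programmer's mistake is the error, the defective line it leaves in the code is the fault, and the wrong output observed at run time is the failure.

# Hypothetical example: computing the average of a list of numbers.
def average(values):
    # Error: the programmer mistakenly divides by (len(values) - 1).
    # Fault: this defective line now sits in the code, whether or not it ever runs.
    return sum(values) / (len(values) - 1)

# Failure: the fault is executed and the observed result deviates from the expected one.
print(average([2, 4, 6]))   # expected 4.0, actual 6.0 -> a failure is observed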
Why does software have errors?

 When executing complex tasks, human beings do commit errors
 This cannot be avoided
 Experienced programmers commit, on average, 1 error in every 10 lines of code
 About 50% of these errors are corrected during code compilation
 Further errors are corrected during testing
 About 15% of the errors are still in the system when it is delivered to the customer (a rough illustration of these figures follows)
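As a back-of-the-envelope illustration of the figures above (a sketch only, assuming a hypothetical 10,000-line program and taking the quoted rates at face value):

# Rough illustration of the defect figures quoted above (illustrative numbers only).
lines_of_code = 10_000
errors_introduced = lines_of_code / 10          # ~1 error per 10 lines  -> 1,000 errors
after_compilation = errors_introduced * 0.50    # ~50% caught at compilation -> 500 remain
delivered = errors_introduced * 0.15            # ~15% of the original errors survive to delivery
print(errors_introduced, after_compilation, delivered)   # 1000.0 500.0 150.0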
Software Faults and Failures: Why Does Software Fail?
 Wrong requirement: not what the customer wants
 Missing requirement
 Requirement impossible to implement
 Faulty design
 Faulty code
 Improperly implemented design
Quality

 Traditional definition of quality:
 fitness for purpose,
 a quality product does exactly what the users want it to do.
 Quality also refers to any measurable characteristic such as correctness, maintainability, portability, testability, usability, reliability, efficiency, integrity, reusability and interoperability
Quality Terminologies
 Quality of Design refers to the characteristics that designers specify for an item.
 Quality of Conformance is the degree to which the design specifications are followed during
manufacturing.
 Quality Control is the series of inspections, reviews and tests used throughout the development
cycle to ensure that each work product meets the requirements placed upon it.
 Quality policy refers to the basic aims and objectives of an organization regarding quality as
stipulated by the management.
 Quality assurance consists of the auditing and reporting functions of management.
 Cost of Quality includes all costs incurred in the pursuit of quality or in performing quality-related activities, such as prevention costs, appraisal costs, and internal and external failure costs.
 Quality planning is the process of assessing the requirements of the procedure and of the product, and the context in which these requirements must be met.
 Quality testing is the assessment of the extent to which a test object meets the given requirements.
 Quality assurance plan is the central aid for planning and checking quality assurance activities.
 Quality assurance system is the organizational structure, responsibilities, procedures,
processes and resources for implementing quality management.
Software Product

 Consider a software product that is:
 functionally correct,
 i.e. it performs all functions as specified in the SRS document,
 but has an almost unusable user interface.
 Such a product cannot be considered a quality product.
Software Product

 Another example:
 a product which does everything that users want,
 but whose code is almost incomprehensible and unmaintainable.
 It, too, cannot be considered a quality product.
Quality

Definition of quality
 “Degree of excellence, relative nature or kind of character, class or grade of
thing as determined by this, general excellence"
 (The New Oxford Illustrated Dictionary)
Quality

Software Quality is conformance to:
 explicitly stated functional and performance requirements,
 explicitly documented development standards,
 implicit characteristics that are expected of all professionally developed software.
 The developed product meets its specification.
Problems with Software Quality
 Software specifications are usually incomplete and often inconsistent

 There is tension between:


 customer quality requirements (efficiency, reliability, etc.)
 developer quality requirements (maintainability, reusability, etc.)

 Some quality requirements are hard to specify in an unambiguous way
 some qualities are directly measurable (e.g., errors/KLOC; a small calculation sketch follows below),
 other qualities are only indirectly measurable (e.g., usability).

Quality management is not just about reducing defects!
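As a small illustration of a directly measurable quality (a sketch with made-up numbers): defect density is simply the number of known errors divided by the size of the code in thousands of lines.

# Defect density (errors/KLOC) with illustrative numbers only.
errors_found = 45
lines_of_code = 30_000
defect_density = errors_found / (lines_of_code / 1000)
print(f"{defect_density:.1f} errors/KLOC")   # 1.5 errors/KLOC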


Quality

Quality Management – ensuring that the required level of product quality is achieved
• Defining procedures and standards
• Applying procedures and standards to the product and process
• Checking that procedures are followed
• Collecting and analyzing various quality data
Problems:
• Intangible aspects of software quality can't be standardized
(e.g., elegance and readability)
Importance of Software Quality
 Ariane 5 crash, June 4, 1996
 The maiden flight of the European Ariane 5 launcher crashed about 40 seconds after takeoff
 The loss was about half a billion dollars
 The explosion was the result of a software error
 Uncaught exception due to a floating-point error: a conversion from a 64-bit floating-point value to a 16-bit signed integer was applied to a larger than expected number
 The module was re-used from Ariane 4 without proper testing
 The error was not supposed to happen with Ariane 4
 No exception handler was in place
Importance of Software Quality
 Several historic disasters attributed to software
 First operational launch attempt of the space shuttle, whose
real-time operating software consists of about 500,000 lines of
code, failed
 Synchronization problem among its flight-control
computers
 9 hour breakdown of AT&T's long-distance telephone
network
 Caused by an untested code patch
Relation of Software Engineering with SQA

 Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software.
 The characteristics of software engineering, especially its systematic, disciplined and quantitative approach, make software engineering a good environment for achieving SQA objectives.
 It is commonly accepted that cooperation between software engineers and the SQA team is the way to achieve efficient and economic development and maintenance activities that, at the same time, assure the quality of the products of these activities.
Quality Attributes

[Figure: software quality at the centre, surrounded by its attributes – reliability, efficiency, usability, maintainability, portability (© Oscar Nierstrasz, ESE 11.16)]
Quality Attributes
 product: delivered to the customer
 process: produces the software product
 project: the actual act of executing the activities to meet some specific user need is a software project
 resources: (both the product and the process require resources)
 Underlying assumption: a quality process leads to a quality product (cf. the metaphor of manufacturing lines)
Quality Attributes
Correctness
 A system is correct if it behaves according to its specification
 An absolute property (i.e., a system cannot be “almost correct”)
 ... in theory and practice undecidable
Reliability
 The user may rely on the system behaving properly
 Reliability is the probability that the system will operate as expected over a specified time interval
 A relative property (e.g., a system has a mean time between failures of 3 weeks)
Robustness
 A system is robust if it behaves reasonably even in circumstances that were not specified
 A vague property (once you specify the abnormal circumstances they become part of the requirements)
Efficiency (Performance)
 Use of resources such as computing time, memory
 Affects user-friendliness and scalability
Quality Attributes
Usability (User Friendliness, Human Factors)
 The degree to which the human users find the system (process) both “easy to use” and useful
 Depends a lot on the target audience (novices vs. experts)
External product attributes
Maintainability
 How easy it is to change a system after its initial release
Repairability, Evolvability, Portability
Internal product attributes
Verifiability
 How easy is it to verify whether the desired attributes are present?
 internally: e.g., verifying requirements, code inspections
Understandability
 How easy is it to understand the system?
 internally: contributes to maintainability
External process attributes
Productivity
 Amount of product produced by a process for a given number of resources
 productivity among individuals varies a lot
Timeliness
 Ability to deliver the product on time
 important for marketing ("short time to market")
 [Figure: function/system capability vs. user needs over time (t0 … t4), marking initial delivery and later redesign]
Visibility (Transparency)
 Current process steps and project status are accessible
 important for management
 also helps deal with staff turn-over
Quality Control Assumption
Project concern = deliver on time and within budget
External (and internal) process attributes determine product attributes
Assumptions:
 Internal quality leads to external quality
 Process quality leads to product quality
 Quality is controlled during the project and obtained after the project
Otherwise, quality is mere coincidence!

Quality Plan
A quality plan should:
 set out desired product qualities and how these are assessed
 define the most significant quality attributes
 define the quality assessment process
 i.e., the controls used to ensure quality
 set out which organizational standards should be applied
 may define new standards, e.g., if new tools or methods are used
Software Quality Management System
 Quality management system (or quality system):
the principal methodology used by an organization to ensure that its products have the desired quality.
Activities
 Auditing of projects
 Development of:
 standards, procedures, and guidelines, etc.
 Production of reports for the top management
 summarizing the effectiveness of the quality system in the
organization.
 Review of the quality system itself.
Software Quality Assurance
 Basic premise of modern quality assurance:
 if an organization's processes are good and are followed rigorously,
 the products are bound to be of good quality.
 All modern quality paradigms include:
 guidance for recognizing, defining, analyzing, and improving the production
process.
 SOFTWARE QUALITY ASSURANCE (SQA) is a set of activities for ensuring quality
in software engineering processes (that ultimately result in quality of software
products).
Software Testing
TESTING - INTRODUCTION

 Checks and validates functional and non-functional requirements.
 Verifies whether a given software product matches its SRS.
 Validates the quality of the software with minimum cost and effort.
 It is the process used to identify the correctness and completeness of the developed computer software.
 It is the process of executing a program/application under positive and negative conditions, by manual or automated means.
Objectives of Testing
 Objective of testing: discover faults
 A test is successful only when a fault is discovered
 Fault identification is the process of determining what fault caused the
failure
 Fault correction is the process of making changes to the system so that
the faults are removed
Test Plan
 It is a systematic approach to testing a system, i.e. the software.
 The plan typically contains a detailed understanding of what the eventual testing workflow will be.
 A test plan explains:
 who does the testing
 why the tests are performed
 how tests are conducted
 when the tests are scheduled
 A test plan contains:
 what the test objectives are
 how the test will be run
 what criteria will be used to determine when the testing is complete

Test Case
 It is a specific procedure for testing a particular requirement.
 It is usually made for a specific use case.
 It will include:
 identification of the specific requirement
 test case success/failure criteria
 specific steps to execute the test
 test input data
 expected and actual result
Test Case Format
Test Case Example: Box Testing

TC id | Title                      | Input      | Expected output | Actual output | Status
TC1   | addition of two numbers    | X=10, y=11 | 21              | 21            | Pass
TC2   | subtraction of two numbers | a=10, b=7  | 3               | 17            | Fail
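As an illustration only (not part of the slides), the two test cases above could be automated with pytest; the add and subtract functions below stand in for a hypothetical calculator module under test.

# Hypothetical automated versions of TC1 and TC2 using pytest.
def add(x, y):
    return x + y

def subtract(a, b):
    return a + b   # deliberate fault, so TC2 fails as in the table (expected 3, actual 17)

def test_tc1_addition_of_two_numbers():
    assert add(10, 11) == 21          # expected output 21 -> Pass

def test_tc2_subtraction_of_two_numbers():
    assert subtract(10, 7) == 3       # actual output is 17, so this test fails -> Fail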
Test case examples

Test case 1
 Scenario/Description: As a user I should be able to add a new activity
 Steps: 1) Open the app and navigate to the add new category button. 2) Click the add new activity button.
 Expected result: Showing the activity page
 Actual result: Showing the activity page
 Status: Pass

Test case 2
 Scenario/Description: As a user I should be able to specify a date for my newly added activity
 Steps: 1) Open the add activity page by clicking the add new activity button. 2) Browse to the date button and enter a date using the pop-up date picker.
 Expected result: Date picker should pop up to enter the date
 Actual result: Date picker pops up to enter the date
 Status: Pass

Test case 3
 Scenario/Description: As a user I should be able to see the activities that I have entered
 Steps: 1) Click on the view activities button to see the previous activities. 2) Browse to see the activities by category.
 Expected result: Activities page should show activities by category
 Actual result: Activities page is not showing activities by category
 Status: Fail
Testing Methodologies
 White box testing
 Black box testing
 Black Box vs White Box

"This is how a driver or user looks at me: I am an opaque or black box to him."
White box testing
 White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box Testing, Transparent Box Testing, Code-Based Testing or Structural Testing) is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester.

 White box testing is mainly used for detecting errors in the program code.

 It can be applied at all levels of system development, especially unit and integration testing.
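For illustration (a sketch, not from the slides): in white box testing the tester reads the code and designs tests to exercise its internal branches. For the hypothetical grade function below, knowing the if/else structure tells us at least three tests are needed to cover every branch, including the boundary at 50.

import pytest

# Code under test: the tester can see both branches and the boundary at 50.
def grade(score):
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    if score >= 50:
        return "pass"
    return "fail"

# White-box tests chosen to cover every branch and the boundary values.
def test_invalid_score_branch():
    with pytest.raises(ValueError):
        grade(101)

def test_pass_branch_boundary():
    assert grade(50) == "pass"    # boundary of the >= 50 branch

def test_fail_branch_boundary():
    assert grade(49) == "fail"    # boundary of the implicit else branch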
Black box testing
 In Black Box Testing we focus only on the inputs and outputs of the software system, without bothering about internal knowledge of the software program.
Contd…
BLACK BOX TESTING, also known as Behavioral Testing,
 is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester.
 Tests are based on the requirements written in the SRS.
 Black-box testing is a method of software testing that examines the functionality of an application (i.e. what the software does).
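A minimal sketch (assuming only a specification, not the code): black-box tests for the same hypothetical grade function would be derived from a stated requirement such as "scores of 50 or more pass, lower scores fail, scores outside 0–100 are rejected", using equivalence classes and boundary values rather than knowledge of the implementation.

import pytest

def grade(score):
    # Implementation included only so this sketch runs on its own;
    # a black-box tester would not look inside it.
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# One representative input per equivalence class, plus boundary values from the spec.
@pytest.mark.parametrize("score, expected", [
    (0, "fail"),      # lower boundary of the valid range
    (49, "fail"),     # just below the pass mark
    (50, "pass"),     # pass mark boundary
    (100, "pass"),    # upper boundary of the valid range
])
def test_grade_from_specification(score, expected):
    assert grade(score) == expected

def test_out_of_range_score_is_rejected():
    with pytest.raises(ValueError):
        grade(-1)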
Levels of Testing
Unit Testing
 Tests each module individually
 Code walkthrough
 Code inspection
Integration Testing
 Once all the modules have been unit tested, integration testing is performed
System Testing
 Verifies that all system elements work properly and that overall system function and performance has been achieved.
Acceptance Testing
 It is performed by the customer to determine whether to accept or reject the delivery of the system.
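To illustrate the first two levels (a hypothetical sketch; the module and function names are invented): unit tests exercise each module in isolation, while an integration test exercises them working together.

# Two tiny "modules" to be tested at different levels (hypothetical example).
def parse_amount(text):           # module 1: input parsing
    return round(float(text), 2)

def apply_discount(amount, pct):  # module 2: business rule
    return round(amount * (1 - pct / 100), 2)

# Unit tests: each module individually.
def test_parse_amount_unit():
    assert parse_amount("19.999") == 20.0

def test_apply_discount_unit():
    assert apply_discount(100.0, 10) == 90.0

# Integration test: the modules combined, as the system would use them.
def test_parse_then_discount_integration():
    assert apply_discount(parse_amount("200.00"), 25) == 150.0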
System Testing and its types
• In system testing the behavior of the whole system/product is tested, as defined by the scope of the development project or product.
• System testing is most often the final test to verify that the system to be delivered meets the specification and its purpose.
• System testing is carried out by specialist testers or independent testers.

 Types of System Testing:
• Function testing (GUI)
• Performance testing
• Load testing
• Usability testing
• Volume testing
• Stress testing
• Security testing
• Scalability testing
• Compatibility testing
• Smoke testing
• Regression testing
• Sanity testing
• Installation testing
• Ad hoc testing
 Graphical User Interface Testing: To generate a set of test cases, test designers
attempt to cover all the functionality of the system and fully exercise the GUI itself.
 Performance Testing: It is testing performed to determine how fast some aspect of a system performs under a particular workload.
 Load Testing: It is a type of software testing which is conducted to understand the behavior of the application under a specific expected load. Load testing is performed to determine a system's behavior under both normal and peak conditions.
 Usability Testing: In usability testing the testers basically test the ease with which the user interfaces can be used. It tests whether the application or the product built is user-friendly or not.
 Volume Testing: Volume testing refers to testing a software application or the
product with a certain amount of data. E.g., if we want to volume test our
application with a specific database size, we need to expand our database to that
size and then test the application’s performance on it.
 Stress Testing: It involves testing beyond normal operational capacity, often to
a breaking point, in order to observe the results
 Security Testing: Security testing is basically a type of software testing that is done to check whether the application or the product is secured or not. It checks whether the application is vulnerable to attacks, and whether anyone can hack the system or log in to the application without authorization.
 Scalability Testing: Testing the ability of a system, a network, or a process to continue
to function well when it is changed in size or volume in order to meet a growing need.
 Compatibility Testing: Compatibility testing is a type of software testing used to
ensure compatibility of the system/application/website with various other objects
such as other web browsers, hardware platforms, operating systems etc
 Smoke Testing: Smoke Testing is a kind of software testing performed after a software build to ascertain that the critical functionalities of the program are working fine. It is executed "before" any detailed functional or regression tests are executed on the software build. The purpose is to reject a badly broken application so that the QA team does not waste time installing and testing the software application (a small sketch follows below).
 Acceptance testing: Acceptance testing is basically done by the user or customer
although other stakeholders may be involved as well. It enables customers and users
to determine if the built system meets their needs and expectations.
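As a small illustration of smoke testing (a sketch only; the application, its URL and its endpoint names are invented assumptions): a smoke suite checks only that the most critical functions of a fresh build work at all, and is run before any detailed functional or regression testing.

import pytest
import requests   # third-party HTTP client, assumed to be available

BASE_URL = "http://localhost:8000"   # hypothetical freshly deployed build

@pytest.mark.smoke
def test_application_is_up():
    # Critical check 1: the build starts and answers at all.
    assert requests.get(f"{BASE_URL}/health", timeout=5).status_code == 200

@pytest.mark.smoke
def test_login_page_loads():
    # Critical check 2: the most important entry point is reachable.
    assert requests.get(f"{BASE_URL}/login", timeout=5).status_code == 200

# Run only the smoke suite before deeper functional/regression tests:
#   pytest -m smoke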
USE CASE BASED Testing
 Test cases that exercise a system's functionalities from start to finish by testing each of its individual transactions.
How to make a test case from a use case:
• Use cases do not specify input data, so the tester must select it.
• To create test cases, start with normal data for the most often used transactions.
• Then move to boundary values and invalid data.
• Make sure you have at least one test case for every extension in the use case (a small sketch follows).
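A minimal sketch of this advice (hypothetical; the "withdraw cash" use case, its limits and the withdraw function are invented): start with normal data, then boundary values and invalid data, and cover the use case's extensions such as insufficient funds.

import pytest

def withdraw(balance, amount):
    # Hypothetical main scenario plus two extensions of a "withdraw cash" use case.
    if amount <= 0 or amount > 500:
        raise ValueError("amount must be between 1 and 500")   # extension: invalid amount
    if amount > balance:
        raise ValueError("insufficient funds")                  # extension: not enough money
    return balance - amount

@pytest.mark.parametrize("balance, amount, expected", [
    (1000, 100, 900),   # normal data for the most common transaction
    (1000, 500, 500),   # boundary value: maximum allowed amount
    (100, 100, 0),      # boundary value: withdraw the whole balance
])
def test_withdraw_main_scenario(balance, amount, expected):
    assert withdraw(balance, amount) == expected

@pytest.mark.parametrize("balance, amount", [
    (1000, 0),     # invalid data: zero amount
    (1000, 501),   # invalid data: above the limit
    (50, 100),     # extension: insufficient funds
])
def test_withdraw_extensions(balance, amount):
    with pytest.raises(ValueError):
        withdraw(balance, amount)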
A test case typically records the following fields (a data-structure sketch follows the list):
 Identifier
 Related requirement(s)
 Short description
 Pre-condition(s)
 Input data
 Detailed steps
 Expected result(s)
 Post-condition(s)
 Actual result(s)
 Test case result (e.g., Pass)
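As an illustrative sketch only (the field names follow the list above; the class itself is not from the slides), such a test case record could be captured as a simple data structure so results can be collected and reported automatically:

from dataclasses import dataclass

@dataclass
class TestCaseRecord:
    identifier: str
    related_requirements: list[str]
    short_description: str
    preconditions: str
    input_data: str
    detailed_steps: list[str]
    expected_result: str
    postconditions: str = ""
    actual_result: str = ""
    result: str = "Not run"        # becomes "Pass" or "Fail" after execution

tc1 = TestCaseRecord(
    identifier="TC1",
    related_requirements=["REQ-ADD-001"],          # hypothetical requirement id
    short_description="Addition of two numbers",
    preconditions="Calculator module is available",
    input_data="x=10, y=11",
    detailed_steps=["Call add(10, 11)", "Compare the returned value with 21"],
    expected_result="21",
    actual_result="21",
    result="Pass",
)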
