Unit 1 covers the fundamentals of software testing, including definitions of key concepts such as bugs, faults, and failures, as well as the objectives and roles of testing. It emphasizes the importance of verification and validation processes, detailing entry and exit criteria for software testing phases, and outlines various testing methods and skills required for testers. The document also discusses quality assurance and the significance of standards in ensuring software quality.

Unit 1: Basics of Software Testing and Testing Methods

CO1: Apply various software testing methods

Software testing:
 Software testing is defined as performing verification and validation of the software product for its correctness and accuracy of working.
 Software testing is the process of executing a program with the intent of finding errors.
 A successful test is one that uncovers an as-yet-undiscovered error.
 Testing can show the presence of bugs but never their absence.
 Testing is a support function that helps developers look good by finding their mistakes before anyone else does.

Role of testing / Objectives of testing:
1. Finding defects which may get created by the programmer while developing the software.
2. Gaining confidence in, and providing information about, the level of quality.
3. To prevent defects.
4. To make sure that the end result meets the business and user requirements.
5. To ensure that it satisfies the BRS (Business Requirement Specification) and the SRS (System Requirement Specification).
6. To gain the confidence of the customers by providing them a quality product.

What is software testing?
• Finding defects
• Trying to break the system
• Finding and reporting defects
• Demonstrating correct functionality
• Demonstrating incorrect functionality
• Demonstrating robustness, reliability, security, maintainability, …
• Measuring performance, reliability, …
• Evaluating and measuring quality
• Proving the software correct
• Executing pre-defined test cases
• Automatic error detection

Skills Required for a Tester
• Communication skills
• Domain knowledge
• Desire to learn
• Technical skills
• Analytical skills
• Planning
• Integrity
• Curiosity
• Thinking from the user's perspective
• Being a good judge of your product

Bug, Fault & Failure
• A person makes an error.
• That error creates a fault in the software.
• That fault can cause a failure in operation.
• Error: an error is a human action that produces an incorrect result, which results in a fault.
• Bug: the presence of an error at the time of execution of the software.
• Fault: the state of the software caused by an error.
• Failure: the deviation of the software from its expected result. It is an event.
• Defect: a defect is an error or a bug in the application. A programmer can make mistakes (errors) while designing and building the software; these mistakes mean that there are flaws in the software, and these flaws are called defects.

Why do defects occur in software?
Software is written by human beings
 who know something, but not everything;
 who have skills, but aren't perfect;
 who don't usually use rigorous methods;
 who do make mistakes (errors);
under increasing pressure to deliver to strict deadlines
 with no time to check, and assumptions that may be wrong;
 on systems that may be incomplete.
Software is complex, abstract and invisible
 hard to understand;
 hard to see whether it is complete or working correctly;
 no one person can fully understand large systems;
 numerous external interfaces and dependencies.

Sources of defects
Education
 The developer does not understand well enough what he or she is doing.
 Lack of proper education leads to errors in specification, design, coding, and testing.
Communication

Course Coordinator: Mrs. Kshirsagar S.R., M.M. Polytechnic, Thergaon

 Developers do not know enough.
 Information does not reach all stakeholders.
 Information is lost.

Oversight
 Omitting to do necessary things.

Transcription
 The developer knows what to do but simply makes a mistake.

Process
 The process is not applicable to the actual situation.
 The process places restrictions that cause errors.

Test Plan
A test plan is a systematic approach to testing a system, i.e. software. The plan typically contains a detailed understanding of what the eventual testing workflow will be.

Test Case
A test case is a specific procedure for testing a particular requirement. It will include:
• Identification of the specific requirement tested
• Test case success/failure criteria
• Specific steps to execute the test
• Test data

Entry and Exit Criteria for software testing

A process model is a way to represent any given phase of software development so as to prevent and minimize the delay between defect injection and defect detection/correction. Entry criteria specify when a phase can be started, and include the inputs for the phase, the tasks or steps that need to be carried out in that phase along with measurements that characterize those tasks, and verification, which specifies methods of checking that the tasks have been carried out correctly. Clear entry criteria make sure that a given phase does not start prematurely, and the verification for each phase helps to prevent defects, or at least minimize them.

Exit criteria stipulate the conditions under which one can consider a phase as done, and include the outputs for the phase. Exit criteria may include:
1. All test plans have been run.
2. All requirements coverage has been achieved.
3. All severe bugs are resolved.

ENTRY CRITERIA
Entry criteria for QA testing are defined as "specific conditions or on-going activities that must be present before a process can begin". The Systems Development Life Cycle also specifies which entry criteria are required at each phase. Additionally, it is important to define the time interval, or required amount of lead time, for which an entry criteria item is available to the process. Input can be divided into two categories: the first is what we receive from development; the second is what we produce that acts as input to later test process steps.

The type of required input from development includes:
1. Technical Requirements/Statement of Need
2. Design Document
3. Change Control
4. Turnover Document

The type of required input from test includes:
1. Evaluation of available software test tools
2. Test Strategy
3. Test Plan
4. Test Incident Reports

By referencing the Entry/Exit Criteria matrix, we get clarity on the deliverables expected from each phase. The matrix should contain a "date required" and should be modified to meet the specific goals and requirements of each test effort based on its size and complexity.

EXIT CRITERIA
Exit criteria are often viewed as a single document concluding the end of a life cycle phase. Exit criteria are defined as "the specific conditions or on-going activities that must be present before a life cycle phase can be considered complete; the life cycle specifies which exit criteria are required at each phase". This definition identifies the intermediate deliverables and allows us to track them as independent events.

The type of output from test includes:


1. Test Strategy
2. Test Plan
3. Test Scripts/Test Case Specifications
4. Test Logs
5. Test Incident Report Log
6. Test Summary Report/Findings Report

By identifying the specific exit criteria, we are able to identify and plan how these steps and processes fit into the life cycle. All of the exit criteria listed above, except the Test Summary/Findings Report, act as entry criteria to a later process.

Verification & Validation
• Verification
 Are you building the product right?
 Software must conform to its specification.
• Validation
 Are you building the right product?
 Software should do what the user really requires.

What is Verification?
Definition: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
• Verification is a static practice of verifying documents, design, code and program. It includes all the activities associated with producing high-quality software: inspection, design analysis and specification analysis. It is a relatively objective process.
• Verification will help to determine whether the software is of high quality, but it will not ensure that the system is useful. Verification is concerned with whether the system is well-engineered and error-free.
• Methods of Verification: Static Testing
 Walkthrough
 Inspection
 Review

What is Validation?
Definition: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements.
• Validation is the process of evaluating the final product to check whether the software meets the customer expectations and requirements. It is a dynamic mechanism of validating and testing the actual product.
• Methods of Validation: Dynamic Testing
 Testing
 End Users

Difference between Verification and Validation:
1. Verification is a static practice of verifying documents, design, code and program; validation is a dynamic mechanism of validating and testing the actual product.
2. Verification does not involve executing the code; validation always involves executing the code.
3. Verification is human-based checking of documents and files; validation is computer-based execution of the program.
4. Verification uses methods like inspections, reviews, walkthroughs and desk-checking; validation uses methods like black box (functional) testing, gray box testing and white box (structural) testing.
5. Verification checks whether the software conforms to specifications; validation checks whether the software meets the customer expectations and requirements.
6. Verification can catch errors that validation cannot catch, and is a low-level exercise; validation can catch errors that verification cannot catch, and is a high-level exercise.
7. The target of verification is the requirements specification, the application and software architecture, the high-level and complete design, and the database design; the target of validation is the actual product: a unit, a module, a set of integrated modules, or the final product.
8. Verification is done by the QA team to ensure that the software is as per the specifications in the SRS document; validation is carried out with the involvement of the testing team.

9. Verification generally comes first and is done before validation; validation generally follows verification.
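The contrast between the two practices can be made concrete with a small sketch (hypothetical code, for illustration only): the verification step examines the program text without running it, while the validation step executes the program and checks its actual behaviour.

```python
# Hypothetical artifact under test: a snippet of program text.
source = '''
def discount(price):
    # spec: apply a 10% discount; the result must never be negative
    return price * 0.9
'''

# Verification (static): inspect the document/code WITHOUT executing it,
# e.g. a review-style check that the spec comment is present.
verified = "spec:" in source

# Validation (dynamic): execute the code and test the actual product
# against the customer-visible requirement.
namespace = {}
exec(source, namespace)
discount = namespace["discount"]
validated = discount(100) == 90.0 and discount(0) >= 0

print("verified:", verified, "| validated:", validated)
```

Note how only the validation step required running the code; the verification step could have been done on paper.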

The verification phases on one arm and the validation phases on the other make up the V-model. It is a sequential path of execution of processes: each phase must be completed before the next phase begins. Under the V-model, the testing phase corresponding to each development phase is planned in parallel, so there are verification phases on one side of the V and validation phases on the other.
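The pairing of development phases with test levels described in this unit can be restated as a simple mapping (the phase names follow the text of this unit):

```python
# Left arm of the V (development phase) -> right arm (test level whose
# planning starts in that phase), as described in this unit.
v_model = {
    "Overall business requirement": "Acceptance testing",
    "Software requirement / system design": "System testing",
    "High level design": "Integration testing",
    "Low level design": "Component testing",
    "Coding": "Unit testing",
}

for phase, test_level in v_model.items():
    print(f"{phase:38} <-> {test_level}")
```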
Verification Phase:
1. Overall Business Requirement: In this first phase of the development cycle, the product requirements are understood from the customer's perspective. This phase involves detailed communication with the customer to understand his expectations and exact requirements. Acceptance test design planning is done at this stage, as the business requirements can be used as an input for acceptance testing.
2. Software Requirement: Once the product requirements are clearly known, the system can be designed. The system design comprises understanding & detailing the complete hardware, software & communication set-up for the product under development. The system test plan is designed based on the system design; doing this at an earlier stage leaves more time for actual test execution later.
3. High Level Design: High level specifications are understood & designed in this phase. Usually more than one technical approach is proposed, & the final decision is taken based on the technical & financial feasibility. The system design is broken down further into modules taking up different functionality.
4. Low Level Design: In this phase the detailed internal design for all the system modules is specified. It is important that the design is compatible with the other modules in the system & with other external systems. Component tests can be designed at this stage based on the internal module design.
5. Coding: The actual coding of the system modules designed in the design phase is taken up in the coding phase. The most suitable programming language is decided based on the requirements. Coding is done following the coding guidelines & standards.
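The unit tests prepared during the coding phase can be as simple as running one function with known inputs and checking the outputs. A minimal sketch (the factorial function mirrors the C example used later in this unit):

```python
# A small unit written in the coding phase.
def factorial(n):
    """Return n! for n >= 0."""
    fact = 1
    for i in range(1, n + 1):
        fact = fact * i
    return fact

# Unit tests designed alongside the code: each case pairs an input with
# its expected output; any mismatch raises AssertionError.
def test_factorial():
    assert factorial(0) == 1      # boundary case: empty product
    assert factorial(1) == 1
    assert factorial(5) == 120

test_factorial()
print("all unit tests passed")
```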


Validation Phase:
1. Unit Testing: The unit tests designed in the coding phase are executed on the code during this validation phase. This helps to eliminate bugs at an early stage.
2. Component Testing: This is associated with module design and helps to eliminate defects in individual modules.
3. Integration Testing: This is associated with the high level design phase & is performed to test the coexistence & communication of the internal modules within the system.
4. System Testing: This is associated with the system design phase. It checks the entire system functionality & the communication of the system under development with external systems. Most of the software & hardware compatibility issues can be uncovered using system test execution.
5. Acceptance Testing: This is associated with the overall business requirement phase & involves testing the product in the user environment. These tests uncover compatibility issues with the other systems available in the user environment. They also uncover non-functional issues such as load & performance defects in the actual user environment.

Quality Assurance:
i. It consists of process-oriented activities.
ii. It is a part of quality management focused on providing confidence that quality requirements will be fulfilled.
iii. It covers all the planned and systematic activities implemented within the quality system that can be demonstrated to provide confidence that a product or service will fulfill requirements for quality.
iv. Quality Assurance is fundamentally focused on planning and documenting the processes that assure quality, including things such as quality plans and inspection and test plans.
v. Quality Assurance is a system for evaluating the performance and service quality of a product against a system, standard or specified requirement for customers.
vi. Quality Assurance is a complete system to assure the quality of products or services. It is not only a process, but a complete system that also includes control. It is a way of management.

● Standards: Standards are the criteria against which the software product is compared.
● Documentation Standards: Specify the form and contents for planning, analysis and product documentation, and ensure consistency throughout a project.
● Design Standards: Specify the form and contents of the design product. They provide rules and methods for translating the software requirements into the software design.
● Code Standards: Specify the language in which code is to be written and define any restrictions on the use of language features. They define legal language structures, style conventions, and rules for data structures and interfaces.
● Procedure: The expected steps to be followed in carrying out a process.

Quality Control:
i. It consists of product-oriented activities.
ii. It is a part of quality management focused on fulfilling quality requirements.
iii. It covers the operational techniques and activities used to fulfill requirements for quality.
iv. Quality Control, on the other hand, is the physical verification that the product conforms to the planned arrangements, by inspection, measurement etc.
v. Quality Control is the process involved within the system to ensure job management, competence and performance during the manufacturing of the product or service, so that it meets the quality plan as designed.
vi. Quality Control just measures and determines the quality level of products or services.

Methods of testing:
1. Static Testing:
 Static testing is the testing of software work products manually, or with a set of tools, but without executing them.
 It starts early in the life cycle, and so it is done during the verification process.
 It does not need a computer, as the testing of the program is done without executing the program. Examples: reviewing, walkthrough, inspection, etc.
 Static testing consists of the following methods:
1) Walkthrough
2) Inspection
3) Technical Review

Advantages of Static Testing
 Since static testing can start early in the life cycle, early feedback on quality issues can be established.
 By detecting defects at an early stage, rework costs are most often relatively low.
 Since rework effort is substantially reduced, development productivity figures are likely to increase.
 Evaluation by a team has the additional advantage that there is an exchange of information between the participants.
 Static tests contribute to an increased awareness of quality issues.

Disadvantages of Static Testing
 It demands a great amount of time when done manually.
 Automated tools work with only a few programming languages.
 Automated tools may produce false positives and false negatives.
 Automated tools only scan the code.
 Automated tools cannot pinpoint weak points that may create trouble at run-time.

2. Dynamic Testing
 Dynamic testing (or dynamic analysis) is a term used in software engineering to describe the testing of the dynamic behavior of code.
 That is, dynamic analysis refers to the examination of the physical response of the system to variables that are not constant and change with time.
 In dynamic testing the software must actually be compiled and run.


 It involves working with the software, giving input values and checking whether the output is as expected, by executing specific test cases, which can be done manually or with an automated process.
 Based on its process and function in software development, dynamic testing can be divided into unit testing, integration testing, system testing, acceptance testing and finally regression testing.
 Unit testing focuses on the correctness of the basic components of the software. Unit testing falls into the category of white-box testing. In the entire quality inspection system, unit testing needs to be completed by the product group, after which the software is handed over to the testing department.
 Integration testing is used to detect whether the interfaces between the various units are properly connected during the integration process of the entire software.
 Testing a software system that has completed integration is called a system test; the purpose of the test is to verify that the correctness and performance of the software system meet the requirements specified in its specifications. Testers should follow the established test plan. When testing the robustness and ease of use of the software, its input, output, and other dynamic operational behaviour should be compared to the software specifications. If the software specification is incomplete, the system test depends more on the tester's work experience and judgment, and such a test is not sufficient. The system test is black-box testing.
 Acceptance testing is the final test before the software is put into use. It is the buyer's trial process of the software. In the actual work of a company, it is usually implemented by asking the customer to try the software or by releasing a beta version of the software. The acceptance test is black-box testing.
 The purpose of regression testing is to verify and modify the acceptance test results in the software maintenance phase. In practical applications, the handling of customer complaints is an embodiment of regression testing.

Advantages of Dynamic Testing
 Dynamic testing can identify weak areas in the runtime environment.
 Dynamic testing supports application analysis even if the tester does not have the actual code.
 Dynamic testing can identify some vulnerabilities that are difficult to find by static testing.
 Dynamic testing can also verify the correctness of static testing results.
 Dynamic testing can be applied to any application.

Disadvantages of Dynamic Testing
 Automated tools may give a false sense of security, for example that everything has been checked.
 Automated tools can generate false positives and false negatives.
 Finding trained dynamic test professionals is not easy.
 With dynamic testing it is hard to track down the vulnerabilities in the code, and it takes longer to fix the problems, so fixing bugs becomes expensive.

White box testing

1) Walkthrough
 In a walkthrough, the author guides the review team through the document to reach a common understanding and to collect feedback.
 A walkthrough is not a formal process.
 In a walkthrough, the review team is not required to do a detailed study before the meeting, while the author is already well prepared.
 A walkthrough is useful for higher-level documents, i.e. requirement specifications and architectural documents.

Goals of Walkthrough
 Make the document available to stakeholders both outside and inside the software discipline, to collect information about the topic under documentation.
 Describe and evaluate the content of the document.
 Study and discuss the validity of possible alternatives and proposed solutions.

Participants of a Structured Walkthrough
 Author - the author of the document under review.
 Presenter - the presenter usually develops the agenda for the walkthrough and presents the output being reviewed.
 Moderator - the moderator facilitates the walkthrough session, ensures the walkthrough agenda is followed, and encourages all the reviewers to participate.


 Reviewers - the reviewers evaluate the document under test to determine whether it is technically accurate.
 Scribe - the scribe is the recorder of the structured walkthrough outcomes, who records the issues identified and any other technical comments, suggestions, and unresolved questions.

Benefits of a Structured Walkthrough
 It saves time and money, as defects are found and rectified very early in the lifecycle.
 It provides value-added comments from reviewers with different technical backgrounds and experience.
 It notifies the project management team about the progress of the development process.
 It creates awareness about different development or maintenance methodologies, which can provide professional growth to the participants.

2) Inspection
 A trained moderator guides the inspection. It is the most formal type of review.
 The reviewers prepare for the meeting and check the documents before it.
 In an inspection, preparation is done separately: the product is examined and defects are found, and these defects are documented in an issue log.
 In an inspection, the moderator performs a formal follow-up by applying exit criteria.

Goals of Inspection
 Assist the author in improving the quality of the document under inspection.
 Efficiently and rapidly remove defects.
 Generate documents with a higher level of quality, which helps to improve the product quality.
 Learn from the defects found previously and prohibit the occurrence of similar defects.
 Generate common understanding by interchanging information.

Difference between Inspection and Walkthrough
1. An inspection is formal; a walkthrough is informal.
2. An inspection is initiated by the project team; a walkthrough is initiated by the author.
3. An inspection is a planned meeting with fixed roles assigned to all the members involved; a walkthrough is unplanned.
4. In an inspection, a reader reads the product code and everyone inspects it and comes up with defects; in a walkthrough, the author reads the product code and his team mates come up with defects or suggestions.
5. In an inspection, a recorder records the defects; in a walkthrough, the author makes a note of the defects and suggestions offered by team mates.
6. In an inspection, the moderator has a role in making sure that the discussions proceed on productive lines; a walkthrough is informal, so there is no moderator.

3) Technical Review
 A technical review is a discussion meeting that focuses on the technical content of the document. It is a less formal review.
 It is guided by a trained moderator or a technical expert.

Goals of Technical Review
 Evaluate the value of the technical concepts in the project environment.
 Build consistency in the use and representation of the technical concepts.
 In early stages, ensure that the technical concepts are used correctly.
 Notify the participants regarding the technical content of the document.

Code Functional Testing:
i. Code functional testing involves tracking a piece of data completely through the software.
ii. At the unit test level this would just be through an individual module or function.
iii. The same tracking could be done through several integrated modules or even through the entire software product, although it would be more time-consuming to do so.
iv. During data flow, a check is made that the variables are properly declared and that the loops are declared and used properly.

For example:

#include<stdio.h>
int main(void)
{
    int i, fact = 1, n;
    printf("Enter the number: ");
    scanf("%d", &n);
    for(i = 1; i <= n; i++)
        fact = fact * i;
    printf("\nFactorial of number is %d", fact);
    return 0;
}

Code Coverage Testing:
i. The logical approach is to divide the code, just as you did in black-box testing, into its data and its states (or program flow).
ii. By looking at the software from the same perspective, you can more easily map the white-box information you gain to the black-box cases you have already written.


iii. Consider the data first. Data includes all the variables, constants, arrays, data structures, keyboard and mouse input, files and screen input and output, and I/O to other devices such as modems, networks, and so on.

For example:

#include<stdio.h>
int main(void)
{
    int i, fact = 1, n;
    printf("Enter the number: ");
    scanf("%d", &n);
    for(i = 1; i <= n; i++)
        fact = fact * i;
    printf("\nFactorial of number is %d", fact);
    return 0;
}

The declaration of data is complete with the assignment statement and the variable declaration statements. All the variables declared are properly utilized.

Program Statements and Line Coverage (Code Complexity Testing)
i. The most straightforward form of code coverage is called statement coverage or line coverage.
ii. If you're monitoring statement coverage while you test your software, your goal is to make sure that you execute every statement in the program at least once.
iii. With line coverage the tester tests the code line by line, checking for the relevant output.

For example:

#include<stdio.h>
int main(void)
{
    int i, fact = 1, n;
    printf("Enter the number: ");
    scanf("%d", &n);
    for(i = 1; i <= n; i++)
        fact = fact * i;
    printf("\nFactorial of number is %d", fact);
    return 0;
}

Branch Coverage (Code Complexity Testing)
i. Attempting to cover all the paths in the software is called path testing.
ii. The simplest form of path testing is called branch coverage testing.
iii. Check all the possibilities of the boundary and sub-boundary conditions and the branching on those values.
iv. The test coverage criterion requires enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once.
v. Every branch (decision) is taken each way, true and false.
vi. It helps in validating all the branches in the code, making sure that no branch leads to abnormal behavior of the application.

Condition Coverage (Code Complexity Testing)
i. Just when you thought you had it all figured out, there's yet another complication to path testing.
ii. Condition coverage testing takes the extra conditions on the branch statements into account.

Black Box Testing
 Black box testing, also known as behavioral testing, is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. These tests can be functional or non-functional, though usually functional.

This method attempts to find errors in the following categories:
• Incorrect or missing functions
• Interface errors
• Errors in data structures or external database access
• Behavior or performance errors
• Initialization and termination errors

EXAMPLE: A tester, without knowledge of the internal structures of a website, tests the web pages by using a browser, providing inputs (clicks, keystrokes) and verifying the outputs against the expected outcome.

Advantages of black box testing
 Tests are done from a user's point of view and will help in exposing discrepancies in the specifications.
 The tester need not know programming languages or how the software has been implemented.


 Tests can be conducted by a body independent from the developers, allowing for an objective perspective and the avoidance of developer bias.

 Test cases can be designed as soon as the specifications are complete.

Disadvantages of black box testing

 Only a small number of possible inputs can be tested, and many program paths will be left untested.

 Without clear specifications, which is the situation in many projects, test cases are difficult to design.

 Tests can be redundant if the software designer/developer has already run a test case.

 Ever wondered why a soothsayer closes the eyes when foretelling events? So is almost the case in Black Box Testing.

Techniques for black box testing

1. Requirement Based Testing
2. Boundary Value Analysis
3. Equivalence Partitioning

1) Requirement based testing

 Requirements-based testing is a testing approach in which test cases, conditions and data are derived from requirements. It includes functional tests and also non-functional attributes such as performance, reliability or usability.

Stages in Requirements based Testing:

 Defining Test Completion Criteria - Testing is completed only when all the functional and non-functional testing is complete.
 Design Test Cases - A test case has five parameters, namely the initial state or precondition, data setup, the inputs, expected outcomes and actual outcomes.
 Execute Tests - Execute the test cases against the system under test and document the results.
 Verify Test Results - Verify whether the expected and actual results match each other.
 Verify Test Coverage - Verify whether the tests cover both functional and non-functional aspects of the requirement.
 Track and Manage Defects - Any defects detected during the testing process go through the defect life cycle and are tracked to resolution. Defect statistics are maintained, which give the overall status of the project.

2) Boundary Value Analysis

 For the most part, errors are observed at the extreme ends of the input values. These extreme values, like start/end or lower/upper values, are called boundary values, and analysis of these boundary values is called "Boundary Value Analysis". It is also sometimes known as "range checking".

 Boundary value analysis is used to find the errors at the boundaries of the input domain rather than in the center of the input.

 It is a software testing technique in which the test cases are designed to include values at the boundary. If the input data is used within the boundary value limits, it is said to be Positive Testing. If the input data is picked outside the boundary value limits, it is said to be Negative Testing.

 Each boundary has a valid boundary value and an invalid boundary value. Test cases are designed based on both valid and invalid boundary values. Typically, we choose one test case from each boundary.

 Boundary value analysis is a black box technique that also applies to white box testing. Internal data structures like arrays, stacks and queues need to be checked for boundary or limit conditions; when linked lists are used as internal structures, the behavior of the list at the beginning and at the end has to be tested thoroughly.

 Boundary value analysis helps identify the test cases that are most likely to uncover defects.

 For example: Suppose a very important tool at the office accepts a valid User Name and Password to work, with a minimum of 8 characters and a maximum of 12 characters. Valid range: 8-12; invalid ranges: 7 or fewer characters, and 13 or more characters.

 Test Case 1: Consider password length less than 8.
 Test Case 2: Consider password of length exactly 8.
 Test Case 3: Consider password of length between 9 and 11.
 Test Case 4: Consider password of length exactly 12.
 Test Case 5: Consider password of length more than 12.

Test cases for an application whose input box accepts numbers between 1-1000. Valid range: 1-1000; invalid ranges: 0 or less, and 1001 or more.

• Test Case 1: Consider test data exactly at the boundaries of the input domain, i.e. values 1 and 1000.

• Test Case 2: Consider test data with values just below the extreme edges of the input domain, i.e. values 0 and 999.

• Test Case 3: Consider test data with values just above the extreme edges of the input domain, i.e. values 2 and 1001.

3) Equivalence Partitioning

 Equivalence partitioning is a software technique that involves identifying a small set of representative input values that produce as many different output conditions as possible.

 This reduces the number of permutations and combinations of input and output values used for testing, thereby increasing the coverage and reducing the effort involved in testing.

 The set of input values that generate one single expected output is called a partition.

 When the behavior of the software is the same for a set of values, the set is termed an equivalence class or partition.

 Example: A life insurance company has a base premium of $0.50 for all ages. Based on the age group, an additional monthly premium has to be paid, as listed in the table below. For example, a person aged 34 has to pay a premium = $0.50 + $1.65 = $2.15.

Age Group | Additional Premium
Under 35  | $1.65
35-59     | $2.87
60+       | $6.00

 Based on the equivalence partitioning technique, the equivalence partitions based on age are given below:

1. Below 35 years of age (valid input)
2. Between 35 and 59 years of age (valid input)
3. Age 60 and above (valid input)
4. Negative age (invalid input)
5. Age as 0 (invalid input)
6. Age as any three-digit number (valid input)
Unit 2: Types and Levels of Testing

CO2 : Prepare test cases for different types and levels of testing.

Levels of Testing

Unit Testing
Unit Testing is a level of software testing where individual units/components of a software are tested. The purpose is to validate that each unit of the software performs as designed.
Unit Testing is the first level of testing and is performed prior to Integration Testing.
A unit is the smallest testable part of software. It usually has one or a few inputs and usually a single output.
It is executed by the Developer.
Unit Testing is performed by using the White Box Testing method.
Example: A function, method, loop or statement in a program is working fine.

Drivers
Drivers are used in the bottom-up integration testing approach.
A driver can simulate the behavior of an upper-level module that is not integrated yet.
Driver modules act as a temporary replacement of a module and act as the actual product.
Drivers are also used to interact with external systems and are usually more complex than stubs.
Driver: Calls the Module to be tested.
Now suppose you have modules B and C ready, but module A, which calls functions from modules B and C, is not ready. The developer will write a dummy piece of code for module A which will return values to modules B and C. This dummy piece of code is known as a driver.

Stubs
Stubs are used in top-down integration testing.
A stub can simulate the behavior of lower-level modules that are not integrated.
Stubs act as a temporary replacement of a module and provide the same output as the actual product.
Stubs are also used when there is a need to interact with an external system.
Stub: Is called by the Module under Test.
Assume you have 3 modules: Module A, Module B and Module C. Module A is ready and we need to test it, but module A calls functions from Modules B and C which are not ready, so the developer will write a dummy module which simulates B and C and returns values to module A. This dummy module code is known as a stub.

Importance of Stubs and Drivers

Course Co-ordinator: Mrs. Deshmukh A.P. M.M.Polytechnic , Thergaon Page 1

1. Stubs and drivers work as a substitute for the missing or unavailable module.
2. They are specifically developed for each module and have different functionalities.
3. Generally, developers and unit testers are involved in the development of stubs and drivers.
4. Their most common use is seen in incremental integration testing, where stubs are used in the top-down approach and drivers in the bottom-up approach.

Benefits of Unit Testing
Unit testing increases confidence in changing/maintaining code. If good unit tests are written and if they are run every time any code is changed, we will be able to promptly catch any defects introduced due to the change.
Code is more reusable.
Development is faster.
The cost of fixing a defect detected during unit testing is lower in comparison to that of defects detected at higher levels.
Debugging is easy.

Integration Testing

Integration Testing is a level of software testing where individual units are combined and tested as a group.

In integration testing, individual software modules are integrated logically and tested as a group.

Integration testing tests integration or interfaces between components, interactions with different parts of the system such as the operating system, file system and hardware, or interfaces between systems.

As displayed in the image below, when two different modules ‘Module A’ and ‘Module B’ are integrated, integration testing is done.

1. Incremental Approach:

In this approach, testing is done by joining two or more modules that are logically related. Then the other related modules are added and tested for proper functioning. The process continues until all of the modules are joined and tested successfully.

This process is carried out by using dummy programs called stubs and drivers. Stubs and drivers do not implement the entire programming logic of the software module but just simulate data communication with the calling module.

Stub: Is called by the Module under Test.

Driver: Calls the Module to be tested.

2. Non-Incremental Integration

 The non-incremental approach is also known as "Big-Bang" testing.
 Big Bang Integration Testing is an integration testing strategy wherein all units are linked at once, resulting in a complete system.
 When this type of testing strategy is adopted, it is difficult to isolate any errors found, because attention is not paid to verifying the interfaces across individual units.
 In Big Bang integration testing all components or modules are integrated simultaneously, after which everything is tested as a whole. As per the image below, all the modules from ‘Module 1’ to ‘Module 6’ are integrated simultaneously and then testing is carried out.


3. Top down Integration

The strategy in top-down integration is to look at the design hierarchy from top to bottom. Start with the high-level modules and move downward through the design hierarchy. In this approach testing is conducted from the main module to the sub modules. If a sub module is not developed, a temporary program called a STUB is used to simulate the sub module. Modules subordinate to the top module are integrated in the following two ways:
1. Depth first Integration: In this type, all modules on a major control path of the design hierarchy are integrated first. In the example shown in the figure, modules 1, 2, 4/5 will be integrated first. Next, modules 1, 3, 6 will be integrated.
2. Breadth first Integration: In this type, all modules directly subordinate at each level, moving across the design hierarchy horizontally, are integrated first. In the example shown in the figure, modules 2 and 3 will be integrated first. Next, modules 4, 5 and 6 will be integrated.

Procedure:

The procedure for the top-down integration process is discussed in the following steps:
1. Start with the top or initial module in the software. Substitute stubs for all the subordinates of the top module. Test the top module.
2. After testing the top module, stubs are replaced one at a time with the actual modules for integration.
3. Perform testing on this recently integrated environment.
4. Regression testing may be conducted to ensure that new errors have not appeared.
5. Repeat steps 2-4 for the whole design hierarchy.

Advantages:
 Advantageous if major flaws occur toward the top of the program.
 Once the I/O functions are added, representation of test cases is easier.
 An early skeletal program allows demonstrations and boosts morale.

Disadvantages:
 Stub modules must be produced.
 Stub modules are often more complicated than they first appear to be.
 Before the I/O functions are added, representation of test cases in stubs can be difficult.
 Test conditions may be impossible, or very difficult, to create.
 Observation of test output is more difficult.
 Allows one to think that design and testing can be overlapped.
 Induces one to defer completion of the testing of certain modules.

4. Bottom up Integration

In this approach testing is conducted from the sub modules to the main module. If the main module is not developed, a temporary program called a DRIVER is used to simulate the main module.

Advantages:
 Advantageous if major flaws occur toward the bottom of the program.
 Test conditions are easier to create.
 Observation of test results is easier.

Disadvantages:
 Driver modules must be produced.
 The program as an entity does not exist until the last module is added.


 Critical modules (at the top level of the software architecture), which control the flow of the application, are tested last and may be prone to defects.
 An early prototype is not possible.

5. Bi-Directional / Sandwich Integration Testing

1. Bi-directional integration is a kind of integration testing process that combines top-down and bottom-up testing.
2. With experience in delivering bi-directional testing projects, custom software development services can provide good-quality deliverables right from the software development process.
3. Bi-directional integration testing is a vertical incremental testing strategy that tests the bottom layers and the top layers, and then tests the integrated system.
4. Using stubs, it tests the user interface in isolation, and using drivers it tests the very lowest-level functions.
5. Bi-directional integration testing combines bottom-up and top-down testing.
6. Bottom-up testing is a process where lower-level modules are integrated and then tested.
7. This process is repeated until the component at the top of the hierarchy is analyzed. It helps find bugs easily.
8. Top-down testing is a process where the top integrated modules are tested and the procedure is continued till the end of the related modules.
9. Top-down testing helps developers find missing branch links easily.
10. This technique is called Sandwich Integration.

Advantages:
1. The sandwich approach is useful for very large projects having several subprojects.
2. Both the top-down and bottom-up approaches start at the same time, as per the development schedule.
3. Units are tested and brought together to make a system; integration is done downwards.

Disadvantages:
1. It requires a very high cost for testing, because one part uses the top-down approach while another part uses the bottom-up approach.
2. It cannot be used for smaller systems with huge interdependence between the modules. It makes sense only when the individual subsystems are as good as complete systems.

Unit test | Integration test
The idea behind Unit Testing is to test each part of the program and show that the individual parts are correct. | The idea behind Integration Testing is to combine modules in the application and test them as a group to see that they are working fine.
It is a kind of White Box Testing. | It is a kind of Black Box Testing.
It can be performed at any time. | It is usually carried out after Unit Testing and before System Testing.
Unit Testing tests only the functionality of the units themselves and may not catch integration errors or other system-wide issues. | Integration testing may detect errors when modules are integrated to build the overall system.
It starts with the module specification. | It starts with the interface specification.
It pays attention to the behavior of single modules. | It pays attention to integration among modules.
A unit test does not verify whether your code works with external dependencies correctly. | Integration tests verify that your code works with external dependencies correctly.


It is usually executed by the developer. | It is usually executed by a test team.
Finding errors is easy. | Finding errors is difficult.
Maintenance of unit tests is cheap. | Maintenance of integration tests is expensive.

Performance Testing

Performance Testing is a type of testing to ensure software applications will perform well under their expected workload.

A software application's performance, such as its response time, reliability, resource usage and scalability, does matter.

The goal of Performance Testing is not to find bugs but to eliminate performance bottlenecks.

The focus of Performance Testing is checking a software program's:

 Speed - Determines whether the application responds quickly.
 Scalability - Determines the maximum user load the software application can handle.
 Stability - Determines whether the application is stable under varying loads.

Test objectives frequently include the following:

Response time. For example, the product catalog must be displayed in less than 3 seconds.

Throughput. For example, the system must support 100 transactions per second.

Resource utilization. A frequently overlooked aspect is the amount of resources your application is consuming, in terms of processor, memory, disk input/output (I/O), and network I/O.

1. Load Testing

Load Testing is a type of performance testing that checks the system by constantly increasing the load on the system until the load reaches its threshold value.
Here, increasing load means increasing the number of concurrent users and transactions, and checking the behavior of the application under test.
It is normally carried out in a controlled environment in order to distinguish between two different systems.
The main purpose of load testing is to monitor the response time and staying power of the application when the system is performing under heavy load.
Load testing is successfully executed only if the specified test cases are executed without any error in the allocated time.
Load testing is testing the software under the customer-expected load.
In order to perform load testing on the software, you feed it all that it can handle. Operate the software with the largest possible data files.
If the software operates on peripherals such as printers or communication ports, connect as many as you can.
If you are testing an internet server that can handle thousands of simultaneous connections, do it. With most software it is important for it to run over long periods; some software should be able to run forever without being restarted. So time acts as an important variable.
Load testing can best be applied with the help of automation tools.

Simple examples of load testing:
Testing a printer by sending it a large job.
Editing a very large document to test a word processor.
Continuously reading and writing data to the hard disk.
Running multiple applications simultaneously on a server.
Testing a mail server by accessing thousands of mailboxes.
In zero-volume testing, the system is fed with zero load.

2. Stress Testing

Stress Testing is a performance testing type to check the stability of software when hardware resources are not sufficient, like CPU, memory, disk space etc.
It is performed to find the upper limit capacity of the system and also to determine how the system performs if the current load goes well above the expected maximum.
The main parameters to focus on during stress testing are "Response Time" and "Throughput".
Stress testing is negative testing, where we load the software with a large number of concurrent users/processes which cannot be handled by the system's hardware resources.
This testing is also known as fatigue testing.
Stress testing is testing the software under less-than-ideal conditions. So subject your software to low memory, low disk space, slow CPUs, slow modems and so on. Look at your software and determine what external resources and dependencies it has.


Stress testing is simply limiting those resources to the bare minimum. With stress testing you starve the software.
For example, word processor software running on your computer with all available memory and disk space works fine. But if the system runs low on resources, you have a greater potential to expect a bug. Setting the values to zero or near zero will make the software execute a different path as it attempts to handle the tight constraint. Ideally the software would run without crashing or losing data.

3. Security Testing

Security testing is a testing technique to determine whether an information system protects data and maintains functionality as intended.

It also aims at verifying 6 basic principles, as listed below:
 Confidentiality
 Integrity
 Authentication
 Authorization
 Availability
 Non-repudiation

 Confidentiality
A security measure which protects against the disclosure of information to parties other than the intended recipient; it is by no means the only way of ensuring security.
 Integrity
Integrity of information refers to protecting information from being modified by unauthorized parties.
 Authentication
This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labeling claims to be, or assuring that a computer program is a trusted one.
 Authorization
The process of determining that a requester is allowed to receive a service or perform an operation. Access control is an example of authorization.
 Availability
Assuring that information and communications services will be ready for use when expected. Information must be kept available to authorized persons when they need it.
 Non-repudiation (acknowledgment)
In reference to digital security, non-repudiation means ensuring that a transferred message has been sent and received by the parties claiming to have sent and received it. Non-repudiation is a way to guarantee that the sender of a message cannot later deny having sent the message and that the recipient cannot deny having received the message.

Example:
 A Student Management System is insecure if the ‘Admission’ branch can edit the data of the ‘Exam’ branch.
 An ERP system is not secure if a DEO (data entry operator) can generate ‘Reports’.
 An online shopping mall has no security if the customer’s credit card details are not encrypted.
 A custom software possesses inadequate security if an SQL query retrieves the actual passwords of its users.

4. Client Server Testing

i) This type of testing is usually done for 2-tier applications (usually developed for a LAN). Here we have a front-end and a back-end.
ii) The application launched on the front-end will have forms and reports which will be monitoring and manipulating data. E.g.: applications developed in VB, VC++, Core Java, C, C++, D2K, Power Builder etc.
iii) The back-end for these applications would be MS Access, SQL Server, Oracle, Sybase, MySQL, Quadbase.
iv) The tests performed on these types of applications would be: user interface testing, manual support testing, functionality testing, compatibility testing and configuration testing, and intersystem testing.

The approaches used for client server testing are:
1. User interface testing: User interface testing is a testing technique used to identify the presence of defects in a product/software under test by using the graphical user interface [GUI]. GUI Testing - Characteristics:
i) A GUI is a hierarchical, graphical front end to the application and contains graphical objects with a set of properties.
ii) During execution, the values of the properties of each object of a GUI define the GUI state.
iii) It has capabilities to exercise GUI events like key presses/mouse clicks.
iv) It is able to provide inputs to the GUI objects.
v) It checks the GUI representations to see if they are consistent with the expected ones.
vi) It strongly depends on the technology used.
2. Manual testing: Manual testing is a testing process that is carried out manually to find defects without the usage of tools or automation scripting. A test plan document is prepared that acts as a guide to the testing process to achieve complete test coverage. The testing techniques that are performed manually during the test life cycle are Acceptance Testing, White Box Testing, Black Box Testing, Unit Testing, System Testing and Integration Testing.
3. Functional testing: Functional Testing is a testing technique that is used to test the features/functionality of the system or software. It should cover all the scenarios, including failure paths and boundary cases.
There are two major Functional Testing techniques as shown below:


4. Compatibility testing: Compatibility testing is a non-functional testing conducted on the application to evaluate the application's compatibility within different environments. It can be of two types - forward compatibility testing and backward compatibility testing.

1. Forward Compatibility Testing: This type of testing verifies that the software is compatible with newer or upcoming versions, and is thus named forward compatible.
2. Backward Compatibility Testing: This type of testing helps to check whether an application designed using the latest version of an environment also works seamlessly in an older version. It is performed to check the behavior of the hardware/software with older versions of the hardware/software.

Operating system Compatibility Testing - Linux, Mac OS, Windows
Database Compatibility Testing - Oracle, SQL Server
Browser Compatibility Testing - IE, Chrome, Firefox
Other System Software - Web server, networking/messaging tool, etc.

Acceptance Testing

Acceptance Testing is a level of software testing where a system is tested for acceptability.

The purpose of this test is to evaluate the system’s compliance with the business requirements and assess whether it is acceptable for delivery.

Usually, the Black Box Testing method is used in Acceptance Testing.

Acceptance Testing is performed after System Testing and before making the system available for actual use.

The acceptance test cases are executed against the test data or using an acceptance test script, and then the results are compared with the expected ones.

The goal of acceptance testing is to establish confidence in the system.

Acceptance testing is most often focused on a validation type of testing.

Acceptance Criteria

Acceptance criteria are defined on the basis of the following attributes:

Functional Correctness and Completeness
Data Integrity
Data Conversion
Usability
Performance
Timeliness
Confidentiality and Availability
Installability and Upgradability
Scalability
Documentation

Types Of Acceptance Testing

User Acceptance test
Operational Acceptance test
Contract Acceptance testing
Compliance acceptance testing


User Acceptance test

It focuses mainly on the functionality, thereby validating the fitness-for-use of the system by the business user. The user acceptance test is performed by the users and application managers.

Operational Acceptance test

Also known as the production acceptance test, it validates whether the system meets the requirements for operation. In most organizations the operational acceptance test is performed by the system administrators before the system is released. The operational acceptance test may include testing of backup/restore, disaster recovery, maintenance tasks and periodic checks of security vulnerabilities.

Contract Acceptance testing

It is performed against the contract’s acceptance criteria for producing custom developed software. Acceptance should be formally defined when the contract is agreed.

Compliance acceptance testing

Also known as regulation acceptance testing, it is performed against the regulations which must be adhered to, such as governmental, legal or safety regulations.

Advantages Of Acceptance Testing

The functions and features to be tested are known.
The details of the tests are known and can be measured.
The tests can be automated, which permits regression testing.
The progress of the tests can be measured and monitored.
The acceptability criteria are known.

Disadvantages Of Acceptance Testing

Requires significant resources and planning.
The tests may be a re-implementation of system tests.
It may not uncover subjective defects in the software, since you are only looking for defects you expect to find and would not want to have in your final, released version of the application.

Alpha Testing

Alpha Testing is a type of testing conducted by a team of highly skilled testers at the development site. Minor design changes can still be made as a result of alpha testing.

For Alpha Testing there is a dedicated test team.

Alpha testing is the final testing before the software is released to the general public. It has two phases:

 In the first phase of alpha testing, the software is tested by in-house developers. They use either debugger software or hardware-assisted debuggers. The goal is to catch bugs quickly.

 In the second phase of alpha testing, the software is handed over to the software QA staff for additional testing in an environment that is similar to the intended use.

 Pros Of Alpha Testing
• Helps to uncover bugs that were not found during previous testing activities
• Better view of product usage and reliability
• Analyze possible risks during and after the launch of the product
• Helps to be prepared for future customer support
• Helps to build customer faith in the product
• Maintenance cost reduction, as the bugs are identified and fixed before the Beta / Production launch
• Easy test management

 Cons Of Alpha Testing
 Not all the functionality of the product is expected to be tested
 Only business requirements are scoped

Beta Testing

Beta Testing is also known as field testing. It takes place at the customer’s site. It sends the system/software to users who install it and use it under real-world working conditions.

A beta test is the second phase of software testing, in which a sampling of the intended audience tries the product out.

The goal of beta testing is to place your application in the hands of real users outside of your own engineering team to discover any flaws or issues from the user’s perspective.

Beta testing can be considered "pre-release testing".

Course Co-ordinator: Mrs. Deshmukh A.P. M.M.Polytechnic , Thergaon Page 15 Course Co-ordinator: Mrs. Deshmukh A.P. M.M.Polytechnic , Thergaon Page 16
Unit 2: Types and Levels of Testing

Advantages of beta testing
• You have the opportunity to get your application into the hands of users prior to releasing it to the general public.
• Users can install and test your application, and send feedback to you during this beta testing period.
• Your beta testers can discover issues with your application that you may not have noticed, such as confusing application flow, and even crashes.
• Using the feedback you get from these users, you can fix problems before the application is released to the general public.
• The more issues you fix that solve real user problems, the higher the quality of your application when you release it to the general public.
• Having a higher-quality application when you release to the general public will increase customer satisfaction.
• These users, who are early adopters of your application, will generate excitement about your application.

Sr.No. | Alpha testing | Beta testing
1 | Performed at the developer's site | Performed at the end user's site
2 | Performed in a controlled environment, in the developer's presence | Performed in an uncontrolled environment, in the developer's absence
3 | Less probability of finding errors, as it is driven by the developer | High probability of finding errors, as it is used by end users
4 | Done during the implementation phase of the software | Done at the pre-release of the software
5 | Not considered a live application of the software | Considered a live application of the software
6 | Less time consuming, as the developer can make necessary changes in the given time | More time consuming, as users have to report bugs via appropriate channels
7 | Involves both white box and black box testing | Typically uses black box testing only
8 | Long execution cycles may be required | Only a few weeks of execution are required

Special tests

1. Regression Testing

Regression Testing is a type of software testing performed to confirm that a recent program or code change has not adversely affected existing features.

Regression Testing is nothing but a full or partial selection of already executed test cases, which are re-executed to ensure that existing functionalities work fine.

This testing is done to make sure that new code changes do not have side effects on the existing functionalities. It ensures that the old code still works once the new code changes are done.

Regression Testing is required when there is a:
• Change in requirements, and code is modified according to the requirement
• New feature added to the software
• Defect fix
• Performance issue fix

Retest All
This is one of the methods for Regression Testing, in which all the tests in the existing test bucket or suite are re-executed. This is very expensive, as it requires huge time and resources.


Regression Test Selection
Instead of re-executing the entire test suite, it is better to select the part of the test suite to be run. The test cases selected can be categorized as 1) reusable test cases and 2) obsolete test cases. Reusable test cases can be used in succeeding regression cycles; obsolete test cases cannot.

Prioritization of Test Cases
Prioritize the test cases depending on business impact and on critical and frequently used functionalities. Selecting test cases based on priority will greatly reduce the regression test suite.

Selecting test cases for regression testing
It was found from industry data that a good number of the defects reported by customers were due to last minute bug fixes creating side effects; hence, selecting test cases for regression testing is an art, and not that easy. Effective regression tests can be done by selecting the following test cases:
• Test cases which have frequent defects
• Functionalities which are more visible to the users
• Test cases which verify core features of the product
• Test cases of functionalities which have undergone more and recent changes
• All integration test cases
• All complex test cases
• Boundary value test cases
• A sample of successful test cases
• A sample of failed test cases

Regression Testing Tools
If your software undergoes frequent changes, regression testing costs will escalate. In such cases, manual execution of test cases increases test execution time as well as costs. The following are the most important tools used for both functional and regression testing:
• Selenium: an open source tool used for automating web applications. Selenium can be used for browser-based regression testing.
• Quick Test Professional (QTP): HP Quick Test Professional is automated software designed to automate functional and regression test cases. It uses the VBScript language for automation. It is a data-driven, keyword-based tool.
• Rational Functional Tester (RFT): IBM's Rational Functional Tester is a Java tool used to automate the test cases of software applications. It is primarily used for automating regression test cases, and it also integrates with Rational Test Manager.

2. GUI Testing

There are two types of interfaces for a computer application. A Command Line Interface (CLI) is where you type text and the computer responds to that command. GUI stands for Graphical User Interface, where you interact with the computer using images rather than text.

GUI testing is the process of testing the Graphical User Interface of the Application Under Test. GUI testing involves checking the screens with controls like menus, buttons, icons, and all types of bars (toolbar, menu bar), dialog boxes, windows, etc.

The GUI is what the user sees. A user does not see the source code; only the interface is visible to the user. The focus is especially on the design structure and on whether the images work properly.

GUI Testing Guidelines
1. Check screen validations
2. Verify all navigations
3. Check usability conditions
4. Verify data integrity
5. Verify the object states
6. Verify the date field and numeric field formats

The following are the GUI elements which can be used for interaction between the user and the application:
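The prioritization guidance above can be sketched in code. The snippet below is an illustrative sketch only, not part of the notes: the test-case fields (`business_impact`, `defect_prone`, `recently_changed`) and the scoring weights are assumptions chosen to mirror the selection criteria listed earlier.

```python
# Hypothetical sketch: rank test cases for a regression cycle.
# Fields and weights are illustrative assumptions, not a standard.

def select_regression_suite(test_cases, max_cases):
    """Pick the highest-priority test cases for a regression cycle."""
    def priority(tc):
        score = tc["business_impact"]                  # 1 (low) .. 3 (high)
        score += 2 if tc["defect_prone"] else 0        # frequent-defect areas
        score += 1 if tc["recently_changed"] else 0    # recently changed code
        return score
    return sorted(test_cases, key=priority, reverse=True)[:max_cases]

suite = [
    {"id": "TC1", "business_impact": 3, "defect_prone": True,  "recently_changed": True},
    {"id": "TC2", "business_impact": 1, "defect_prone": False, "recently_changed": False},
    {"id": "TC3", "business_impact": 2, "defect_prone": True,  "recently_changed": False},
]
print([tc["id"] for tc in select_regression_suite(suite, 2)])  # ['TC1', 'TC3']
```

In a real project the scores would come from the TCDB (defect history, change logs) rather than hand-set flags.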


In addition to functionality, GUI testing evaluates design elements such as layout, colors, fonts, font sizes, labels, text boxes, text formatting, captions, buttons, lists, icons, links and content. GUI testing processes can be either manual or automated, and are often performed by third-party companies rather than by developers or end users.

Example: Consider any website, like MSBTE, Google or Yahoo, or any login form, or the GUI of any application to be tested. GUI testing includes the following:

• All colors used for backgrounds, controls and fonts have a major impact on users. Wrong color combinations and bright colors may increase user fatigue.
• Words, fonts, alignments, scrolling pages up and down, and navigation across different hyperlinks and pages all affect usability; excessive scrolling reduces usability.
• Error messages and information given to users must be usable to the user. Reports and outputs produced, either on screen or printed, should be readable. Paper size on the printer, font and screen size should also be considered.
• Screen layout, in terms of the number of instructions to users, the number of controls and the number of pages, is defined in low level design. More controls on a single page and more pages reduce usability.
• The types of control on a single page matter greatly for usability.
• The number of images on a page or of moving parts on screen may affect performance. These are high-priority defects. This has a direct relationship with usability testing and with the look and feel of an application; it affects the emotions of users and can improve the acceptability of an application.

Advantages of GUI Testing:
• A good GUI improves the look and feel of the application and improves the user's psychological acceptance of the application.
• The GUI represents the presentation layer of an application. A good GUI helps an application by giving users a better experience.
• Consistency of screen layouts and designs improves the usability of an application.

Disadvantages of GUI Testing:
• Testing is difficult when the number of pages is large and the number of controls on a single page is huge.
• Special application testing, such as for applications made for blind people or for kids below the age of five, may need special training for testers.
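Guideline 6 above (date and numeric field formats) is straightforward to automate. A minimal sketch, assuming a DD/MM/YYYY date layout and plain decimal numerics; a real application would substitute its own locale rules.

```python
# Sketch: field-format checks a GUI tester might automate.
# The DD/MM/YYYY and plain-decimal formats are assumptions.
import re

DATE_RE = re.compile(r"^\d{2}/\d{2}/\d{4}$")   # e.g. 25/12/2023
NUMERIC_RE = re.compile(r"^-?\d+(\.\d+)?$")    # integer or decimal

def validate_date_field(text):
    return bool(DATE_RE.match(text))

def validate_numeric_field(text):
    return bool(NUMERIC_RE.match(text))

print(validate_date_field("25/12/2023"))   # True
print(validate_date_field("2023-12-25"))   # False: wrong layout
print(validate_numeric_field("-42.5"))     # True
```

A GUI automation tool (Selenium, QTP, RFT) would read the field text from the screen and feed it to checks like these.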

Unit 3: Test Management

CO3 : Prepare test plan for an application.

Test Plan
A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, the entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

Steps for preparing a test plan
• Analyze the product (learn the product thoroughly)
• Develop the test strategy: define the scope of testing, risks and issues
• Define the objective of the test
• Define the test criteria
• Plan the resources
• Plan the test environment
• Schedule and cost
• Test deliverables

Test deliverables include:
• Scope
• Methodology
• Requirements
• Criteria for pass-fail
• Schedule

TEST PLAN TYPES
• Master Test Plan: a single high-level test plan for a project/product that unifies all other test plans.
• Testing Level Specific Test Plans: plans for each level of testing.
– Unit Test Plan
– Integration Test Plan
– System Test Plan
– Acceptance Test Plan
• Testing Type Specific Test Plans: plans for major types of testing, like a Performance Test Plan and a Security Test Plan.

TEST PLAN GUIDELINES
• Make the plan concise. Avoid redundancy and superfluousness. If you think you do not need a section that has been mentioned in the template, go ahead and delete that section from your test plan.
• Be specific. For example, when you specify an operating system as a property of a test environment, mention the OS edition/version as well, not just the OS name.
• Make use of lists and tables wherever possible. Avoid lengthy paragraphs.
• Have the test plan reviewed a number of times prior to baselining it or sending it for approval. The quality of your test plan speaks volumes about the quality of the testing you or your team is going to perform.
• Update the plan as and when necessary. An outdated and unused document is worse than not having the document in the first place.

Deciding test approach
Like any project, testing should be driven by a plan. The test plan acts as the anchor for the execution, tracking and reporting of the entire testing project. Activities of test planning:
1. Scope management: deciding what features are to be tested and not to be tested.
2. Deciding test approach/strategy: which types of testing shall be done, like configuration, integration, localization etc.
3. Setting up criteria for testing: there must be clear entry and exit criteria for the different phases of testing. The test strategies for the various features and combinations determine how these features and combinations will be tested.
4. Identifying responsibilities, staffing and training needs.
5. Identifying resource requirements.


6. Identifying test deliverables
7. Testing tasks: size and effort estimation

Setting up criteria for testing
There must be clear entry and exit criteria, pass or fail criteria, and suspend and resume criteria for the different phases of testing. The test strategies for the various features and combinations determine how these features and combinations will be tested.
• Pass or fail: specify the criteria that will be used to determine whether each test item has passed or failed testing.
• Suspend criteria: specify the criteria to be used to suspend test activity.
• Resume criteria: specify the testing activities which must be redone when testing is resumed.

Identifying Responsibilities
A testing project requires different people to play different roles: there are roles for test engineers, test leads and test managers. There is also role definition along the dimensions of the modules being tested or the type of testing. These different roles should complement each other. The different role definitions should:
• Ensure there is clear accountability for a given task, so that each person knows what he or she has to do;
• Clearly list the responsibilities for various functions to various people, so that everyone knows how his or her work fits into the entire project;
• Complement each other, ensuring no one steps on another's toes;
• Supplement each other, so that no task is left unassigned.
Role definition should not only address technical roles, but also list the management and reporting responsibilities. This includes the frequency, format and recipients of status reports and other project-tracking mechanisms.

Staff training
This activity of test planning gives an idea about the following points:
1. How many staff need training?
2. Who are the attendees?
3. What training needs to be given?
4. What are the prerequisites of the training?
5. How long will the training be?
6. Where will the training be conducted?

Resource requirements
Factors to be considered while selecting the resource requirements are:
• People: How many people are required? How much experience should they possess? What kind of experience is needed? What should they be expert in? Should they be full-time, part-time, contract, or students?
• Equipment: How many computers are required? What configuration will be required? What kind of test hardware is needed? Any other devices, like printers or tools?
• Office and lab space: Where will they be located? How big will they be? How will they be arranged?
• Software: Word processors, databases, custom tools. What will be purchased, and what needs to be written?
• Outsource companies: Will they be used? What criteria will be used for choosing them? How much will they cost?
• Miscellaneous supplies: Disks, phones, reference books, training material. What else might be necessary over the course of the project?
The specific resource requirements are very project-, team-, and company-dependent, so the test planning effort will need to carefully evaluate what will be needed to test the software.

Test Deliverables and Milestones
Test deliverables are the artifacts which are given to the stakeholders of a software project during the software development lifecycle. There are different test deliverables at every phase of the software development lifecycle. Some test deliverables are provided before the testing phase, some during the testing phase and some after the testing cycle is over.
The different types of test deliverables are:
• Test case documents
• Test plan
• Testing strategy
• Test scripts
• Test data
• Test traceability matrix
• Test results/reports
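The suspend/resume criteria described under "Setting up criteria for testing" can be made concrete in code. This is a sketch under stated assumptions: the 20% failure-rate threshold and the open-blocker rule are hypothetical examples of criteria a team might choose.

```python
# Hypothetical suspension criterion: suspend when a blocker defect is open,
# or when the failure rate of the current cycle crosses a chosen threshold.

def should_suspend(results, max_failure_rate=0.2, blocker_open=False):
    """results is a list of 'pass'/'fail' outcomes for the executed tests."""
    if blocker_open:
        return True
    if not results:
        return False
    failures = sum(1 for r in results if r == "fail")
    return failures / len(results) > max_failure_rate

print(should_suspend(["pass", "pass", "fail", "fail", "fail"]))  # True (60% failed)
print(should_suspend(["pass"] * 9 + ["fail"]))                   # False (10% failed)
```

The matching resume criterion would then require the blocker to be closed and the failed tests to be re-run.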


• Test summary report
• Install/config guides
• Defect reports
• Release notes

The test plan describes the overall method to be used to verify that the software meets the product specification and the customer's needs. It includes the quality objectives, resource needs, schedules, assignments, methods, and so forth.

Test cases list the specific items that will be tested and describe the detailed steps that will be followed to verify the software.

Bug reports describe the problems found as the test cases are followed. These could be done on paper, but are often tracked in a database.

Test tools and automation used to test the software are listed and described. If the team is using automated methods to test the software, the tools used, either purchased or written in-house, must be documented.

Metrics, statistics, and summaries convey the progress being made as the test work progresses. They take the form of graphs, charts, and written reports.

Milestones are the dates of completion given for the various tasks to be performed in testing. These are thoroughly tracked by the test manager and are kept in documents such as Gantt charts.

Test Management
Test management is concerned with both test resource and test environment management. It is the role of test management to ensure that new or modified service products meet the business requirements for which they have been developed or enhanced.

1) Test Infrastructure Management
Testing requires a robust infrastructure to be planned upfront. This infrastructure is made up of three essential elements.

• A test case database (TCDB): a test case database captures all the relevant information about the test cases in an organization. Some of the entities and their attributes are given in the following table.

Entity | Purpose | Attributes
Test case | Records all the "static" information about the tests | Test case ID; Test case name (filename); Test case owner; Associated files for the test case
Test case-product cross-reference | Provides a mapping between the tests and the corresponding product features; enables identification of tests for a given feature | Test case ID; Module ID
Test case run history | Gives the history of when a test was run and what the result was; provides inputs on the selection of tests for regression runs | Test case ID; Run date; Time taken; Run status (success/failure)
Test case-defect cross-reference | Gives details of test cases introduced to test certain specific defects detected in the product; provides inputs on the selection of tests for regression runs | Test case ID; Defect reference # (points to a record in the defect repository)

• Defect Repository

Entity | Purpose | Attributes
Defect details | Records all the "static" information about the defects | Defect ID; Defect priority/severity; Defect description; Affected product(s); Any relevant version information (for example, OS version); Customers who encountered the problem (could also be reported by the internal testing team); Date and time of defect occurrence
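The TCDB entities above map naturally onto relational tables. A minimal sketch using sqlite3, covering only the run-history entity; the schema and column names are assumptions modeled on the attribute lists in the table.

```python
# Sketch: a one-table test case database (TCDB) holding run history,
# used to find recently failing tests as input for regression selection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test_case_run_history (
        test_case_id TEXT,
        run_date     TEXT,
        time_taken_s REAL,
        run_status   TEXT   -- 'success' or 'failure'
    )
""")
runs = [
    ("TC101", "2024-01-15", 12.4, "failure"),
    ("TC101", "2024-01-22", 11.9, "success"),
    ("TC205", "2024-01-22", 3.1,  "success"),
]
conn.executemany("INSERT INTO test_case_run_history VALUES (?, ?, ?, ?)", runs)

# Which test cases have ever failed? An input for regression-run selection.
failing = conn.execute(
    "SELECT DISTINCT test_case_id FROM test_case_run_history WHERE run_status = 'failure'"
).fetchall()
print(failing)  # [('TC101',)]
```

The other entities (test case, cross-references, defect repository) would be further tables joined on `test_case_id` and the defect reference.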

2) Test People Management

• People management is an integral part of any project management and test planning.
• People management also requires the ability to hire, motivate, and retain the right people.
• These skills are seldom formally taught.
• Testing projects present several additional challenges.


We believe that the success of a testing organization depends vitally on judicious people management skills.

Test Lead responsibilities and activities:
• Identify how the test teams are formed and aligned within the organization
• Decide the roadmap for the project
• Identify the scope of testing using the SRS documents
• Discuss the test plan, and have it reviewed and approved by the management/development team
• Identify the required metrics
• Calculate the size of the project and estimate the effort and the corresponding plan
• Identify skill gaps, balance resources, and identify needs for training and education
• Identify the tools for test reporting, test management and test automation
• Create a healthy environment for all resources to gain maximum throughput

Test team responsibilities and activities:
• Initiate the test plan for test case design
• Conduct review meetings
• Monitor test progress; check for resources, balancing and allocation
• Check for delays in the schedule; discuss and resolve risks, if any

Test Process

1) Base lining of test plan
The format and content of a software test plan vary depending on the processes, standards, and test management tools being implemented. Nevertheless, the following format, which is based on the IEEE standard for software test documentation, provides a summary of what a test plan can/should contain.

Test plan template
• Test Plan Identifier: Provide a unique identifier for the document. (Adhere to the Configuration Management System if you have one.)
• Introduction:
Provide an overview of the test plan.
Specify the goals/objectives.
Specify any constraints.
• References: List the related documents, with links to them if available, including the following:
1. Project Plan
2. Configuration Management Plan
• Test Items: List the test items (software/products) and their versions.
• Features to be Tested:
1. List the features of the software/product to be tested.
2. Provide references to the Requirements and/or Design specifications of the features to be tested.
• Features Not to Be Tested:
1. List the features of the software/product which will not be tested.
2. Specify the reasons these features won't be tested.
• Approach:
1. Mention the overall approach to testing.
2. Specify the testing levels [if it's a Master Test Plan], the testing types, and the testing methods [Manual/Automated; White Box/Black Box/Gray Box].
• Item Pass/Fail Criteria: Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.
• Suspension Criteria and Resumption Requirements:
1. Specify the criteria to be used to suspend the testing activity.
2. Specify the testing activities which must be redone when testing is resumed.
• Test Deliverables: List the test deliverables, and links to them if available, including the following:
– Test Plan (this document itself)
– Test Cases
– Test Scripts
– Defect/Enhancement Logs
– Test Reports
• Test Environment:
1. Specify the properties of the test environment: hardware, software, network etc.
2. List any testing or related tools.
• Estimate: Provide a summary of test estimates (cost or effort) and/or provide a link to the detailed estimation.
• Schedule: Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the detailed schedule.
• Staffing and Training Needs:
1. Specify staffing needs by role and required skills.
2. Identify training that is necessary to provide those skills, if not already acquired.
• Responsibilities: List the responsibilities of each team/role/individual.
• Risks:
1. List the risks that have been identified.
2. Specify the mitigation plan and the contingency plan for each risk.
• Assumptions and Dependencies:
1. List the assumptions that have been made during the preparation of this plan.
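One simple use of the template above is a completeness check before review. This is a sketch only: the section names follow this template, while the checker function itself is a hypothetical helper, not part of any standard.

```python
# Sketch: verify that a draft test plan fills every template section.
TEST_PLAN_SECTIONS = [
    "Test Plan Identifier", "Introduction", "References", "Test Items",
    "Features to be Tested", "Features Not to Be Tested", "Approach",
    "Item Pass/Fail Criteria", "Suspension Criteria and Resumption Requirements",
    "Test Deliverables", "Test Environment", "Estimate", "Schedule",
    "Staffing and Training Needs", "Responsibilities", "Risks",
    "Assumptions and Dependencies", "Approvals",
]

def missing_sections(plan):
    """Return the template sections the draft plan has not filled in."""
    return [s for s in TEST_PLAN_SECTIONS if not plan.get(s)]

draft = {"Test Plan Identifier": "TP-001", "Introduction": "Overview of testing ..."}
print(missing_sections(draft)[0])  # 'References'
```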


2. List the dependencies.
• Approvals:
1. Specify the names and roles of all persons who must approve the plan.
2. Provide space for signatures and dates. (If the document is to be printed.)

2) Test Case Specification
The test case specifications should be developed from the test plan and are the second phase of the test development life cycle. The test specification should explain "how" to implement the test cases described in the test plan. Test case specifications are useful as they enlist the specification details of the items. The following items are a must for each test specification:
1. Case No.: the test case number should be a three-digit identifier of the form c.s.t, where c is the chapter number, s is the section number, and t is the test case number.
2. Title: the title of the test.
3. Programme: the program name containing the test.
4. Author: the person who wrote the test specification.
5. Date: the date of the last revision to the test case.
6. Background (Objectives, Assumptions, References, Success Criteria): describes in words how to conduct the test.
7. Expected Error(s): describes any errors expected.
8. Reference(s): lists the reference documentation used to design the specification.
9. Data (Tx Data, Predicted Rx Data): describes the data flows between the Implementation under Test (IUT) and the test engine.
10. Script (Pseudo Code for Coding Tests): pseudo code (or real code) used to conduct the test.

Test Reporting

1) Executing Test Cases
Test execution is the process of executing the code and comparing the expected and actual results. The following factors are to be considered for a test execution process:
• Based on risk, select a subset of the test suite to be executed for this cycle.
• Assign the test cases in each test suite to testers for execution.
• Execute tests, report bugs, and capture test status continuously.
• Resolve blocking issues as they arise.
• Report status, adjust assignments, and reconsider plans and priorities daily.
• Report test cycle findings and status.

2) Test Reporting
Test reporting is a means of achieving communication through the testing cycle. There are three types of test reporting.
1. Test incident report: a test incident report is communication that happens through the testing cycle as and when defects are encountered. A test incident report is an entry made in the defect repository; each defect has a unique ID to identify the incident. High-impact test incidents are highlighted in the test summary report.
2. Test cycle report: a test cycle entails planning and running certain tests in cycles, each cycle using a different build of the product. As the product progresses through the various cycles, it is expected to stabilize. The test cycle report gives:
1. A summary of the activities carried out during that cycle.
2. Defects that were uncovered during that cycle, based on severity and impact.
3. Progress from the previous cycle to the current cycle in terms of defects fixed.
4. Outstanding defects that are yet to be fixed in this cycle.
5. Any variation observed in effort or schedule.
3. Test summary report: the final step in a test cycle is to recommend the suitability of a product for release. A report that summarizes the results of a test cycle is the test summary report. There are two types of test summary report: a phase-wise test summary, produced at the end of every phase, and a final test summary report.
A summary report should present:
1. Test summary report identifier.
2. Description: identify the test items being reported in this report, with test IDs.
3. Variances: mention any deviations from the test plans and test procedures, if any.
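The cycle-report contents listed above (defects by severity, progress from the previous cycle, outstanding defects) can be computed from raw defect records. An illustrative sketch; the record fields and the counts are hypothetical.

```python
# Sketch: derive test cycle report figures from defect records.
from collections import Counter

def cycle_report(defects, fixed_last_cycle, fixed_this_cycle):
    by_severity = Counter(d["severity"] for d in defects)
    outstanding = [d["id"] for d in defects if d["status"] == "open"]
    return {
        "defects_by_severity": dict(by_severity),   # uncovered, by severity
        "outstanding": outstanding,                 # not yet fixed this cycle
        "progress": fixed_this_cycle - fixed_last_cycle,
    }

defects = [
    {"id": "D1", "severity": "major", "status": "open"},
    {"id": "D2", "severity": "minor", "status": "closed"},
    {"id": "D3", "severity": "major", "status": "open"},
]
report = cycle_report(defects, fixed_last_cycle=4, fixed_this_cycle=7)
print(report["defects_by_severity"])  # {'major': 2, 'minor': 1}
```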

4. Summary of results: all the results are mentioned here, along with the resolved incidents and their solutions.
5. Comprehensive assessment and recommendation for release: should include a fit-for-release assessment and a recommendation for release.

Unit 4: Defect Management

CO4 : Identify bugs to create defect report of given application.

What is a defect?
A defect is an error or a bug in the application being created. A programmer can make mistakes or errors while designing and building the software. These mistakes or errors mean that there are flaws in the software; these are called defects.

Defects in any system may arise at various stages of the development life cycle. At each stage, the impact and cost of fixing defects depend on various aspects, including the stage at which the defect arises.

Different causes of software defects
• Miscommunication of requirements introduces errors in code
• Unrealistic time schedule for development
• Lack of designing experience
• Lack of experience with coding practices
• Human factors introduce errors in code
• Lack of version control
• Buggy third-party tools
• Last minute changes in the requirements introduce errors
• Poor software testing skills

Defect Classification
1. Severity wise
2. Work product wise
3. Type of error wise
4. Status wise
Course Coordinator : Mrs. Deshmukh A.P M.M.Polytechnic , Thergaon Page 12 Course Coordinator : Mrs. Kshirsagar S.R. M.M.Polytechnic , Thergaon Page 1

Severity Wise:
• Major: a defect which will cause an observable product failure or departure from requirements.
• Minor: a defect that will not cause a failure in execution of the product.
• Fatal: a defect that will cause the system to crash or close abruptly, or affect other applications.

Work Product Wise:
• SSD: a defect from the System Study document
• FSD: a defect from the Functional Specification document
• ADS: a defect from the Architectural Design document
• DDS: a defect from the Detailed Design document
• Source code: a defect from the source code
• Test Plan/Test Cases: a defect from the test plan/test cases
• User Documentation: a defect from the user manuals or operating manuals

Type of Error Wise:
• Comments: inadequate, incorrect, misleading or missing comments in the source code
• Computational Error: improper computation of the formulae / improper business validations in code
• Data Error: incorrect data population/update in the database
• Database Error: error in the database schema/design
• Missing Design: design features/approach missed or not documented in the design document, and hence not corresponding to requirements
• Inadequate or Suboptimal Design: the design features/approach needs additional inputs for it to be complete, or the design features described do not provide the best (optimal) approach towards the solution required
• Incorrect Design: wrong or inaccurate design
• Ambiguous Design: the design feature/approach is not clear to the reviewer; also includes ambiguous use of words or unclear design features
• Boundary Conditions Neglected: boundary conditions not addressed or incorrect
• Interface Error: interfacing errors internal or external to the application, incorrect handling of passing parameters, incorrect alignment, incorrect/misplaced fields/objects, unfriendly window/screen positions
• Logic Error: missing, inadequate, irrelevant or ambiguous functionality in the source code
• Message Error: inadequate, incorrect, misleading or missing error messages in the source code
• Navigation Error: navigation not coded correctly in the source code
• Performance Error: an error related to the performance/optimality of the code
• Missing Requirements: implicit/explicit requirements missed or not documented during the requirement phase
• Inadequate Requirements: the requirement needs additional inputs to be complete
• Incorrect Requirements: wrong or inaccurate requirements
• Ambiguous Requirements: the requirement is not clear to the reviewer; also includes ambiguous use of words, e.g. "like", "such as", "may be", "could be", "might", etc.
• Sequencing / Timing Error: errors due to incorrect/missing consideration of timeouts and improper/missing sequencing in the source code
• Standards: standards not followed, like improper exception handling, use of E & D formats, and project-related design/requirements/coding standards
• System Error: hardware and operating system related errors, memory leaks
• Test Plan / Cases Error: inadequate, incorrect, ambiguous, duplicate or missing test plans/test cases and test scripts; incorrect/incomplete test setup
• Typographical Error: spelling/grammar mistakes in documents/source code
• Variable Declaration Error: improper declaration/usage of variables, type mismatch errors in the source code

Status Wise:
• Open
• Closed
• Deferred
• Cancelled

Defect Management Process
The process of finding defects and reducing them at the lowest cost is called the Defect Management Process.

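The severity, work-product, error-type and status classifications above can be combined into a single defect record. The following Python sketch is illustrative only; the class and field names are my own, not from any specific defect-tracking tool:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    MINOR = "minor"   # will not cause a failure in execution of the product
    FATAL = "fatal"   # crashes the system or affects other applications

class Status(Enum):
    OPEN = "open"
    CLOSED = "closed"
    DEFERRED = "deferred"
    CANCELLED = "cancelled"

@dataclass
class Defect:
    defect_id: int
    summary: str
    work_product: str   # e.g. "SSD", "FSD", "Source code"
    error_type: str     # e.g. "Logic Error", "Missing Requirements"
    severity: Severity
    status: Status = Status.OPEN   # a newly logged defect starts as Open

d = Defect(1, "Timeout not considered on login", "Source code",
           "Sequencing / Timing Error", Severity.FATAL)
print(d.status.value)  # open
```

Tagging every defect with its work product and error type is what later makes the status-wise and type-wise analysis in this unit possible.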
Course Coordinator : Mrs. Kshirsagar S.R., M.M. Polytechnic, Thergaon
Unit 4: Defect Management

Defect Prevention -- Implementation of techniques, methodology and standard processes to
reduce the risk of defects.

Deliverable Baseline -- Establishment of milestones where deliverables will be considered
complete and ready for further development work. When a deliverable is baselined, any further
changes are controlled. Errors in a deliverable are not considered defects until after the
deliverable is baselined.

Defect Discovery -- Identification and reporting of defects for development team
acknowledgment. A defect is only termed discovered when it has been documented and
acknowledged as a valid defect by the development team member(s) responsible for the
component(s) in error.

Defect Resolution -- Work by the development team to prioritize, schedule and fix a defect, and
document the resolution. This also includes notification back to the tester to ensure that the
resolution is verified.

Process Improvement -- Identification and analysis of the process in which a defect originated,
to identify ways to improve the process and prevent future occurrences of similar defects. The
validation process that should have identified the defect earlier is also analyzed to determine
ways to strengthen that process.

Management Reporting -- Analysis and reporting of defect information to assist management
with risk management, process improvement and project management.

Defect Life Cycle

DEFECT LIFE CYCLE (Bug Life Cycle) is the journey of a defect from its identification to
its closure. The life cycle varies from organization to organization and is governed by the
software testing process the organization or project follows and/or the defect-tracking tool
being used.
Unit 4: Defect Management

 New: When the bug is posted for the first time, its state will be "NEW". This means
that the bug is not yet approved.
 Open: After a tester has posted a bug, the tester's lead approves that the bug is
genuine and changes the state to "OPEN".
 Assign: Once the lead changes the state to "OPEN", he assigns the bug to the
corresponding developer or developer team. The state of the bug is now changed to
"ASSIGN".
 Test/Retest: Once the developer fixes the bug, he has to assign it to the testing
team for the next round of testing. Before he releases the software with the bug fixed,
he changes the state of the bug to "TEST". This specifies that the bug has been fixed
and is released to the testing team. At this stage the tester retests the changed code
which the developer has given to him, to check whether the defect got fixed or not.
 Deferred: A bug changed to the deferred state is expected to be fixed in a later
release. Many factors can lead to this state: the priority of the bug may be low, there
may be a lack of time for the release, or the bug may not have a major effect on the
software.
 Rejected: If the developer feels that the bug is not genuine, he rejects it. The state
of the bug is then changed to "REJECTED".
 Verified: Once the bug is fixed and the status is changed to "TEST", the tester tests
the bug. If the bug is no longer present in the software, he approves that the bug is
fixed and changes the status to "VERIFIED".
 Reopened: If the bug still exists even after it was fixed by the developer, the tester
changes the status to "REOPENED". The bug traverses the life cycle once again.
 Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug
no longer exists in the software, he changes the status of the bug to "CLOSED". This
state means that the bug is fixed, tested and approved.
 Fixed: When the developer makes the necessary code changes and verifies them,
he/she can set the bug status to "FIXED", and the bug is passed to the testing team.
 Pending Retest: After fixing the defect, the developer gives the changed code to the
tester for retesting. Here the testing is pending on the tester's end, hence the status is
"pending retest".

Defect Prevention Process

 "Prevention is better than cure" applies to defects in the software development life
cycle.
 Defects, as defined by software developers, are variances from a desired attribute.
These attributes include complete and correct requirements and specifications as
drawn from the desires of potential customers.
 Thus, defects cause software to fail to meet requirements and make customers
unhappy.
 When a defect gets through during the development process, the earlier it is
diagnosed, the easier and cheaper is the rectification of the defect.
 The end result of prevention or early detection is a product with zero or minimal
defects.

Identify Critical Risks -- Identify the critical risks facing the project or system. These are the
types of defects that could jeopardize the successful construction, delivery and/or operation of
the system.

Estimate Expected Impact -- For each critical risk, make an assessment of the financial impact
if the risk becomes a problem.

Minimize Expected Impact -- Once the most important risks are identified, try to eliminate
each risk. For risks that cannot be eliminated, reduce the probability that the risk will become a
problem and the financial impact should that happen.

The five general activities of defect prevention are:

1. Software Requirements Analysis

Defects introduced during the requirements and design phase are not only more probable
but also more severe and more difficult to remove.

Front-end errors in requirements and design cannot be found and removed via testing, but
instead need pre-test reviews and inspections.

2. Reviews: Self-Review and Peer Review

Self-review is one of the most effective activities in uncovering defects which may later be
discovered by a testing team or directly by a customer.

A self-review of the code helps reduce the defects related to algorithm implementations,
incorrect logic or certain missing conditions.
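The life-cycle states described above form a small state machine. The transition table below is a simplified sketch inferred from the state descriptions; real defect-tracking tools differ in exactly which transitions they allow:

```python
# Simplified bug life cycle transitions (illustrative, not from any specific tool).
TRANSITIONS = {
    "NEW":            {"OPEN", "REJECTED", "DEFERRED"},
    "OPEN":           {"ASSIGN"},
    "ASSIGN":         {"FIXED", "DEFERRED", "REJECTED"},
    "FIXED":          {"PENDING RETEST"},
    "PENDING RETEST": {"TEST"},
    "TEST":           {"VERIFIED", "REOPENED"},
    "VERIFIED":       {"CLOSED"},
    "REOPENED":       {"ASSIGN"},   # the bug traverses the life cycle once again
    "DEFERRED":       {"OPEN"},     # picked up in a later release
    "REJECTED":       set(),
    "CLOSED":         set(),
}

def move(state: str, new_state: str) -> str:
    """Return the new state if the transition is allowed, else raise."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"Illegal transition {state} -> {new_state}")
    return new_state

# A defect that is fixed and verified on the first attempt:
s = "NEW"
for nxt in ["OPEN", "ASSIGN", "FIXED", "PENDING RETEST",
            "TEST", "VERIFIED", "CLOSED"]:
    s = move(s, nxt)
print(s)  # CLOSED
```

Encoding the allowed transitions makes it impossible to, say, close a bug that was never retested, which is exactly the discipline the life cycle is meant to enforce.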
Unit 4: Defect Management

Peer review is similar to self-review in terms of the objective – the only difference is that it
is a peer (someone who understands the functionality of the code very well) who reviews the
code.

3. Defect Logging and Documentation

Effective defect tracking begins with a systematic process. A structured tracking process
begins with initially logging the defects, then investigating them, then providing the structure
to resolve them. Defect analysis and reporting offer a powerful means to manage defects and
defect depletion trends, and hence costs.

4. Root Cause Analysis and Preventive Measures Determination

After defects are logged and documented, the next step is to analyze them.

Reducing the defects to improve the quality: The analysis should lead to implementing
changes in processes that help prevent defects and ensure their early detection.

Applying local expertise: The people who really understand what went wrong are the people
present when the defects were inserted – members of the software engineering team. They can
give the best suggestions for how to avoid such defects in the future.

Targeting the systematic errors: There may be many errors or defects to be handled in such
an analysis forum; however, some mistakes tend to be repeated. These systematic errors account
for a large portion of the defects found in the typical software project.

5. Embedding Procedures into the Software Development Process

Implementation is the toughest of all activities of defect prevention.

It requires total commitment from the development team and management.

A plan of action is made for deployment of the modification of the existing processes, or
introduction of new ones, with the consent of management and the team.

Defect Report Template

 A defect report documents an anomaly discovered during testing.
 It includes all the information needed to reproduce the problem, including the author,
release/build number, open/close dates, problem area, problem description, test
environment, defect type, how it was detected, who detected it, priority, severity,
status, etc. After uncovering a defect (bug), testers generate a formal defect report.
 The purpose of a defect report is to state the problem as clearly as possible so that
developers can replicate the defect easily and fix it.

Estimate Expected Impact of a Defect

 Defect Impact: The degree of severity that a defect has on the development or operation
of a component or system.

 How to estimate the defect impact:

1. Once the critical risks are identified, the financial impact of each risk should be
estimated.

2. This can be done by assessing the impact, in dollars, if the risk does become a problem,
combined with the probability that the risk will become a problem.

3. The product of these two numbers is the expected impact of the risk.
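Steps 1–3 above amount to multiplying each risk's probability by its dollar impact and ranking the results. A small Python sketch; the risk names and numbers are invented for illustration:

```python
def expected_impact(probability: float, impact_dollars: float) -> float:
    """Expected impact of a risk = probability it becomes a problem * dollar impact."""
    return probability * impact_dollars

# Hypothetical critical risks for a project: (probability, impact in dollars).
risks = {
    "missed requirement":     (0.30, 200_000),
    "interface mismatch":     (0.10, 500_000),
    "performance shortfall":  (0.05, 1_000_000),
}

# Prioritize by expected impact, largest first. Note that a modest-impact risk
# with high probability can outrank a huge-impact but unlikely one.
ranked = sorted(risks, key=lambda r: expected_impact(*risks[r]), reverse=True)
print(ranked[0])  # missed requirement: 0.30 * 200,000 = 60,000
```

As the unit notes, precision does not matter here; the point of the numbers is to establish each risk's order of magnitude so effort goes to the right risks.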
Unit 4: Defect Management

4. The expected impact of a risk (E) is calculated as

E = P * I, where:

P = probability of the risk becoming a problem, and
I = impact in dollars if the risk becomes a problem.

Once the expected impact of each risk is identified, the risks should be prioritized by the
expected impact and the degree to which the expected impact can be reduced. While guesswork
will play a major role in producing these numbers, precision is not important. What is
important is to identify the risk and determine the risk's order of magnitude. Large, complex
systems will have many critical risks. Whatever can be done to reduce the probability of each
individual critical risk becoming a problem to a very small number should be done. Doing this
increases the probability of a successful project by increasing the probability that none of the
critical risks will become a problem.

One should assume that an individual critical risk has a low probability of becoming a problem
only when there is specific knowledge justifying why it is low. For example, the likelihood that
an important requirement was missed may be high if developers have not involved users in the
project. If users have actively participated in the requirements definition, and the new system is
not a radical departure from an existing system or process, the likelihood may be low.

For example:

o An organization with a project of 2,500 function points that was about medium at defect
discovery and removal would have 1,650 defects remaining after all defect removal and
discovery activities.

o The calculation is 2,500 x 1.2 = 3,000 potential defects.

o The organization would be able to remove about 45% of the defects, or 1,350 defects.

o The total potential defects (3,000) less the removed defects (1,350) equals the remaining
defects of 1,650.

Techniques for Finding Defects

 Defects are found either by preplanned activities specifically intended to uncover defects
(e.g., quality control activities such as inspections, testing, etc.) or by accident (e.g., by
users in production).

Techniques to find defects can be divided into three categories:

 Static techniques: Testing that is done without physically executing a program or
system. A code review is an example of a static testing technique.
 Dynamic techniques: Testing in which system components are physically executed to
identify defects. Execution of test cases is an example of a dynamic testing technique.
 Operational techniques: An operational system produces a deliverable containing a
defect found by users, customers, or control personnel -- i.e., the defect is found as a
result of a failure.

 While it is beyond the scope of this study to compare and contrast the various static,
dynamic, and operational techniques, the research did arrive at the following conclusions:
 Both static and dynamic techniques are required for an effective defect management
program. In each category, the more formally the techniques were integrated into the
development process, the more effective they were.
 Since static techniques will generally find defects earlier in the process, they are more
efficient at finding defects.

Reporting a Defect

 Be specific:

1. Specify the exact action: Do not say something like 'Select ButtonB'. Do you
mean 'Click ButtonB' or 'Press ALT+B' or 'Focus on ButtonB and press
ENTER'? Of course, if the defect can be arrived at by using all three ways, it's
okay to use a generic term such as 'Select', but bear in mind that you might just
get the fix for the 'Click ButtonB' scenario. [Note: This might be a highly
unlikely defect example, but it is hoped that the message is clear.]
2. In case of multiple paths, mention the exact path you followed: Do not say
something like "If you do 'A and X' or 'B and Y' or 'C and Z', you get D."
Understanding all the paths at once will be difficult. Instead, say "Do 'A and X'
and you get D." You can, of course, mention elsewhere in the report that "D can
also be got if you do 'B and Y' or 'C and Z'."
3. Do not use vague pronouns: Do not say something like "In ApplicationA, open X,
Y, and Z, and then close it." What does the 'it' stand for? 'Z', or 'Y', or 'X', or
'ApplicationA'?

 Be detailed:

1. Provide more information (not less). In other words, do not be lazy. Developers
may or may not use all the information you provide, but they certainly do not
want to beg you for any information you have missed.

 Be objective:

1. Do not make subjective statements like "This is a lousy application" or "You
fixed it real bad."
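A report that follows the "be specific, be detailed" guidance above might be structured like this. The field names follow the defect report template earlier in this unit; the values are invented:

```python
# A defect report structured around the template fields from this unit.
defect_report = {
    "author": "tester01",
    "build": "2.4.1-b117",
    "problem_area": "Login screen",
    "summary": "Clicking ButtonB with an empty password closes the application",
    "steps_to_reproduce": [            # exact actions, one path, no vague pronouns
        "Launch the application",
        "Leave the Password field empty",
        "Click ButtonB",
    ],
    "expected": "Validation message 'Password required' is shown",
    "actual": "Application closes abruptly",
    "severity": "Fatal",
    "priority": "High",
    "status": "New",
    "reproducible": True,              # replicated at least once more before filing
}

def is_complete(report: dict) -> bool:
    """Minimal completeness check to run before hitting 'Submit'."""
    required = ("author", "build", "summary", "steps_to_reproduce",
                "expected", "actual", "severity", "status")
    return all(report.get(field) for field in required)

print(is_complete(defect_report))  # True
```

A check like `is_complete` mirrors the "review the report before submitting" advice: it catches missing fields, though only a human review catches vague wording.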
Unit 4: Defect Management

2. Stick to the facts and avoid the emotions.

 Reproduce the defect:

1. Do not be impatient and file a defect report as soon as you uncover a defect.
Replicate it at least once more to be sure. (If you cannot replicate it again, try
recalling the exact test condition and keep trying. However, if you cannot
replicate it again after many trials, finally submit the report for further
investigation, stating that you are unable to reproduce the defect anymore and
providing any evidence of the defect that you have gathered.)

 Review the report:

1. Do not hit 'Submit' as soon as you write the report. Review it at least once.
Remove any typos.

Unit 5: Testing Tools and Measurements

CO5 : Test software for performance measure using automation testing tools.

Manual Testing

• Manual testing is a testing process that is carried out manually in order to find defects
without the usage of tools or automation scripting.

• A test plan document is prepared that acts as a guide to the testing process in order to
have complete test coverage.

How to Do Manual Testing

• Requirement Analysis
• Test Plan Creation
• Test Case Creation
• Test Case Execution
• Defect Logging
• Defect Fix & Re-Verification

Limitations of Manual Testing

i) Manual testing requires more time or more resources, and sometimes both.

Covering all areas of the application requires more tests; creating all possible test cases and
executing them takes more time. With test automation, the test tool can execute tests quickly.

ii) Less accuracy.

Human users (testers) may make mistakes, so we cannot expect high accuracy in manual
testing. With automated testing, if you provide the correct logic, the test tool will produce
correct output.

iii) Performance testing is impractical in manual testing.
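The "test case creation → execution → defect logging" flow above can be captured with a simple record and execution log, even when the execution itself is manual. A minimal sketch with illustrative field names:

```python
# A minimal manual test case record and execution log (field names illustrative).
test_case = {
    "id": "TC-001",
    "title": "Login with valid credentials",
    "steps": ["Open login page", "Enter a valid username/password", "Click Login"],
    "expected": "User lands on the dashboard",
}

execution_log = []

def execute(case: dict, actual: str) -> str:
    """The tester records the observed result; a mismatch becomes a defect to log."""
    verdict = "PASS" if actual == case["expected"] else "FAIL"
    execution_log.append({"case": case["id"], "actual": actual, "verdict": verdict})
    return verdict

print(execute(test_case, "User lands on the dashboard"))  # PASS
print(execute(test_case, "Error 500 page shown"))         # FAIL -> log a defect
```

Writing expected results down before execution is what turns an ad-hoc check into a test case: the verdict is a comparison, not an opinion.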
Unit 5: Testing Tools and Measurements

Organizing thousands of machines/computers and human users is impractical. With test
automation, we can create thousands of virtual users and, using 3 or 4 computers, apply the
load and test the performance of the application.

iv) Comparing large amounts of data is impractical.

Comparing two databases that have thousands of records is impractical manually, but it is
very easy with test automation.

v) Processing change requests during software maintenance takes more time.

vi) Batch testing is possible, but each test execution requires human user interaction.

Batch testing means executing a series of tests. In manual batch testing, user/tester
interaction is mandatory for every test case execution; with test automation, the test tool can
execute a series of tests without human interaction.

vii) GUI object size differences, color combinations, etc. are not easy to find in manual
testing.

viii) Manual test case scope is very limited; with automated tests, the scope is greater.

In manual testing, test case scope is very limited because the tester/user can concentrate on
one or two verification points only. With test automation, the test tool (a tool is also
software) can concentrate on multiple verification points at a time.

ix) Executing the same tests again and again is time-consuming as well as tedious.

Sometimes we need to execute the same tests using multiple sets of test data; in manual
testing, user interaction is mandatory for each test iteration. With test automation, using a
test data file (either a text file, an Excel file or a database file), we can easily conduct
data-driven testing.

x) For every release you must rerun the same set of tests, which can be tiresome.

We need to execute sanity test cases and regression test cases on every modified build, which
takes more time. With automated testing, once we create the tests, the tool can execute them
multiple times quickly.

Automation Testing

• Automation testing is a technique that uses an application to implement the entire life
cycle of the software in less time and provides efficiency and effectiveness to the software
testing.

• Automation testing is an automatic technique where the tester writes scripts and uses
suitable software to test the software.

• It is basically an automation of a manual process.

• The main goal of automation testing is to increase test efficiency and develop software
value.

Test automation is the use of special software to control the execution of tests and the
comparison of actual outcomes with predicted outcomes. The objective of automated testing is
to simplify as much of the testing effort as possible with a minimum set of scripts. Test
automation can automate some repetitive but necessary tasks in a formalized testing process
already in place, or add additional testing that would be difficult to perform manually.

Types of test automation tools:

• Static automation tools: These tools are used throughout the software development
lifecycle, e.g. tools used for verification purposes. There are many varieties of static
testing tools used by different people as per the type of system being developed. These
tools do not involve actual input and output. Rather, they take a symbolic approach to
testing, i.e. they do not test the actual execution of the software. Examples: flow
analyzers, coverage analyzers, interface analyzers. Code complexity measurement tools
can be used to measure the complexity of a given code. Similarly, data-profiling tools can
be used to optimize a database. Code-profiling tools can be used to optimize code. Test
generators are used for generating a test plan from code. Syntax-checking tools are used
to verify correctness of code.
• Dynamic automation tools: These tools test the software system with live data. They are
used at different levels of testing, starting from unit testing and going up to system
testing and performance testing. These tools are generally used by testers. Examples:
test drivers, test beds, emulators.
Some of the areas covered by testing tools are:
1. Regression testing using automated tools.
2. Defect tracking and communication systems used for tracking and communication.
3. Performance, load and stress-testing tools.

Benefits of Automation Testing

• Reduces time of testing
• Improves bug finding
• Delivers a quality product
• Allows tests to run many times with different data
• Gives more time for test planning
• Saves resources, or requires fewer of them
• Automation never tires, and an expert person can work with many tools at a time

Advantages of Switching to Automated Testing from Manual Testing

• Efficient testing
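The "run tests many times with different data" benefit above is what data-driven testing delivers. A minimal sketch using only the standard-library unittest module; the discount function is an invented example of a module under test:

```python
import unittest

def discount(amount: float) -> float:
    """Hypothetical function under test: 10% off orders of 100 or more."""
    return amount * 0.9 if amount >= 100 else amount

# One test, many data sets -- the tool reruns it without human interaction.
TEST_DATA = [
    (50, 50.0),     # below the threshold: no discount
    (100, 90.0),    # boundary value
    (200, 180.0),
]

class DiscountTest(unittest.TestCase):
    def test_data_driven(self):
        for amount, expected in TEST_DATA:
            # subTest reports each data set separately if it fails.
            with self.subTest(amount=amount):
                self.assertEqual(discount(amount), expected)

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest))
print(result.wasSuccessful())  # True
```

In practice the data rows would come from a text, Excel or database file, as the limitations section notes; the test body stays unchanged as the data grows.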
Unit 5: Testing Tools and Measurements

• Consistency in testing
• Better quality software
• Automated testing is cheaper
• Automated testing is faster
• Automated testing is more reliable
• Automated testing reduces human and technical risks
• Automated testing is more powerful and versatile

Features of automated testing tools

• FAST Automation Engine
• Object Eye Internal Recorder
• Visual Recorder
• Multiple Browsers Support
• Dynamic Test Data Support
• Continuous Server Integration
• Mobile Testing Support
• Robust Reporting & Logs
• Reusable Methods
• Integration with Bug Tracking Tools
• Integration with Test Management Tools
• Job Scheduler
• Image Comparison
• Distributed Test Execution
• Captcha Automation
• Risk Based Testing

Static Testing Tool

• Static testing tools are used during static analysis of a system.
• Static testing tools are used throughout a software development life cycle, e.g. tools used
for verification purposes.
• There are many varieties of static testing tools used by different people as per the type of
system being developed.
• Code complexity measurement tools can be used to measure the complexity of a given
code.
• Similarly, data-profiling tools can be used to optimize a database.
• Code-profiling tools can be used to optimize code.
• Test generators are used for generating a test plan from code.
• Syntax-checking tools are used to verify correctness of code.

Features for selecting static test tools:

i. Assessment of the organization's maturity (e.g. readiness for change);
ii. Identification of the areas within the organization where tool support will help to
improve testing processes;
iii. Evaluation of tools against clear requirements and objective criteria;
iv. Proof-of-concept to see whether the product works as desired and meets the requirements
and objectives defined for it;
v. Evaluation of the vendor (training, support and other commercial aspects) or open-source
network of support;
vi. Identifying and planning internal implementation (including coaching and mentoring for
those new to the use of the tool).

• Static test tools include:
1. Flow analyzer: ensures consistency in data flow from input to output
2. Path tests: finds unused code and code with contradictions
3. Coverage analyzer: ensures all logical paths are tested
4. Interface analyzer: examines the effects of passing variables and data between modules

Dynamic Testing Tool

• Dynamic testing tools are used at different levels of testing, starting from unit testing and
going up to system testing and performance testing.
• These tools are generally used by testers.
• There are many different tools used for dynamic testing. Some of the areas covered by
testing tools are:
1. Regression testing using automated tools.
2. Defect tracking and communication systems used for tracking and communication.
3. Performance, load and stress-testing tools.

Features for selecting dynamic test tools:

• To detect memory leaks;
• To identify pointer arithmetic errors such as null pointers;
• To identify time dependencies.

Dynamic test tools include:

1. Test driver: feeds data into the module under test (MUT)
2. Test bed: simultaneously displays source code along with the program under execution
3. Emulators
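A toy version of the syntax-checking static tools listed above can be built on Python's own ast module. This is a sketch, not a real static-analysis product: it reports syntax errors and variables that are assigned but never read, all without executing the code, which is the defining property of a static tool:

```python
import ast

def syntax_check(source: str) -> list:
    """Toy static analysis: flag syntax errors and assigned-but-unused variables.
    The code is parsed, never executed."""
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        return [f"syntax error on line {err.lineno}"]
    # Names that appear as assignment targets anywhere in the tree.
    assigned = {name.id for node in ast.walk(tree) if isinstance(node, ast.Assign)
                for name in node.targets if isinstance(name, ast.Name)}
    # Names that are ever read (Load context).
    used = {node.id for node in ast.walk(tree)
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load)}
    return [f"variable '{name}' assigned but never used"
            for name in sorted(assigned - used)]

print(syntax_check("x = 1\ny = x + 1\n"))  # y is assigned but never read
print(syntax_check("def f(:\n"))           # parse failure reported, not raised
```

Real flow analyzers and syntax checkers are far more thorough, but the shape is the same: build a symbolic representation of the program and query it.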
Unit 5: Testing Tools and Measurements

Advantages of Using Tools

1. Speed. Automation tools test the software under test at a very fast speed. There is a vast
difference between the speed of a user entering data and an automated tool generating and
entering the data required for testing the software; the tool also completes the work faster.
2. Efficiency. While testers are busy running test cases, they can't be doing anything else. If
the tester has a test tool that reduces the time it takes to run tests, he has more time for test
planning and thinking up new tests.
3. Accuracy and Precision. A test tool will perform the same test and check the results
perfectly, each and every time.
4. Resource Reduction. Sometimes it can be physically impossible to perform a certain test
case. The number of people or the amount of equipment required to create the test condition
could be prohibitive. A test tool can be used to simulate the real world and greatly reduce the
physical resources necessary to perform the testing.
5. Simulation and Emulation. Test tools are often used to replace hardware or software that
would normally interface to your product. This "fake" device or application can then be used
to drive or respond to your software in ways that you choose and ways that might otherwise
be difficult to achieve.
6. Relentlessness. Test tools and automation never tire or give up. They can keep going on
and on without any problem, whereas a tester gets tired of testing again and again.

OR

1. Reduces time of testing
2. Improves bug finding
3. Delivers quality software/product
4. Allows tests to run many times with different data
5. Gives more time for test planning
6. Saves resources or reduces requirements
7. It never tires, and an expert person can work with many tools at a time

Disadvantages of using tools:

• Unrealistic expectations from the tool
• People often underestimate the time, cost and effort for the initial introduction of the tool
• People frequently miscalculate the time and effort needed to achieve significant and
continuing benefits from the tools
• People often underestimate the effort required to maintain the test assets generated by the
tool
• People depend on the tool too much (over-reliance on the tool)

Guidelines for selecting a tool:

1. The tool must match its intended use. Wrong selection of a tool can lead to problems:
efficiency may drop and the effectiveness of testing may be lost.
2. Different phases of a life cycle have different quality-factor requirements. Tools required
at each stage may differ significantly.
3. Matching a tool with the skills of testers is also essential. If the testers do not have proper
training and skill, they may not be able to work effectively.
4. Select affordable tools. Costs and benefits of the various tools must be compared before
making the final decision.
5. Backdoor entry of tools must be prevented. Unauthorized entry results in failure of the
tool and creates a negative environment for new tool introduction.

Criteria for Selecting Test Tools:

• The criteria for selecting test tools are:
1. Meeting requirements;
2. Technology expectations;
3. Training/skills;
4. Management aspects.
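Candidate tools are often compared against such criteria with a weighted scorecard. The sketch below uses the four criteria above; the weights, tool names and ratings are invented for illustration:

```python
# Weighted scorecard over the four selection criteria named in this unit.
CRITERIA_WEIGHTS = {
    "meeting requirements":    0.4,   # usually weighted highest
    "technology expectations": 0.3,
    "training/skills":         0.2,
    "management aspects":      0.1,
}

def score(tool_ratings: dict) -> float:
    """Weighted sum of per-criterion ratings (each rated 0-10)."""
    return sum(CRITERIA_WEIGHTS[c] * tool_ratings[c] for c in CRITERIA_WEIGHTS)

candidates = {
    "Tool A": {"meeting requirements": 8, "technology expectations": 6,
               "training/skills": 5, "management aspects": 7},
    "Tool B": {"meeting requirements": 6, "technology expectations": 9,
               "training/skills": 8, "management aspects": 6},
}

best = max(candidates, key=lambda t: score(candidates[t]))
print(best, round(score(candidates[best]), 2))  # Tool B 7.3
```

A scorecard makes the trade-offs explicit and auditable, which helps avoid the "backdoor entry" problem the guidelines warn about.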
Unit 5: Testing Tools and Measurements

1. Meeting requirements

There are plenty of tools available in the market, but rarely do they meet all the requirements
of a given product or a given organization. Evaluating different tools for different
requirements involves significant effort, money, and time. Given the plethora of choices
available, huge delays are involved in selecting and implementing test tools.

2. Technology expectations

Test tools in general may not allow test developers to extend/modify the functionality of the
framework, so extending the functionality requires going back to the tool vendor and involves
additional cost and effort. A good number of test tools require their libraries to be linked with
product binaries.

3. Training/skills

While test tools require plenty of training, very few vendors provide training to the required
level. Organization-level training is needed to deploy the test tools, as the users of the test
suite are not only the test team but also the development team and other areas like
configuration management.

4. Management aspects

A test tool increases the system requirements and requires the hardware and software to be
upgraded. This increases the cost of the already-expensive test tool.

When to use automated test tools

• Stress, reliability, scalability and performance testing:
These types of testing require the test cases to be run from a large number of different
machines for an extended period of time, such as 24 hours, 48 hours, and so on. It is just
not possible to have hundreds of users trying out the product: they may not be willing to
perform the repetitive tasks, nor will it be possible to find that many people with the
required skill sets. Test cases belonging to these testing types become the first candidates
for automation.
• Regression tests: Regression tests are repetitive in nature. These test cases are executed
multiple times during the product development phase. Given the repetitive nature of the
test cases, automation will save significant time and effort in the long run. The time thus
gained can be effectively utilized for other tests.
• Functional tests: These kinds of tests may require a complex set-up and thus require
specialized skill, which may not be available on an ongoing basis. Automating these once,
using the expert skill sets, can enable less-skilled people to run these tests on an ongoing
basis.

Metrics and Measurements

Metrics & measurement:
A metric is a relative measurement of the status of a process or product in terms of two or
more entities taken together for comparison.
Measurements are a key element for controlling software engineering processes.

Need for software measurements:
1. Understanding: Metrics can help in making aspects of a process more visible, thereby
giving a better understanding of the relationships among the activities and the entities they
affect.
2. Control: Using baselines, goals and an understanding of the relationships, we can predict
what is likely to happen and, correspondingly, make appropriate changes in the process to
help meet the goals.
3. Improvement: By taking corrective actions and making appropriate changes, we can
improve a product. Similarly, based on the analysis of a project, a process can also be
improved.

Metrics classification
Metrics are basically classified as:
1. Product Metrics: Product metrics are measures of the software product at any stage of its
development, from requirements to installed system.
2. Process Metrics: Process metrics are measures of the software development process, such
as the overall development time, type of methodology used, or the average level of experience
of the programming staff.

Product metrics are classified as:
1. Project Metrics: A set of metrics that indicates how the project is planned and executed.
2. Progress: A set of metrics that tracks how the different activities of the project are
progressing.

Progress metrics are classified as:
1. Test defect metrics: help the testing team in the analysis of product quality and testing.
2. Development defect metrics: help the development team in the analysis of development
activities.
3. Productivity: A set of metrics that takes into account various productivity numbers that
can be collected and used for planning and tracking testing activities.
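As one concrete computation, defect density (defects per KLOC) is a widely used product metric, and a falling defects-per-cycle count is a simple progress signal. Neither calculation is detailed in these notes, and the numbers below are invented:

```python
def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defect density = defects found / product size in thousands of lines of code.
    A standard product metric, shown here as one concrete example."""
    return defects_found / size_kloc

# A simple test defect (progress) metric: defects found per test cycle.
defects_per_cycle = {"cycle 1": 40, "cycle 2": 25, "cycle 3": 10}

# If each cycle finds fewer defects than the last, the trend is improving.
counts = list(defects_per_cycle.values())
trend_improving = counts == sorted(counts, reverse=True)

print(defect_density(30, 15.0))  # 2.0 defects per KLOC
print(trend_improving)           # True: fewer defects found each cycle
```

Metrics like these serve all three purposes listed above: they make the process visible (understanding), support prediction (control), and show whether process changes helped (improvement).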