Unit-4 Software Testing
Types of Testing Documents
Test Policy - A high-level document which describes the principles, methods, and all the important testing goals of the organization.
Test Strategy - A high-level document which identifies the test levels (types) to be executed for the project.
Test Plan - A complete planning document which contains the scope, approach, resources, schedule, etc. of the testing activities.
Requirements Traceability Matrix (RTM) - A document which connects the requirements to the test cases.
Test Scenario - An item or event of a software system which could be verified by one or more test cases.
Test Case - A group of input values, execution preconditions, expected results, and execution postconditions, developed for a test scenario.
Test Data - Data which exists before a test is executed; it is used to execute the test cases.
Defect Report - A documented report of any flaw in a software system which fails to perform its expected function.
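A small illustration of how a Requirements Traceability Matrix connects requirements to test cases; the requirement and test case IDs below are hypothetical and exist only to show the mapping structure.

    # A minimal sketch of a Requirements Traceability Matrix (RTM):
    # each requirement ID maps to the test cases that verify it.
    rtm = {
        "REQ-001": ["TC-101", "TC-102"],   # e.g. a login requirement
        "REQ-002": ["TC-103"],             # e.g. a payment requirement
        "REQ-003": [],                     # not yet covered by any test case
    }

    # Requirements with no linked test case indicate a coverage gap.
    uncovered = [req for req, cases in rtm.items() if not cases]
    print("Requirements without test coverage:", uncovered)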
Advantages of Test Documentation
Documentation not only offers a systematic approach to software testing, but it also acts
as training material for newcomers to the software testing process.
Showcasing test documentation to demonstrate a mature testing process is also a good
marketing and sales strategy.
Test documentation helps you to deliver a quality product to the client within the
specified time limits.
In software engineering, test documentation also helps to configure or set up the program
through the configuration document and operator manuals.
Test documentation helps you to improve transparency with the client.
The test plan is a template for conducting software testing activities as a defined
process that is fully monitored and controlled by the testing manager. The test plan is
prepared by the Test Lead (about 60%), the Test Manager (about 20%), and the Test Engineer (about 20%).
Phase Test Plan - A phase test plan addresses any one phase of the testing strategy; for
example, it lists the tools, the test cases, etc., for that phase.
Specific Test Plan - A specific test plan is designed for a major type of testing such as
security testing, load testing, or performance testing. In other words, a specific test
plan is designed for non-functional testing.
Goals Of Planning
The following are some of the key benefits of making a test plan:
Quick guide for the testing process: The test plan serves as a quick
guide for the testing process as it offers a clear guide for QA
engineers to conduct testing activities.
Helps to avoid out-of-scope functionalities: The test plan offers
detailed aspects such as test scope, test estimation, strategy, etc.
Helps to determine the time, cost, and effort: The test plan serves as the
blueprint for conducting testing activities and thus helps to derive an estimate
of the time, cost, and effort the testing activities require.
Provide a schedule for testing activities: A test plan is like a rule book
that needs to be followed; it thus provides a schedule of activities that all
team members can follow.
Test plan can be reused: The test plan documents important aspects
like test estimation, test scope, and test strategy which are reviewed
by the Management Team and thus can be reused for other projects.
Planning Topics
The test plan consists of various parts, which help us to drive the entire testing activity.
Objectives: It consists of information about the modules, features, test data, etc., which
indicates the aim of the application, i.e., the application's behavior, goals, and so on.
Scope: It contains information about what needs to be tested with respect to the application.
The scope can be further divided into two parts:
o In scope
o Out of scope
Suppose in the first release of the product, the elements that have been developed are P,
Q, R, S, T, U, V, W…..X, Y, Z. Now the client provides the requirements for the new
features which improve the product in the second release, and the new features are A1, B2,
C3, D4, and E5.
After that, we will write the scope in the test plan as follows:
Scope
Features to be tested (in scope): the new features A1, B2, C3, D4, E5, together with the impacted existing features P, Q, R, S, T
Features not to be tested (out of scope): W…..X, Y, Z
Therefore, we will check the new features first and then continue with the old features,
because they might be affected by the addition of the new features; since the impact areas
are also affected, we will do one round of regression testing for the features P, Q, R…, T.
Test methodology:
It contains information about the different kinds of testing to be performed on the
application, such as functional testing, integration testing, and system testing. In this,
we decide what type of testing will be performed on the various features, based on the
application requirements.
Here we should also define what each kind of testing used in the test methodology means,
so that everyone (the management, the development team, and the testing team) can
understand it easily, because testing terms are not standardized.
For example: smoke testing, functional testing, integration testing.
Approach
This attribute describes the flow of the application while performing the testing, and it
also serves as a reference for the future.
The flow of the application can be understood with the help of a few supporting aspects.
Assumption
It contains information about problems or issues which may occur during the testing
process. When we write the test plan, certain assumptions are made, for example about the
availability of resources, technologies, and so on.
Risk
These are the challenges which we need to face while testing the application in the
current release; if the assumptions fail, the associated risks come into play.
For example, the release date of the application may get postponed.
If there are 3-4 small projects, then the test manager will assign each project to a
Test Lead. Each Test Lead then writes the test plan for the project which he or she is
assigned.
Schedule
It is used to explain the timing of the work that needs to be done; this attribute covers
when exactly each testing activity should start and end, and the exact dates are mentioned
for every testing activity.
Therefore, for each activity there will be a start date and an end date, and for each
round of testing on a specific build the dates are specified.
For example
o Writing test cases
o Execution process
Defect tracking
It is generally done with the help of tools, because we cannot track the status of each
bug manually.
We also state how the bugs identified during the testing process are communicated back to
the development team and how the development team will respond. Here we also mention the
priority of the bugs, such as high, medium, and low.
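As an illustration only (not tied to any particular defect-tracking tool), here is a minimal Python sketch of how a defect report with a priority field could be represented and its status updated; the fields and values are assumptions made for the example.

    from dataclasses import dataclass, field
    from datetime import date

    # A minimal, hypothetical defect record; real tools store similar fields
    # (summary, priority, status, reporter, dates).
    @dataclass
    class DefectReport:
        defect_id: str
        summary: str
        priority: str              # "High", "Medium", or "Low"
        status: str = "Open"       # e.g. Open -> Fixed -> Closed
        reported_on: date = field(default_factory=date.today)

    bug = DefectReport("BUG-42", "Payment fails for minimum balance", priority="High")
    bug.status = "Fixed"           # updated once the development team responds with a fix
    print(bug)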
Test Environments
These are the environments in which we will test the application; here we record two
types of configuration, the software configuration and the hardware configuration.
The hardware configuration covers information such as the sizes of the RAM and ROM and
the processors used.
For example: the server configuration.
Exit Criteria
Test Automation
In this, we will decide which features have to be automated and which have to be tested
manually.
If some features are unstable and have lots of bugs, we will not automate them, because
they would have to be tested again and again; such features are kept in manual testing.
If there is a feature that has to be tested frequently but we expect the requirements for
that feature to change, we also do not automate it, because changing manual test cases is
easier than changing the automation test scripts.
Effort estimation
In this, we plan the effort that needs to be applied by every team member.
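A minimal worked example of such an estimate; the number of test cases, the time per case, and the team size below are assumed figures used only to show the arithmetic.

    # Hypothetical effort estimate: test cases multiplied by the average time
    # needed to design and execute one case, divided across the team.
    test_cases = 120
    hours_per_case = 0.5           # assumed average design + execution time
    team_members = 3

    total_hours = test_cases * hours_per_case
    hours_per_member = total_hours / team_members
    print(f"Total effort: {total_hours} hours, about {hours_per_member:.1f} hours per member")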
Test Deliverables
These are the documents which are the output of the testing team and which are handed
over to the customer along with the product. They include the following:
o Test plan
o Test Cases
o Test Scripts
o RTM(Requirement Traceability Matrix)
o Defect Report
o Test Execution Report
o Graphs and metrics
o Release Notes
Template
This part contains all the templates for the documents that will be used in the project,
and all the test engineers will use only these templates in order to maintain the
consistency of the product. Different types of templates are used during the entire
testing process.
The test plan is written by:
o Test Lead → 60%
o Test Manager → 20%
o Test Engineer → 20%
Therefore, as we can see above, around 60% of the test plan is written by the Test Lead.
The test plan is reviewed by:
o Test Lead
o Test Manager
o Test Engineer
o Customer
o Development team
The Test Engineer reviews the test plan from the perspective of his or her module, and
the Test Manager reviews the test plan based on the customer's opinion.
o Customer
o Test Manager
o Test Lead
o Test Engineer
o Test Engineer
o Test Lead
o Customer
o Development Team
o Test Manager
o Test Lead
o Customer
There are multiple differences between test cases and test scenarios:
1. A test case has all the granular details on what to test, what test steps need to be
performed, and what the actual and expected results are. A test scenario is a high-level
document which touches all the functionalities and user stories of all the features.
3. A test case is created from a test scenario document and is reused during the
regression or retesting phase. A test scenario is created straight from the requirements,
but it needs to be updated whenever there is a change or addition of any requirement.
A test case is written even before the start of software development, when the
requirements are ready. The testers finish designing the test cases by the time
development of the software has been completed.
A test case is also created right after the complete software development (before
production deployment) or after a new feature has been implemented, as the need arises.
Test case creation is an ongoing process during the entire software development process,
such that once a module or a part of it is ready, it can immediately be tested in parallel.
Test case planning involves the activities listed below −
Prioritizing Test Scenarios: Determining which test scenarios are critical or high priority
and should be tested first.
Designing Test Cases: Creating detailed test cases for each identified scenario. A test
case typically includes a description, preconditions, test data, test steps, and the expected results.
Organizing Test Cases: Grouping test cases logically, such as by module or functional
area, to ensure comprehensive coverage.
Assigning Test Resources: Allocating human and technical resources needed to execute
the test cases.
Defining Test Data: Determining what data inputs will be used for testing and preparing
them accordingly (a small sketch of prepared test data follows this list).
Scheduling: Planning when each test case will be executed, taking into account
dependencies and timelines.
Review and Approval: Having test cases reviewed by stakeholders, QA leads, or team
members to ensure completeness and accuracy.
Documentation: Documenting the entire test case planning process, including any
changes or updates made during the planning phase.
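Picking up the "Defining Test Data" activity above, here is a minimal Python sketch of prepared test data for a hypothetical login scenario; the usernames, passwords, and expected outcomes are invented purely for illustration.

    # Hypothetical test data for a login scenario: each record pairs the
    # inputs with the outcome the corresponding test case expects.
    login_test_data = [
        {"username": "valid_user", "password": "correct_pw", "expect_login": True},
        {"username": "valid_user", "password": "wrong_pw",   "expect_login": False},
        {"username": "",           "password": "any_pw",     "expect_login": False},
    ]

    for row in login_test_data:
        outcome = "accept" if row["expect_login"] else "reject"
        print(row["username"] or "<empty>", "->", outcome)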
Components and Purpose
Test Case Description - A short description of why the test case is designed.
Preconditions - The prerequisites that need to be satisfied before starting the test steps.
Test Data - The inputs and data which are needed to complete the test.
Post Conditions - The conditions that are required to be fulfilled after the test case is run successfully.
Test Status - The status of the test after comparing the actual and expected results.
Created By - The name of the tester who designed the test case.
Created Date - The date on which the test case was created.
Reviewed Date - The date on which the test case was reviewed.
Executed By - The name of the tester who executed the test case.
Executed Date - The date on which the test case was executed.
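To make this structure concrete, here is a minimal Python sketch of a test case record holding the components listed above; the field values filled in below are purely illustrative.

    from dataclasses import dataclass
    from typing import Optional

    # A minimal test case record mirroring the components described above.
    @dataclass
    class TestCase:
        description: str                    # why the test case is designed
        preconditions: str                  # what must hold before the steps run
        test_data: str                      # inputs needed to complete the test
        post_conditions: str                # what must hold after a successful run
        created_by: str
        created_date: str
        test_status: Optional[str] = None   # set after comparing actual and expected results
        executed_by: Optional[str] = None
        executed_date: Optional[str] = None

    tc = TestCase(
        description="Verify payment with minimum account balance",
        preconditions="User is logged in and has an account",
        test_data="Account balance equal to the minimum required amount",
        post_conditions="Payment is recorded against the account",
        created_by="Ram",
        created_date="2024-01-10",
    )
    print(tc.description, "-", tc.test_status or "Not executed")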
Formal Test Case − It follows the test case template or format as discussed above.
Informal Test Case − It does not follow any test case template or format.
Functional Test Cases − They are written to verify if the functionalities of the software
are working as expected.
Unit Test Cases − They are written by developers to verify if the unit of the software they
have developed is working.
GUI Test Cases − They are written to verify the graphical user interface of the software.
Integration Test Cases − They are written to verify if the different components of the
software are working fine after being integrated with other components.
Performance Test Cases − They are written to verify the response time and overall
performance of the software.
Database Test Cases − They are written to verify the backend and if all the data are
reflecting in the correct tables in the database.
Security Test Cases − They are written to verify the security features of the software.
Usability Test Cases − They are written to verify if the software is usable and
user-friendly.
User Acceptance Test Cases − They are written to verify if the software works correctly
in real-life scenarios and environments.
The below is an example of a functional test case designed to verify that a payment can be
processed only if the user has the minimum account balance.
Test Case Id - 10
Created By - Ram
Test Case Description - To verify the payment feature.
Scenario Name - Verify if the payment can be processed only if the user has the minimum
account balance.
The user's account balance is checked, following which the payment is made using that balance.
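A minimal automated sketch of this test case, written with pytest-style assertions; the process_payment function, its signature, and the amounts used are assumptions made only to illustrate the minimum-balance rule.

    # Hypothetical payment rule: a payment goes through only if the user's
    # balance is at least the required minimum and covers the amount.
    def process_payment(balance, amount, minimum_balance):
        return balance >= minimum_balance and amount <= balance

    def test_payment_allowed_with_minimum_balance():
        assert process_payment(balance=500, amount=200, minimum_balance=100) is True

    def test_payment_rejected_below_minimum_balance():
        assert process_payment(balance=50, amount=20, minimum_balance=100) is False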
Test Cases For FAN (UI Testing)
Check what type of fan it is: a table fan or a ceiling fan.
Check how many blades are present in the fan.
Check that all the blades and parts fit correctly.
Check whether the fan dimensions are as per the specification document.
Check whether the color of the fan is as per the specification document.
Check the size of the blades as per the specification document.
Check the position of the company logo as per the specification document.
Check whether the company logo is visible.
Check whether the distance between the blades is as per the specification document, and
also that the gaps between the blades are equal.
Fan Test Cases (Functional Testing)
Check the power consumption of the fan as per the requirement document.
Check if the fan works when the electric switch is turned on.
Check whether the fan slows down when the switch is turned off.
Check whether the speed of the fan changes when the regulator is adjusted.
Check the moving direction of the fan blades: is it clockwise or anticlockwise?
Check that, when the fan is rotating, it throws the wind in the proper direction.
Check that the fan's speed can be controlled by using the regulator.
Check the time required for the fan to reach its maximum speed after the switch is turned on.
Check the maximum and minimum speed of the fan.
Check whether the materials used to build the fan are as per the specification document.
Check whether the fan blades bend when the fan is in a rotating state.
Check that the fan does not wobble when it is in movement.
Check if it works properly under voltage fluctuation, such as low-voltage conditions.
Check the status of the fan when it has been rotating for a long time.
Check the condition of the motor coils and other electrical parts when there is an
increase in the voltage.
Check that the fan moves smoothly without making too much noise and vibration.
Check that the weight of the fan is as per the specification document.
Check the effect of voltage fluctuation on the fan when it is moving.
Check that, after switching on, the fan does not run in the anticlockwise direction.
Check all fan blades; they should all be the same size.
Test Cases For Ceiling Fan
Check whether the ceiling fan has a hook for hanging it from the roof.
Check whether all the fan blades are fitted tightly.
Check that the fan turns on when the switch is turned on.
Check that the specified distance is maintained between the roof and the fan, as
mentioned in the requirement document.
Check that all the electrical materials, such as the capacitors, are covered within the
plastic casing of the fan.
Test Cases For Fan - Component Tests
The fan's blades should have edges long enough to give proper airflow.
The fan's rod should be sleek enough to fit well into the fan's cup and motor.
The fan's motor should have all the wiring fitted so as to create a connected circuit
with all the components.
Test Cases For Fan - Integration Tests
When all the fan components are assembled and switched on, the fan should start working.
The fan's rod should not rotate when the motor is working.
The fan's motor should start working when it is switched on.
When the electricity is connected, the fan's blades should rotate along with the motor.
1. Test Case Status: Track the status of each test case (e.g., not started, in progress,
passed, failed, blocked) to monitor progress and identify any bottlenecks in testing.
2. Assign Ownership: Assign ownership of test cases to specific team members or
testers responsible for executing them. This helps in accountability and ensures that
each test case is executed by the appropriate person.
3. Execution History: Maintain a history of test case executions, including dates, results
(pass/fail), and any comments or notes related to the execution.
4. Defect Tracking Integration: Integrate test case tracking with defect tracking tools
or systems. Link test cases to defects found during testing to facilitate traceability and
resolution.
5. Reporting: Generate reports on test case status, coverage, and results to provide
visibility to stakeholders and management about the progress and quality of testing
efforts.
6. Regular Updates and Reviews: Regularly update test case status and review
progress in team meetings or through collaboration tools to ensure alignment and
address any issues promptly.
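Building on the tracking points above, here is a minimal Python sketch of how test case statuses could be recorded and summarized for a status report; the test case IDs, owners, and statuses are invented for illustration.

    from collections import Counter

    # Hypothetical execution log: each test case has an owner and a current status.
    executions = [
        {"id": "TC-101", "owner": "Asha", "status": "Passed"},
        {"id": "TC-102", "owner": "Ravi", "status": "Failed"},
        {"id": "TC-103", "owner": "Asha", "status": "Blocked"},
        {"id": "TC-104", "owner": "Ravi", "status": "Not Started"},
    ]

    # Simple status summary for stakeholders and management.
    summary = Counter(e["status"] for e in executions)
    for status, count in summary.items():
        print(f"{status}: {count}")
    print("Failed cases:", [e["id"] for e in executions if e["status"] == "Failed"])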
Define Test Strategy: Determine testing approaches, techniques, and tools (e.g.,
Selenium for UI testing, JMeter for performance testing).
Create Test Plan: Document test objectives, scope, schedule, resources, and risks.
Plan for different types of testing phases (unit, integration, system, acceptance).
Identify Test Scenarios: Based on requirements, create test scenarios that cover all
functionalities (e.g., user registration, product search, checkout process).
Write Test Cases: Detail each scenario with test case steps, expected results, and
preconditions. Organize test cases into suites for better management (a small
Selenium-based sketch for one such scenario is shown after this list).
Test Environment Setup: Prepare the hardware and software configuration (and the test
data) in which the test cases will be executed.
Execute Test Cases: Run test cases based on prioritization (critical functionalities
first).
Record Results: Document test results, including actual outcomes, discrepancies, and
defects found.
Perform Regression Testing: Validate fixes and ensure existing functionalities
remain intact after changes.
Report Defects: Log defects found during testing in a defect tracking system (e.g.,
JIRA).
Prioritize Defects: Prioritize defects based on severity and impact on the application's
functionality.
Monitor Defect Resolution: Track the status of defects from discovery through to
resolution and verification.
Evaluate Exit Criteria: Assess whether all planned tests have been executed and
whether the software is ready for release.
Prepare Test Summary Report: Summarize testing activities, results, and metrics
(e.g., test coverage, defect density).
Conduct Lessons Learned: Review the testing process to identify improvements for
future projects.
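As referenced in the "Write Test Cases" item above, here is a minimal Selenium sketch for the product search scenario. The site URL, the element name, and the CSS class are assumptions made purely for illustration, and the sketch assumes that Selenium and a Chrome driver are installed.

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys

    def test_product_search():
        driver = webdriver.Chrome()
        try:
            driver.get("https://example-shop.test")   # hypothetical e-commerce site URL
            search_box = driver.find_element(By.NAME, "q")   # assumed name of the search box
            search_box.send_keys("laptop", Keys.ENTER)
            results = driver.find_elements(By.CSS_SELECTOR, ".product-item")  # assumed result item class
            assert len(results) > 0, "Expected at least one product in the search results"
        finally:
            driver.quit()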
Key Points:
Tools Used: Selenium for UI testing, JMeter for performance testing, JIRA for defect
tracking.
Challenges Faced: Integrating third-party payment gateways, ensuring cross-browser
compatibility.
Success Factors: Collaboration among cross-functional teams, adherence to test
planning and execution timelines.
This case study illustrates how the Test Life Cycle is implemented in a software project,
ensuring systematic testing to deliver a high-quality e-commerce website. Each stage—from
planning and design to execution and closure—plays a crucial role in achieving testing
objectives and meeting stakeholder expectations.