Unit VI: Software Testing
A Strategic Approach to Software Testing, Verification and Validation, Organizing for Software Testing, Software Testing Strategy—The Big Picture, Criteria for Completion of Testing, Strategic Issues, Test Strategies for Conventional Software, Unit Testing, Integration Testing, Test Strategies for Object-Oriented Software, Unit Testing in the OO Context, Integration Testing in the OO Context, Test Strategies for Web Applications, Validation Testing, Validation-Test Criteria, Configuration Review.
A Strategic Approach to Software Testing, Verification, and Validation (V&V) refers to a comprehensive
and systematic methodology for ensuring that software meets its specified requirements, functions
correctly, and satisfies quality standards. This approach typically includes a combination of Testing,
Verification, and Validation processes, each with its own focus but all working together to ensure
software quality.
Here are the definitions of each term in the context of a strategic approach:
1. Software Testing
Software testing involves the process of executing a software application to find defects or bugs. It
verifies that the software behaves as expected and that it meets the functional and non-functional
requirements. Testing can be manual or automated and is conducted at various levels, including unit
testing, integration testing, system testing, and acceptance testing.
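As a small illustration, here is a minimal unit test sketch in Java using JUnit 5; the Calculator class is a hypothetical example, not part of any specific project.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Hypothetical class under test.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    class CalculatorTest {
        @Test
        void addReturnsSumOfOperands() {
            Calculator calculator = new Calculator();
            // Verify the unit produces the expected output for a known input.
            assertEquals(5, calculator.add(2, 3));
        }
    }

A test like this runs at the unit level; the same assertion style scales up through integration, system, and acceptance testing.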
2. Verification
Verification is the process of ensuring that the software was built correctly according to its specifications
and design documents. The goal is to answer the question: "Did we build the software right?" It is
typically performed through reviews, inspections, and static analysis. Verification ensures that the
software adheres to its predefined requirements without actually executing the program.
Example: Reviewing design documents to check if the software architecture aligns with the
system requirements.
3. Validation
Validation is the process of ensuring that the software meets the intended use and fulfills the user's
needs. The goal is to answer the question: "Did we build the right software?" Validation ensures that the
software performs as expected in real-world scenarios and satisfies customer or user expectations. It
involves executing the software and verifying that it provides value to the end-user.
Example: Conducting user acceptance testing (UAT) to verify that the software meets the needs
of the end-users.
Key elements of a strategic approach include:
Planning: Defining clear goals and selecting the right methodologies, tools, and resources for testing and V&V.
Comprehensiveness: Ensuring all aspects of the software (functional, non-functional,
performance, security, etc.) are covered.
Early Involvement: Engaging in V&V activities early in the software development lifecycle,
including during the design and development phases, to identify and resolve issues early.
Risk-Based Testing: Focusing efforts on testing areas of the software that have the highest risk of
failure or impact on the system.
Automation: Leveraging automation where appropriate to improve efficiency, repeatability, and
consistency in testing and verification activities.
Continuous Feedback: Implementing continuous integration and testing, where feedback from
testing is quickly integrated into the development process.
A strategic approach helps in improving software quality, reducing the cost of fixing defects, and
ensuring that the software meets both technical and user requirements effectively.
Software Testing Strategy—The Big Picture
A Software Testing Strategy is a comprehensive plan that outlines the approach and processes used to test a software application or system to ensure it meets the specified requirements and quality standards. The testing strategy defines the scope, objectives, methods, and resources required to conduct testing efficiently and effectively.
1. Testing Objectives
The primary objectives of a software testing strategy are to ensure that the software behaves as expected, meets its functional and non-functional requirements, and is free from critical defects.
2. Scope of Testing
The testing strategy should clearly define the scope of testing to determine what will and will not be
tested. This includes:
Functional testing: Validating whether the software works according to functional specifications
(e.g., unit testing, integration testing, system testing).
Non-functional testing: Evaluating non-functional aspects such as performance, security,
usability, and load testing.
Regression testing: Ensuring that new code changes don't break existing features.
Acceptance testing: Ensuring the software meets end-user requirements and expectations (e.g.,
User Acceptance Testing).
Areas not covered: Clearly stating any aspects not within the scope of testing, such as certain
third-party integrations or legacy systems.
3. Test Levels
A good strategy outlines the test levels applied throughout the software development lifecycle. Common test levels include:
Unit testing: Verifying individual components or functions in isolation.
Integration testing: Verifying that units work together correctly.
System testing: Verifying the complete, integrated system against its requirements.
Acceptance testing: Verifying that the system meets end-user and business expectations.
4. Types of Testing
The strategy defines which types of testing will be employed and how they will be executed; a short black-box test sketch follows this list. These may include:
Manual Testing: Executing test cases manually without automation, often used for
exploratory, usability, or ad-hoc testing.
Automated Testing: Using automated test scripts to test software repeatedly, useful for
regression tests, performance tests, and high-volume tests.
Static Testing: Reviewing code, documentation, and design without execution (e.g.,
code reviews, inspections).
Dynamic Testing: Running the software and checking its behavior during execution.
Black-box Testing: Testing the software’s functionality without knowledge of its internal
workings.
White-box Testing: Testing with an understanding of the internal logic and code
structure.
Gray-box Testing: Combines both black-box and white-box techniques, often focusing
on testing from the user perspective with partial knowledge of the internal system.
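To make the black-box idea concrete, here is a minimal sketch in Java with JUnit 5. The discount rule and its 10-item threshold are assumptions for illustration; the point is that the test cases are derived from the stated specification (its boundary at 10 items), not from reading the implementation.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class DiscountBlackBoxTest {
        // Hypothetical rule standing in for the system under test:
        // assume the spec says orders of 10 or more items get a 20% discount.
        static double discountedPrice(double unitPrice, int quantity) {
            double total = unitPrice * quantity;
            return quantity >= 10 ? total * 0.8 : total;
        }

        // Black-box style: cases chosen around the specified boundary.
        @Test
        void noDiscountJustBelowThreshold() {
            assertEquals(180.0, discountedPrice(20.0, 9), 0.001);
        }

        @Test
        void discountAppliesAtThreshold() {
            assertEquals(160.0, discountedPrice(20.0, 10), 0.001);
        }
    }

A white-box test of the same rule would instead aim to exercise each branch of the implementation explicitly.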
5. Test Environment
The test environment includes the hardware, software, network configurations, databases, and other resources required to execute the tests. The strategy should define the following (a small environment-selection sketch follows the list):
Test setup: Specify which environments are needed (e.g., development, staging, production).
Test data: Identify the data required for testing and how to manage it securely, especially for
tests involving sensitive or personal data.
Configuration management: Managing the configuration of environments to ensure consistency
between tests.
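As a sketch of environment-aware test setup (the property name test.env and the URLs below are assumptions, not a real project's configuration):

    import org.junit.jupiter.api.BeforeAll;

    class EnvironmentAwareTest {
        static String baseUrl;

        @BeforeAll
        static void configure() {
            // Select the target environment from a system property,
            // defaulting to staging so tests never hit production by accident.
            String env = System.getProperty("test.env", "staging");
            baseUrl = switch (env) {
                case "dev" -> "http://localhost:8080";
                case "staging" -> "https://staging.example.com";
                default -> throw new IllegalArgumentException("Unknown test.env: " + env);
            };
        }
    }

Keeping this selection in one place makes environment configuration explicit and consistent across test runs.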
6. Testing Tools and Resources
The strategy should outline the tools and resources required for testing, including:
Test management tools: Tools for organizing and tracking test cases, such as Jira, TestRail, or
Quality Center.
Automation tools: Tools for automated testing, such as Selenium, QTP, or Appium.
Performance testing tools: Tools like JMeter, LoadRunner, or Apache Bench for load and stress
testing.
Defect tracking tools: Tools like Bugzilla or Jira for reporting and managing defects.
CI/CD tools: Integration of testing within continuous integration/continuous deployment (CI/CD)
pipelines using tools like Jenkins, Travis CI, or GitLab.
7. Risk-Based Testing
A key principle in a testing strategy is risk-based testing, which focuses on testing areas with the highest
risk of failure or that would have the most significant impact on the system. The strategy should:
Prioritize testing: Focus on testing the most critical areas (e.g., core functionality, security
features, high-risk components).
Identify risk factors: Identify components, technologies, or areas that have a higher likelihood of
defects or failure.
Allocate resources: Assign resources and effort in line with risk priorities, ensuring that high-risk
areas receive the most thorough testing.
8. Test Automation
Automation is a crucial part of modern testing strategies, especially for repetitive and regression tests. Key components include the following (a small data-driven test sketch follows the list):
Automation scope: Identify which tests are suitable for automation (e.g., repetitive tests, high-
volume scenarios, regression testing).
Automation tools: Choose appropriate tools based on the project needs (e.g., Selenium for web
applications, Appium for mobile apps).
Framework: Establish an automation framework (e.g., keyword-driven, data-driven, behavior-
driven) for consistent and efficient automation.
Automation maintenance: Plan for ongoing maintenance of test scripts and frameworks to keep
them aligned with code changes.
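As a small data-driven sketch using JUnit 5 parameterized tests (the shipping-cost rule is a hypothetical example): one test method is driven by a table of inputs and expected outputs, so adding a case means adding a data row rather than new test code.

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class ShippingCostDataDrivenTest {
        // Hypothetical function under test: flat rate up to 1 kg,
        // then 2.0 per additional kg.
        static double shippingCost(double weightKg) {
            return weightKg <= 1.0 ? 5.0 : 5.0 + (weightKg - 1.0) * 2.0;
        }

        @ParameterizedTest
        @CsvSource({
            "0.5, 5.0",
            "1.0, 5.0",
            "2.0, 7.0",
            "3.5, 10.0"
        })
        void costMatchesRateTable(double weightKg, double expected) {
            assertEquals(expected, shippingCost(weightKg), 0.001);
        }
    }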
9. Test Execution and Reporting
A successful strategy ensures a clear approach for test execution and the regular reporting of test results:
Test execution schedule: Define when and how tests will be executed (e.g., manually, in cycles,
on-demand).
Defect management: Track defects, ensuring they are logged, prioritized, assigned, and resolved
efficiently.
Progress reporting: Regularly report test execution progress to stakeholders, highlighting the
number of tests run, defects found, and overall test status.
10. Metrics and KPIs
Metrics and Key Performance Indicators (KPIs) help measure the success of the testing process and identify areas for improvement. Important metrics to track include test coverage, test pass/fail rates, defect density, and defect resolution time.
11. Continuous Testing
In modern software development methodologies like Agile and DevOps, continuous testing is vital: tests are run automatically as part of the CI/CD pipeline so that every code change is validated quickly and feedback reaches developers early.
12. Review and Continuous Improvement
A successful software testing strategy is iterative and adaptive. Periodically review the strategy to identify areas for improvement, incorporating lessons learned from previous testing cycles:
Post-mortem analysis: After each project or release, assess what went well and what could be
improved.
Process adjustments: Make adjustments to testing practices, tools, or team structure as
necessary.
Conclusion
A well-organized software testing strategy ensures that testing is thorough, efficient, and aligned with
business goals. By focusing on clear objectives, risk-based testing, test automation, and continuous
feedback, organizations can deliver high-quality software that meets user expectations and functions
reliably in real-world conditions.
Unit Testing in the OO Context
Unit testing is the process of testing individual components or units of code (usually methods or functions) in isolation to verify that they behave as expected. In the Object-Oriented (OO) context, unit testing involves testing individual classes or the methods that belong to those classes. Key considerations include:
Testability of Objects: In OO systems, units often have dependencies on other objects. Effective
unit tests should mock or stub these dependencies to isolate the behavior of the unit being
tested.
Encapsulation: OO classes encapsulate state and behavior. Unit tests for methods or functions
should check if the methods correctly manipulate object state and perform the desired behavior.
Inheritance and Polymorphism: When testing inherited classes or polymorphic methods, unit
tests should check whether derived classes behave as expected and whether the correct
methods are called.
Mocking and Stubbing: OO systems often rely on interfaces or abstract classes. In unit tests,
mocking frameworks (e.g., Mockito in Java) are used to mock the behavior of dependencies so
that the unit under test can be isolated from the rest of the system.
Example: Testing a method calculatePrice() in a ShoppingCart class. The test would focus on
ensuring that the method calculates the correct price, considering the objects (products) added to the
cart, while mocking any external dependencies like inventory or pricing services.
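A minimal sketch of that test in Java, using JUnit 5 and Mockito; the PricingService interface and the simplified ShoppingCart below are assumptions for illustration, not a prescribed design:

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.*;

    // Hypothetical external dependency: looks up current unit prices.
    interface PricingService {
        double priceOf(String productId);
    }

    // Simplified cart for illustration.
    class ShoppingCart {
        private final PricingService pricing;
        private final List<String> productIds = new ArrayList<>();

        ShoppingCart(PricingService pricing) { this.pricing = pricing; }

        void add(String productId) { productIds.add(productId); }

        // Total price is the sum of the current prices of all items.
        double calculatePrice() {
            return productIds.stream().mapToDouble(pricing::priceOf).sum();
        }
    }

    class ShoppingCartTest {
        @Test
        void calculatePriceSumsPricesFromMockedService() {
            // Mock the dependency so only the cart's own logic is exercised.
            PricingService pricing = mock(PricingService.class);
            when(pricing.priceOf("book")).thenReturn(12.50);
            when(pricing.priceOf("pen")).thenReturn(2.50);

            ShoppingCart cart = new ShoppingCart(pricing);
            cart.add("book");
            cart.add("pen");

            assertEquals(15.00, cart.calculatePrice(), 0.001);
            verify(pricing).priceOf("book"); // the cart consulted the service
        }
    }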
Integration Testing in the OO Context
Integration testing in the Object-Oriented context focuses on testing how different objects or components of the system work together as a whole. It is important to ensure that the interactions between classes or subsystems function as expected. Key aspects include:
Inter-object Communication: Objects often interact with each other through method calls, data
exchanges, and message passing. Integration testing ensures that when one object calls methods
on another, the correct results are produced.
Real Dependencies: While unit tests focus on isolated components, integration tests verify that
real dependencies (such as databases, network services, or external systems) are functioning
together with the object in question.
Subsystems and Layers: In complex OO systems, multiple subsystems (such as database access
layers, business logic layers, and UI layers) interact. Integration testing ensures that these
subsystems work together as expected.
State Changes: Since OO systems often have objects that hold state, integration tests should
check whether changes to one object’s state correctly propagate to other objects that rely on it.
Example: In an e-commerce system, an integration test might verify that when a ShoppingCart object
communicates with the InventoryService to check product availability, the correct inventory data is
retrieved and the cart's state is updated accordingly.
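A sketch of that integration test, assuming a simple in-memory InventoryService (the class shapes below are illustrative): unlike the unit test above, both objects are real and the assertion covers their interaction.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // A real (in-memory) dependency, not a mock.
    class InventoryService {
        private final Map<String, Integer> stock = new HashMap<>();
        void addStock(String productId, int quantity) {
            stock.merge(productId, quantity, Integer::sum);
        }
        boolean isAvailable(String productId) {
            return stock.getOrDefault(productId, 0) > 0;
        }
    }

    class ShoppingCart {
        private final InventoryService inventory;
        private final List<String> items = new ArrayList<>();
        ShoppingCart(InventoryService inventory) { this.inventory = inventory; }
        // The cart's state changes only when the inventory confirms availability.
        boolean add(String productId) {
            if (!inventory.isAvailable(productId)) return false;
            items.add(productId);
            return true;
        }
        int itemCount() { return items.size(); }
    }

    class CartInventoryIntegrationTest {
        @Test
        void cartStateReflectsInventoryData() {
            InventoryService inventory = new InventoryService();
            inventory.addStock("laptop", 1);

            ShoppingCart cart = new ShoppingCart(inventory);
            assertTrue(cart.add("laptop"));  // in stock: added to the cart
            assertFalse(cart.add("tablet")); // out of stock: state unchanged
            assertEquals(1, cart.itemCount());
        }
    }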
Test Strategies for Web Applications
Testing strategies for web applications need to cover a wide range of scenarios, as these applications often deal with user interactions, multiple devices, cross-browser compatibility, and communication between the front-end and back-end layers. Some important strategies for web application testing include:
Functional Testing: Verifying that all features and functions of the web application work
correctly. This includes testing user flows, form submissions, and interactions.
Cross-browser Testing: Ensuring the application works as expected across different web
browsers (Chrome, Firefox, Safari, Edge) and versions.
Responsive Design Testing: Checking that the web application is usable across different screen
sizes and devices, including mobile phones, tablets, and desktops.
Performance Testing: Testing how the application performs under load, ensuring it can handle
multiple concurrent users, large amounts of data, and traffic spikes (e.g., using tools like JMeter).
Security Testing: Ensuring that the web application is secure, particularly in areas like
authentication, session management, and data protection (e.g., SQL injection, cross-site scripting
(XSS), and cross-site request forgery (CSRF) vulnerabilities).
Usability Testing: Validating the ease of use and user experience (UX) of the application to
ensure it meets user needs.
Example: Running tests to verify that the user login functionality works correctly across all browsers, that sensitive data is encrypted during transmission, and that the application remains responsive under heavy user load.
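A minimal sketch of an automated login check using Selenium WebDriver (Java, Selenium 4). The URL, element IDs, and credentials are assumptions for illustration; a cross-browser run would repeat the same steps with different WebDriver implementations (FirefoxDriver, EdgeDriver, and so on).

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class LoginFlowTest {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver(); // assumes a local Chrome setup
            try {
                driver.get("https://example.com/login"); // hypothetical URL
                driver.findElement(By.id("username")).sendKeys("testuser");
                driver.findElement(By.id("password")).sendKeys("not-a-real-password");
                driver.findElement(By.id("login-button")).click();

                // Wait for the post-login page instead of sleeping a fixed time.
                new WebDriverWait(driver, Duration.ofSeconds(10))
                        .until(ExpectedConditions.urlContains("/dashboard"));
                System.out.println("Login flow succeeded.");
            } finally {
                driver.quit();
            }
        }
    }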
Validation Testing
Validation testing ensures that the software meets the intended use and fulfills the user’s needs and
business requirements. It focuses on “building the right product” rather than just building the product
right. Validation tests confirm that the system behaves as expected in real-world scenarios.
Common validation activities include:
User Acceptance Testing (UAT): Typically performed by end-users or clients to ensure the system meets their expectations and requirements.
Alpha and Beta Testing: Internal and external testing phases to gather feedback from users to
validate the product's functionality, usability, and performance in a real-world environment.
System Validation: Ensuring the complete system works together as intended and that business
goals are achieved.
Example: In an e-commerce application, validation testing might include end-users validating whether
they can successfully search for products, place an order, and make a payment using real payment
methods.
Validation-Test Criteria
Validation-test criteria define the conditions that must be met for a system or software to be considered
successfully validated. These criteria are typically based on the system’s functional requirements,
business objectives, and user needs.
Typical validation-test criteria include:
Correctness: The system produces correct and expected outputs for given inputs.
User Requirements: The system must meet the business requirements or specifications that
were defined during the project initiation phase.
Compliance: The system must adhere to any applicable regulations or standards (e.g., data
privacy regulations, security standards).
User Experience (UX): The system must provide an acceptable user experience, which includes
intuitive interfaces, ease of use, and responsiveness.
System Behavior: The system must function correctly under expected operating conditions,
without major failures or issues.
Example: In a healthcare application, validation-test criteria might include verifying that patient records
are correctly displayed, patient data is securely handled, and that the software complies with HIPAA
regulations.
Configuration Review
A configuration review is the process of reviewing the configuration of the software and its components
to ensure that it meets the required standards and is ready for deployment.
Key areas to review include:
Version Control: Ensuring that the correct versions of code, libraries, and components are being used. This might involve reviewing Git branches, tags, and version numbers.
Build and Deployment Configuration: Verifying that the build pipeline, deployment
configurations, and scripts are correctly set up and tested.
Environment Configuration: Ensuring that configurations such as database settings, network
settings, and third-party service integrations are correctly specified for development, staging,
and production environments.
Documentation: Ensuring that configuration settings and decisions are properly documented for
future reference or for team members.
Example: A configuration review for a web application might check whether the correct web server,
database settings, environment variables, and deployment scripts are being used for the production
environment.
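Parts of a configuration review can be automated. Here is a minimal sketch of a pre-deployment check in Java; the environment-variable names are assumptions for illustration:

    import java.util.List;

    public class ConfigReviewCheck {
        public static void main(String[] args) {
            // Hypothetical variables the production environment must define.
            List<String> required = List.of("DB_URL", "DB_USER", "DB_PASSWORD", "APP_ENV");
            List<String> missing = required.stream()
                    .filter(name -> System.getenv(name) == null)
                    .toList();

            if (!missing.isEmpty()) {
                System.err.println("Missing required configuration: " + missing);
                System.exit(1); // fail fast before a broken deployment
            }
            System.out.println("All required configuration variables are present.");
        }
    }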
Summary
These testing approaches and strategies cover various stages and types of software testing, each with its own focus. When building OO systems or web applications, employing a structured and comprehensive testing strategy is crucial for delivering high-quality, user-friendly, and reliable software. Here's a quick recap:
Unit testing in the OO context: Verify individual classes and methods in isolation, mocking their dependencies.
Integration testing in the OO context: Verify that objects, subsystems, and real dependencies interact correctly.
Web application testing: Cover functionality, cross-browser behavior, responsiveness, performance, security, and usability.
Validation testing: Confirm that the software meets user needs and business requirements.
Validation-test criteria: Define the conditions under which the system is considered successfully validated.
Configuration review: Confirm that versions, builds, and environment settings are correct and deployment-ready.
These testing techniques help ensure that the software meets its design specifications, satisfies user
needs, and works as intended in the real world.