QA Notes 1
What is Quality?
What is Assurance?
Project vs Product

Project:
- The main goal of a project is to form a new product that has not already been made.
- A project is undertaken to form new software.

Product:
- The main goal of the product is to complete the work successfully (solve a specific problem).
- The product is the final production of the project.

Verification vs Validation

Verification:
- "Are we creating the product correctly?" is checked.
- Accomplished without running the program.
- Checks whether the software conforms to specifications or not.
- Can find bugs in the early stages of development.

Validation:
- "Are we developing the proper product?" is checked.
- Finished with the software's execution.
- Checks whether the software meets the requirements and expectations of the customer or not.
- Can only find the bugs that could not be found by the verification process.
QA
● Cost-Effective
● Security
● Product quality
● Customer Satisfaction
Software Testing Principles
● Error -> A mistake, which may occur due to different reasons such as time pressure, inexperienced or insufficiently skilled project participants, miscommunication between project participants, complexity of the code design, unfamiliar technologies, etc.
● Defect -> A fault or bug: a deviation in the software program from the end user's or the original business requirements. Arises due to a coding fault.
● Failure -> Caused by defects in the code or other environmental factors. A failure can also be a false positive.
Test Activities
• Test planning
• Test analysis
• Test design
• Test implementation
• Test execution
• Test completion
Software Development Life Cycle
SDLC Methodologies
1) Waterfall
2) Iterative
3) V-shape
4) Agile
Waterfall Methodology
Iterative Methodology
V-Shape Methodology
Agile Methodology
Agile Framework
Scrum
- Planning meetings
- Commitment Meetings
- Daily Standup Meeting
- Demo Meeting
- Retrospective Meeting
Kanban
- Continuous flow
- Does not require roles
- Focuses on cycle time
Software Testing Life Cycle
SDLC vs STLC
Types of Testing
1) Manual Testing
a) Testing software manually without using any automation tools or scripts.
b) There are different stages for manual testing such as unit testing, integration testing, system
testing, and user acceptance testing.
c) Testers use test plans, test cases, or test scenarios to test software to ensure the
completeness of testing.
d) Manual testing also includes exploratory testing, as testers explore the software to identify
errors in it.
2) Automation Testing
a) The tester writes scripts and uses other software to test the product.
b) Involves the automation of a manual process.
c) It increases the test coverage, improves accuracy, and saves time and money when
compared to manual testing.
White Box Testing
- The purpose is to emphasize the flow of inputs and outputs through the software and enhance the security of an application.
- It is used for verification. Here, we focus on internal mechanisms, i.e., how the output is achieved.
- Examples:
- Checking whether a loop, method, or function in the program is working fine
- Misunderstood or incorrect arithmetic precedence
- Incorrect initialization

Black Box Testing
Integration Testing
- Tests the data flow between dependent modules or the interfaces between two features.
- Performed to find defects in the interfaces, communication, and data flow among modules.
Types:
1) Incremental Testing
2) Non-Incremental Testing
1) Incremental Testing
- Incrementally add the modules in ascending order and test the data flow between them.
- If these modules are working fine, we can add one more module and test again, continuing the same process for better results.
Example:
Suppose we have the Daraz application, and the flow of the application would be like:
Types:
a) Top-Down Incremental Testing
- Higher-level modules are tested with lower-level modules until all the modules are tested successfully.
- Major design flaws can be detected and fixed early because critical modules are tested first.
- Modules are added step by step, ensuring that each new module is a child of the earlier ones.
Example: The CEO sends the requirement to the manager, who sends it further to the team lead, who sends it further to the test engineers for testing. Here, the CEO is the parent.
b) Bottom-Up Incremental Testing
- Modules are added ensuring that each new module is the parent of the earlier ones.
2) Non-Incremental Testing
- We go for this method when the data flow is very complex and it is difficult to identify which module is the parent and which is the child.
- It is convenient for small software systems; if used for large software systems, identification of defects is difficult.
Sanity Testing
- Sanity tests ensure that any changes made do not impact other functionalities of the software build.
- In QA, sanity testing is a part of regression testing.
- Limited functionalities are covered.
- Carried out on relatively stable builds.
Regression Testing

Acceptance Testing
- Performed to determine whether or not the software system has met the requirement specifications.
- Carried out in a production-like testing environment.
Types:
Beta Testing
- The beta test is conducted at one or more customer sites by the end-users of the software.
- This version is released to a limited audience to check the accessibility, usability, functionality, and more.
- Performed after alpha testing and before the release of the final product.
- Beta testing helps to get direct feedback from users.
Security Testing
- It helps us minimize security risks in production and the related costs of the software.
a) Performance Testing
b) Usability Testing
c) Compatibility Testing
Performance Testing
- The test engineer will test the working of an application by applying some
load.
- Focuses on several aspects, such as Response time, Load, scalability, and
Stability of the software or an application.
Types:
1) Load Testing
2) Stress Testing
3) Scalability Testing
Load Testing
Importance:
- To ensure that the system can handle the projected increase in user traffic, data volume, transaction counts, frequency, etc.
- It tests the response time under heavy request load.
- In scalability testing, the load is varied gradually.
Usability Testing

Why Automation Testing?
● Used to test the application, as it offers better results with less effort and time.
● Some organizations still perform only manual testing because they are not fully aware of the automation testing process.
● Now, many are aware of automation testing and are executing test automation procedures in their application development process.
● Implementing automation testing requires a considerable investment of resources and money.
Automation Testing Process
What are the Top 3 Things You Should Consider Before Selecting the Best Software Automation Testing Tools?
3. Select the appropriate automation testing team, one that can use any kind of tool.
Some of the Automation tools:
● Selenium
● Appium
● Katalon Studio
● Cucumber
● SoapUI
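As an illustration of what these tools do, here is a minimal Selenium (Python) sketch of an automated login check. The URL and element IDs are hypothetical placeholders, not a real application.

```python
# A minimal Selenium sketch of an automated login check.
# The URL and element IDs are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # requires a Chrome/ChromeDriver setup
driver.get("https://example.com/login")

driver.find_element(By.ID, "username").send_keys("test_user")
driver.find_element(By.ID, "password").send_keys("secret")
driver.find_element(By.ID, "login-button").click()

# Verify the expected result, e.g. that the dashboard page is shown
assert "Dashboard" in driver.title
driver.quit()
```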
Test Platforms
1. Static Techniques:
- Software products are tested or examined manually, or with the help of different automation tools, but without being executed.
Different types of defect causes can be found by this technique, such as:
○ Missing requirements
○ Design defects
○ Deviation from standards
○ Inconsistent interface specification
○ Non-maintainable code
○ Insufficient maintainability, etc.
2. Dynamic Techniques:
- Software is tested by executing the program or system. Different types of defects can be found by this technique, such as:
○ Functional defects
i. These defects arise when the functionality of the system or software does not work as per the Software Requirement Specification (SRS).
ii. The software product might not work properly or might stop working.
iii. These defects are simply related to the working of the system.
○ Non-functional defects
i. A defect that largely affects the software product's non-functional aspects.
ii. These defects can affect performance, usability, etc.
3. Operational Techniques:

Severity
● Critical – These defects reflect crucial functionality deviations in the software application, without fixing which a QA tester cannot validate the application. For example, even after entering the correct ID and password, the user cannot log in and access the application; this is a critical defect. Sev 1: crash, hang, data loss.
● Major – These defects are found when a crucial module in the application is malfunctioning, but the rest of
the system works fine. The QA team needs to fix these issues, but it can also validate the rest of the
application irrespective of whether the major defect is fixed or not. For example, not being able to add
more than one item in the cart on an e-commerce website is a significant defect but not critical as the user
will still be able to shop for the one item. Sev 2: blocks feature, no workaround
● Medium – These defects are related to issues in a single screen or single function, but they do not affect the system's functioning as a whole. The defects do not block any functionality. Example: items are added to the cart, but a message saying "Items not added" is displayed. Sev 3: blocks feature, workaround available.
● Low – These defects do not impact the software application's functionality at all. They can be related to UI inconsistency, cosmetic defects, or suggestions to improve the user's UI experience. Example: a misalignment or spelling mistake on the "Terms/Conditions" page of the website is a trivial defect. Sev 4: trivial (e.g. cosmetic).
Priority
● Urgent – Immediate resolution of this category of defect is required, as these defects can severely affect the application and cause costly repairs if left untreated. For example, a misspelled company name might not be a high or critical severity defect, but it is an immediate priority since it affects the business. Pri 1: Fix immediately.
● High – Immediate resolution of these defects is essential, as they affect application modules that are rendered useless until the defects are fixed. For example, being unable to add products to the shopping cart belongs to the high priority category. Pri 2: Fix before the next release outside the team.
● Medium – These defects are not as essential and can be scheduled to be fixed in later releases. For example, if even after a successful login an error message of "login id/password invalid" is prompted, it is a medium priority error. Pri 3: Fix before ship.
● Low – Once the above critical defects are fixed, the tester may or may not fix these defects. For example, the contact tab is not located on the home page and is hidden inside another menu in the navigation bar. Pri 4: Fix if there is nothing better to do.
Defect Life Cycle
- The specific set of states that a defect or bug goes through in its entire life.
- Its purpose is to easily coordinate and communicate the current status of a defect as it changes across various assignees, and to make the defect-fixing process systematic and efficient.
Bug Report
1. Title/Bug ID
2. Environment
a. Device Type
b. OS
c. Software version
d. Rate of Reproduction
3. Description
4. Steps to reproduce a Bug
5. Expected Result
6. Actual Result
7. Visual Proof (screenshots, videos, text) of Bug
8. Severity/Priority
Others:
1) Assign to
2) Reporter
3) Module
4) Project Name
Test techniques
● Equivalence Partitioning
● Boundary Value Analysis
● Decision Table Testing
● State Transition Testing
Boundary Value Analysis
- Boundary Value Analysis is based on testing at the boundaries between partitions.
- Boundary refers to values near the limit where the behavior of the system changes.
- It includes maximum, minimum, inside and outside boundaries.
- In boundary value analysis, both valid and invalid inputs are being tested to verify the
issues.
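A minimal sketch of boundary value analysis in Python, assuming a hypothetical age field whose valid range is 18-60; the six test values sit at and just around the two boundaries.

```python
# Boundary Value Analysis sketch: a hypothetical age field that accepts 18-60.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Test values sit at and around each boundary:
# minimum - 1, minimum, minimum + 1, maximum - 1, maximum, maximum + 1
boundary_cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"age {age}: expected {expected}"
print("All boundary cases passed")
```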
Equivalence Partitioning
- The input domain data is divided into different equivalence data classes.
- This method is typically used to reduce the total number of test cases to a finite set of testable test cases, while still covering maximum requirements.
- One test value is picked from each class while testing.
- Equivalence Partitioning uses the fewest test cases to cover the maximum requirements.
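Continuing the same hypothetical age field, this sketch partitions the input domain into three equivalence classes and tests one representative value per class.

```python
# Equivalence Partitioning sketch for a hypothetical age field (valid: 18-60).
# The input domain splits into three classes; one representative per class.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

partitions = [
    (5,  False),  # invalid class: below the valid range (any value < 18)
    (35, True),   # valid class: inside the range (any value 18-60)
    (80, False),  # invalid class: above the valid range (any value > 60)
]

for representative, expected in partitions:
    assert is_valid_age(representative) == expected
print("One representative per class covered the whole input domain")
```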
Decision Table Testing
Example (login form):
Conditions       Rule 1   Rule 2   Rule 3   Rule 4
Username (T/F)   F        T        F        T
Password (T/F)   F        F        T        T
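Each column (rule) of the decision table maps directly onto one parametrized test case. This pytest sketch uses a hypothetical login() stand-in for the system under test.

```python
# Decision-table sketch: each column of the table above becomes one test rule.
# login() is a hypothetical stand-in for the system under test.
import pytest

def login(username_ok: bool, password_ok: bool) -> bool:
    return username_ok and password_ok  # expected behaviour: both must be true

@pytest.mark.parametrize("username_ok, password_ok, expected", [
    (False, False, False),  # Rule 1: both invalid -> reject
    (True,  False, False),  # Rule 2: valid username only -> reject
    (False, True,  False),  # Rule 3: valid password only -> reject
    (True,  True,  True),   # Rule 4: both valid -> accept
])
def test_login_rules(username_ok, password_ok, expected):
    assert login(username_ok, password_ok) == expected
```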
State Transition Testing
- Allows the tester to test the behaviour of the AUT (Application Under Test).
- The tester can perform this by entering various input conditions in a sequence.
- In the state transition technique, the testing team provides positive as well as negative input test values for evaluating the system behaviour.
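A small state-transition sketch, assuming a hypothetical account that locks after three failed login attempts; the test drives both negative and positive inputs through the transitions.

```python
# State-transition sketch: a hypothetical account that locks after 3 failed logins.
class Account:
    def __init__(self):
        self.state = "LOGGED_OUT"
        self.failures = 0

    def login(self, password_ok: bool):
        if self.state == "LOCKED":
            return self.state
        if password_ok:
            self.state, self.failures = "LOGGED_IN", 0
        else:
            self.failures += 1
            if self.failures >= 3:
                self.state = "LOCKED"
        return self.state

acc = Account()
# Negative inputs drive the LOGGED_OUT -> LOCKED transition path
assert acc.login(False) == "LOGGED_OUT"
assert acc.login(False) == "LOGGED_OUT"
assert acc.login(False) == "LOCKED"
# A positive input after locking must not transition to LOGGED_IN
assert acc.login(True) == "LOCKED"
```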
Error Guessing
Requirement Analysis
- It is a process used to determine the needs and expectations of a new product.
- It involves frequent communication with the stakeholders and end-users of the product to define
expectations, resolve conflicts, and document all the key requirements.
- Software requirements can be broadly classified into two groups:
● Functional or problem domain requirements
a. In a problem domain, the focus is on the functional or business requirements.
b. It is recommended that you create a domain model of your functional requirements before
you start thinking of the solution domain.
c. Functional requirements describe the functionalities, capabilities, and activities a system must be able to perform; they specify the overall behavior of the system to be developed.
● Non-functional or solution domain requirements
a. In a solution domain, we focus on how to deliver the solution for functional or business
requirements
Requirement Analysis Process
1) Identify Key Stakeholders and End-Users
2) Capture Requirements
a) Hold One-on-One Interviews
b) Use Focus Groups
c) Utilize Use Cases
d) Build Prototypes
3) Categorize Requirements
4) Interpret and Record Requirements
a) Define Requirements Precisely
b) Prioritize Requirements
c) Carry Out an Impact Analysis
d) Resolve Conflicts
e) Analyze Feasibility
5) Sign off
Roles of QA in requirement analysis
● Analyze each and every requirement from the specification document and use cases.
● List down high level scenarios.
● Clarify queries and functionality from stakeholders.
● Promote suggestions to implement the features or any logical issues.
● Raise defects or clarifications against the specification document.
● Track the defects or clarifications raised against the specification document.
● Create high level Test Scenarios.
● Create Traceability Matrix.
Outcome of the Requirement Analysis Phase
Test Plan
- A detailed document that catalogs the test strategy, objectives, schedule, estimations, deadlines, and the resources required for completing that particular project.
- The Test Plan is a document that acts as a point of reference, and testing within the QA team is carried out only based on it.
- It is also a document that we share with the Business Analysts, Project Managers, Dev team
and the other teams. This helps to enhance the level of transparency of the QA team’s work
to the external teams.
- It is documented by the QA manager/QA lead based on the inputs from the QA team
members.
- This plan is not static and is updated on an on-demand basis.
- The more detailed and comprehensive the plan is, the more successful the testing activity will be.
Importance of Test Plan
• It guides our thinking; forces us to confront the challenges that await us and focus our thinking
on important topics.
• Helps people outside the test team, such as developers, business managers, and customers, understand the details of testing.
• Important aspects like test estimation, test scopes, test strategy are documented in the Test
plan, so it can be reviewed by the Management team and reused for other projects.
Values of Test Plan
• Test objects and the objectives: what are you going to test (and what not)?
• Planning: when will you be carrying out which test activities, and whom do you need?
Pros and Cons of Test Plan
Pros Cons
Out of scope => Enhanced clarity on what we are not going to cover
Assumptions => All the conditions that need to hold true for us to be able to proceed
successfully
Deliverables => - What documents (test artifacts) are going to be produced, and in what time frames?
- What can be expected from each document?
Example: Test scenarios for a login page
Functional:
1) Verify that as soon as the login page opens, the cursor remains on the username textbox by default.
2) Check if the password is in masked form when typed in the password field.
3) Check if the password can be copy-pasted or not
4) Check system behavior when valid email id and password is entered.
5) Check system behavior when invalid email id and valid password is entered.
6) Check system behavior when valid email id and invalid password is entered.
7) Check system behavior when invalid email id and invalid password is entered.
UI:
1) Check that the font type and size of the labels and the text written on the different elements should be clearly visible.
2) Verify that the size, color, and UI of the different elements are as per the specifications.
Security:
1) Verify that there is a limit on the total number of unsuccessful login attempts.
2) Verify that once logged in, clicking the back button doesn't log the user out.
Test Cases
• Ensures good test coverage (key functionality isn't missed in the testing process).
• Allows the tester to think thoroughly through different ways of validating features.
• Negative test cases are also documented, which can often be overlooked.
• They are reusable for the future, anyone can reference them and execute the test
Some Points to consider while writing test cases
- Before we start writing a test case, come up with options, select the best one, and only then start writing the test case.
- In the Expected Result, use "should be" or "must be".
- Elaborate only those steps we focus on; do not elaborate on all the steps.
- Highlight object names.
- Do not hard-code the test case; write a generic test case.
- Organize the steps properly so that execution time is reduced.
Procedures to Write Test Cases
1) System Study
2) Identify all possible test scenarios
3) Apply test design technique, using standard template
4) Review the test cases
5) Fix the review comment given by the reviewer
6) Test case approval
Efficient ways of writing test cases
1. Simple and transparent
3. Avoid repetition
4. Do not Assume
8. Self-cleaning
Test cases parameters
● Test case id
● Test Scenarios
● Assumptions/ Pre-Condition
● Steps to be executed
● Test data: Variables and their values
● Expected Result
● Actual result
● Status (Pass/Fail)
● Comments
Simple format of test case
Test Case ID: TC_01
Test Scenario: Verify the login of Gmail
Pre-Condition: Need a valid Gmail account to log in
Test Steps:
1. Enter username
2. Enter password
3. Click on the login button
Test Data: <Valid Username>, <Valid Password>
Expected Result: Successful login
Actual Result: Login successful
Post Condition: Gmail inbox is shown
Status: PASS

Test Case ID: TC_02
Test Scenario: Verify the login of Gmail with invalid password
Pre-Condition: Need a valid Gmail account to log in
Test Steps:
1. Enter username
2. Enter password
3. Click on the login button
Test Data: <Valid Username>, <Invalid Password>
Expected Result: A message "The email and password you entered don't match" should be shown
Actual Result: Unable to log in with invalid password
Status: FAIL
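For comparison, this is roughly how TC_01 and TC_02 might look once automated; gmail_login() is a hypothetical stand-in for the real login flow, not Gmail's actual API.

```python
# How TC_01 and TC_02 above might look once automated.
# gmail_login() is a hypothetical stand-in for the real login flow.
def gmail_login(username: str, password: str) -> str:
    if username == "valid_user" and password == "valid_pass":
        return "Login successful"
    return "The email and password you entered don't match"

def test_tc_01_valid_login():
    # Pre-condition: a valid account exists (represented by valid credentials)
    assert gmail_login("valid_user", "valid_pass") == "Login successful"

def test_tc_02_invalid_password():
    result = gmail_login("valid_user", "wrong_pass")
    assert result == "The email and password you entered don't match"
```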
Test Scenario vs Test Cases

Test Scenario:
- A test scenario is a concept.
- A test scenario is a high-level functionality.
- Test scenarios are derived from requirements/user stories.
- A test scenario is 'what functionality is to be tested'.
- A single test scenario is never repeatable.
- Brainstorming sessions are required to finalize a test scenario.

Test Cases:
- Test cases are the solutions to verify that concept.
- Test cases are detailed procedures to test the high-level functionality.
- Test cases are derived from test scenarios.
- Test cases are 'how to test the functionality'.
- A single test case may be used multiple times in different scenarios.
- Detailed technical knowledge of the software application is required.
Test Data
- Data created or selected to satisfy the execution preconditions and input content required to
execute one or more test cases.
- A crucial part of most functional tests.
- Typically, test data is created in sync with the test case it is intended to be used for.
How test data can be generated?
● Manually
a. The test data is generally created by the testers using their own skills and judgments.
● Back-end data injection
a. This is done through SQL queries (a minimal sketch follows after this list).
b. This approach can also update the existing data in the database.
c. It is speedy and efficient, but should be implemented very carefully so that the existing database does not get corrupted.
● Third-Party Tools
● Automated Test Data Generation Tools
a. high level of accuracy.
b. better speed and delivery of output with this technique.
c. helps in saving a lot of time as well as generating a large volume of accurate data.
● Mass copy of data from production to testing environment
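A minimal sketch of the back-end data injection approach mentioned above, using Python's built-in sqlite3 module; the schema and rows are hypothetical test data.

```python
# Back-end test data injection sketch using SQL (via Python's built-in sqlite3).
# The schema and rows are hypothetical test data.
import sqlite3

conn = sqlite3.connect(":memory:")  # a real setup would target the test database
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, active INTEGER)")

test_users = [
    (1, "valid.user@example.com", 1),  # valid data set
    (2, "", 1),                        # no-data case
    (3, "not-an-email", 0),            # invalid data set
]
conn.executemany("INSERT INTO users VALUES (?, ?, ?)", test_users)
conn.commit()

# The injected rows now satisfy the preconditions of the relevant test cases
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
assert count == 3
```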
How to Prepare Data that will Ensure Maximum Test Coverage?
1) No data: Run your test cases on blank or default data. See if proper error messages are generated.
2) Valid data set: Create it to check if the application is functioning as per requirements and valid input data is properly saved in
database or files.
3) Invalid data set: Prepare invalid data set to check application behavior for negative values, alphanumeric string inputs.
4) Illegal data format: Make one data set of illegal data format. The system should not accept data in an invalid or illegal format.
Also, check proper error messages are generated.
5) Boundary Condition dataset: Dataset containing out of range data. Identify application boundary cases and prepare data set
that will cover lower as well as upper boundary conditions.
6) The dataset for performance, load and stress testing: This data set should be large in volume.
Test Data Qualities
1) Realistic
2) Practically valid
3) Versatile to cover scenarios
4) Exceptional data
Test Case Review
● Preparation: During the first stage of the process the team prepares a test strategy and
test cases, which are then used by them to test run the software product.
● Execution: After creating test cases and strategy, the team finally executes them, while
monitoring the results and comparing them with the expected results.
● Verification and Validation: During this stage of the process, the test results are verified
& validated as well as recorded and reported by the team in the form of test summary/test
execution report.
Test Execution States

Test Coverage
- Defined as a metric in software testing that measures the amount of testing performed by a set of tests.
Example:
Suppose you are testing the “Login” functionality of a product; just do not concentrate on checking the login button
functionality and validation of the fields. There are other aspects to look for such as the UI part or if the page displayed is
user friendly or whether the application crashes while clicking on login.
What Test Coverage does?
• Finds the areas in the specified requirements which are not covered by the test scenarios and cases.
• Identifies test cases that are not meaningful to execute, so they can be omitted.
• Provides a quantitative measure of test coverage, which is an indirect method of quality check.
• Can determine all the decision points and paths used in the application, which allows test coverage to be increased.
• Can help to determine the paths in application that were not tested.
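As a quantitative illustration, requirement-level coverage can be computed from a traceability mapping between requirements and test cases; the requirement IDs here are hypothetical.

```python
# A small sketch of requirement-level test coverage as a quantitative measure.
# Requirement IDs and the mapping are hypothetical.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
covered_by_tests = {"REQ-1", "REQ-2", "REQ-4"}  # requirements traced to tests

coverage = len(covered_by_tests & requirements) / len(requirements) * 100
print(f"Test coverage: {coverage:.0f}%")                      # -> 75%
print("Uncovered:", sorted(requirements - covered_by_tests))  # -> ['REQ-3']
```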
Test Strategy
Example: A Test Strategy includes details like “Individual modules are to be tested by the test team members”. In
this case, who tests it does not matter – so it’s generic and the change in the team member does not have to be
updated, keeping it static.
Test Script
- Test scripts are a line-by-line list of all the activities that must be performed and tested on various
user journeys.
- It lays down each action to be followed, along with the intended outcomes.
- An automated test script helps software testers to test each level systematically on a wide range of devices.
How to write test scripts?
1) Record/playback
2) Keyword/data-driven scripting
a) The tester defines the test using keywords rather than the underlying code (see the sketch after this list).
b) The developers' task here is to implement the test script code for the keywords and to update it as required.
3) Writing Code Using the Programming Language
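A minimal sketch of the keyword/data-driven idea from option 2 above: testers author keyword rows, while the keyword-to-code mapping lives separately; all names here are hypothetical.

```python
# Keyword-driven scripting sketch: testers write keyword rows, and the
# keyword-to-code mapping is maintained separately. All names are hypothetical.
def open_page(url):
    print(f"opening {url}")

def type_text(field, value):
    print(f"typing '{value}' into {field}")

def click(element):
    print(f"clicking {element}")

KEYWORDS = {"open": open_page, "type": type_text, "click": click}

# The "test script" a non-programmer might author: keyword + arguments
test_table = [
    ("open",  ["https://example.com/login"]),
    ("type",  ["username", "test_user"]),
    ("type",  ["password", "secret"]),
    ("click", ["login-button"]),
]

for keyword, args in test_table:
    KEYWORDS[keyword](*args)  # dispatch each row to its implementation
```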
Tips for creating a Test Script
1) Clear:
a) Your test script should be clear.
b) You need to constantly verify that each step in the test script is clear, concise, and coherent. This helps to
keep the testing process smooth.
2) Simple:
a) You should create a test script that should contain just one specific action for testers to take. This makes
sure that each function is tested correctly and that testers do not miss steps in the software testing
process.
3) Well-thought-out:
a) To write the test script, you need to put yourself in the user’s place to decide which paths to test.
b) Should be creative enough to predict all the different paths that users would use while running a system or
application.
When Should You Use a Test Script?
The following are the justifications for utilizing the Test Script.
● The most reliable way to ensure that nothing is skipped and that the findings match the
desired testing strategy is to use a test script.
● It leaves a lot less room for mistakes throughout the testing process when the test script is prepared.
● Testers are sometimes given free rein over the product, making them prone to overlooking key details.
● Without a script, when a function does not provide the anticipated result, the tester may assume it is correct.
● It's especially helpful when the user's performance is critical and specific.
Test Result
For example, if the test report informs that there are many defects remaining in the product,
stakeholders can delay the release until all the defects are fixed.
Test Report
Benefit of Test Report
Test Report Writing Tips
1) Details
2) Clearness
3) Standardization
4) Specification
Test Summary Report
Mobile Testing
1) Hardware Testing
2) Software Testing

Types of Mobile Apps
1) Native apps
2) Web apps
3) Hybrid apps
Testing for Different Types of Apps
1) For native apps:
a) Device compatibility
b) Utilization of device features
2) For hybrid apps:
a) Interaction of the app with the device native features
b) Potential performance issues
c) Usability (look and feel) compared to native apps on the platform in question
3) For web apps:
a) Testing to determine cross-browser compatibility of the app to various common mobile
browsers
b) Utilization of OS features (e.g., date picker and opening appropriate keyboard)
c) Usability (look and feel) compared to native apps on the platform in question
Types of Mobile App testing
Some of the key mobile testing types:
● Usability testing
● Compatibility testing
● Interface testing
● Services testing
● Performance testing
● Installation tests
● Security testing
● Storage testing
● Input testing
Types of mobile devices
1) Real devices
2) Virtual devices
a) Emulator
b) Simulator
Challenges of Mobile testing
● Multiple platforms and device fragmentation: Multiple OS types and versions, screen sizes and quality of
display.
● Hardware differences in various devices: Various types of sensors and difficulty in simulating test conditions for
constrained CPU and RAM resources.
● Variety of software development tools required by the platforms.
● Difference of user interface designs and user experience (UX) expectations from the platforms.
● Diverse users and user groups.
● Various app types with various connection methods.
● High feedback visibility resulting from bugs that have a high impact on users which may easily result in them
publishing feedback on online marketplaces.
● Unavailability of newly launched devices requiring the use of mobile emulators/simulators
Impacts of the Challenges
1) Planning
2) Identifying testing types
3) Test case and script design
4) Manual and automated testing
5) Usability testing
6) Performance testing
7) Functional testing
8) Security testing
9) Device testing
10) Launch Plan
Mobile Application Testing Strategies
1) Screen size
2) Storage
3) Performance Speed
4) Internet access
5) Cross-platform compatibility
6) Offline mode
Some Tips For Effective Mobile Testing
● Test early and test often by using testing as part of your app development
● Split your app testing into smaller units
● Commit your efforts towards performance and load testing
● Distribute the testing efforts across the entire team members including developers
● Include experts or experienced people in your QA team
● Know the platform’s (Android or iOS) user interface/user experience (UI/UX) guidelines
before starting the mobile app testing
● Test your application on multiple devices
● Test the key app features in realistic scenarios
● Don’t rely completely on emulators
● Keep an eye on proper functioning of updates including OS versions
● Test for all supported screen sizes and touch interfaces to validate seamless user
experience
Things to be tested in mobile application
1. Installation
2. Uninstallation
3. Application Logo
4. Splash
5. Low memory
6. Visual Feedback
7. Exit application
8. Start/restart application
API Testing
- A type of software testing that performs verification directly at the API level
- API testing puts much more emphasis on the testing of business logic, data responses
and security, and performance bottlenecks.
- In API testing, instead of using standard user inputs (keyboard) and outputs, you use software to send calls to the API, get output, and note down the system's response.
- API tests are very different from GUI Tests and won’t concentrate on the look and
feel of an application
Benefits of API Testing
1) Earlier testing
2) Language-independent
3) GUI-independent
4) Improved test coverage
What you need before carrying out API testing
Status Code: Description
400 Bad Request: The request could not be understood by the server due to incorrect syntax, invalid request message parameters, etc.
401 Unauthorized: Indicates that the request requires user authentication information.
403 Forbidden: The client does not have access rights to the content, but their identity is known to the server.
404 Not Found: The server cannot find the requested resource.
408 Request Timeout: Indicates that the server did not receive a complete request from the client within the server's allotted timeout period.
409 Conflict: The request could not be completed due to a conflict with the current state of the resource.
415 Unsupported Media Type: The media type in the Content-Type of the request is not supported by the server.
500 Internal Server Error: The server encountered an unexpected condition that prevented it from fulfilling the request.
502 Bad Gateway: The server got an invalid response while working as a gateway to get the response needed to handle the request.
503 Service Unavailable: The server is not ready to handle the request.
504 Gateway Timeout: The server is acting as a gateway and cannot get a response in time for a request.
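A minimal API test sketch using Python's requests library, checking status codes from the table above plus the business data in the response body; the endpoint is hypothetical.

```python
# API test sketch using the requests library; the endpoint is hypothetical.
import requests

response = requests.get("https://api.example.com/users/1", timeout=10)

# Verify the status code against the table above
assert response.status_code == 200

# Verify the business data in the response body, not the look and feel
body = response.json()
assert body.get("id") == 1

# A request for a missing resource should map to 404 Not Found
missing = requests.get("https://api.example.com/users/999999", timeout=10)
assert missing.status_code == 404
```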
Types of Bugs that API Testing Detects

Performance Testing
- The process of determining the speed, responsiveness, and stability of a computer, network, software program, or device under a workload.
device under a workload.
- Performance testing measures the quality attributes of the system, such as scalability, reliability and resource usage.
- Speed – It identifies whether the response of the application is fast.
- Scalability – It determines the maximum user load.
- Stability – It checks if the application is stable under varying loads.
Goals of Performance Testing
How
- Typically done using performance testing tools
- These tools create virtual users to carry out different business transactions
- Multiple load generators are used to generate large user load
- Tools provide different performance metrics.
Who
- Testers play a key role in performance testing
- Collaboration is essential between different parties
Types of Performance Testing
1) Load Testing
- It is conducted to understand the application behavior under a specific expected user load.
- It is used to check application performance under peak load conditions.
- The main goal is to validate that a system can handle the expected load with acceptable
performance.
Common issues
- Slower response time
- Increased error rate after certain load
- Increased resource utilization
- One or more application components failing or misbehaving
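A toy load-test sketch under stated assumptions (hypothetical URL and thresholds); real load tests use dedicated tools, but this shows the idea of virtual users and response-time metrics.

```python
# Load-test sketch: fire a batch of concurrent requests, record response times.
# The URL and user count are hypothetical; real tests use dedicated tools.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL, VIRTUAL_USERS = "https://example.com/", 20

def one_request(_):
    start = time.perf_counter()
    status = requests.get(URL, timeout=30).status_code
    return time.perf_counter() - start, status

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = list(pool.map(one_request, range(VIRTUAL_USERS)))

durations = sorted(d for d, _ in results)
errors = sum(1 for _, s in results if s >= 500)
print(f"p95 response time: {durations[int(len(durations) * 0.95) - 1]:.2f}s")
print(f"error rate: {errors / len(results):.0%}")
```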
2) Soak Test
- It is done to determine if the system can sustain the continuous expected load for long
duration.
- It is used to check the application performance under average load condition.
- It is also known as Endurance Testing or Longevity Testing.
Common issues
- Memory leaks
- Application crash
- Slower response time
- Increased DB resource utilization
3) Stress Test
- It is done to determine the system's robustness when it is put under extreme load.
- It is used to check the application performance under extreme load (load beyond regular) conditions.
- The applied load can be 150% to 500% of the peak load in a stress test.
Common issues
- Application crash
- Increased error rate after certain load
- Slower response time
- Increased resource utilization
- One or more application components failing or misbehaving
- Any specific performance issues
Performance Testing life Cycle