
Quality Assurance

What is Quality?

- Can be stated as “Fit for Purpose or use”

What is Assurance?

- A positive declaration about a product or service


What is Quality Assurance?
- A procedure to ensure the quality of a software product
- A set of activities that defines the procedures and standards for developing the product
- Focuses on process standards, project audits, and procedures for development

What is Quality Control?


- Can be defined as "part of quality management focused on fulfilling quality requirements."
- Operational techniques and activities used to fulfill quality requirements
QA vs QC
Quality Assurance Processes
Importance of Quality Assurance
1) Saves you money and effort
2) Prevents corporate emergencies
3) Enhances user experience
4) Gives the ability to foresee possible issues
5) Improves system performance
6) Ensures consistent results
Roles and Responsibilities of QA

● To gain confidence in, and give information about, the level of quality.
● Ensuring that the end result meets the user and business requirements.
● To gain the customers' confidence by offering them a quality product.
● Analysis of requirements.
● Writing and executing test cases to detect issues.
● Creating detailed reports and listing improvements.
Product vs Project

| Project | Product |
| --- | --- |
| The main goal of a project is to create a new product that has not been made before. | The main goal of a product is to complete the work successfully (solve a specific problem). |
| A project is undertaken to build new software. | A product is the final output of a project. |
| Fixed requirements | Evolving needs |
| One-off delivery | Continual improvements |
| Beginning and end date | Permanent |


Verification vs Validation

| Verification | Validation |
| --- | --- |
| Includes checking documents, design, code, and programs. | Includes testing and validating the actual product. |
| Checks "Are we creating the product correctly?" | Checks "Are we developing the proper product?" |
| Accomplished without running the program. | Done with the software's execution. |
| Checks whether the software conforms to specifications. | Checks whether the software meets the requirements and expectations of the customer. |
| Can find bugs in the early stage of development. | Can only find the bugs that the verification process could not. |
| Comes before validation. | Comes after verification. |


Roles and Responsibilities

1) Business Analysts (BA)


- Creating a detailed business analysis, outlining problems, opportunities and solutions for a
business
- Planning and monitoring
- Defining business requirements and reporting them
2) Developers
- Developing a product based on the requirements
- Ensuring the developed product meets customer and user expectations
3) Project Manager

- Activity and resource planning


- Organizing and motivating a project team
- Budgeting

4) QA

- Analyze and clarify requirements with the customer or business analyst


- Plan the process of testing
- Write test cases (test scripts)
- Conduct functional testing
- Identify problem areas, add them to a tracking system
Software Testing

● A method to check whether the actual software product matches expected requirements
● Ensuring that the software product is defect-free
● Involves execution of software/system components
● The purpose of software testing is to identify errors, gaps, or missing requirements in contrast to the actual requirements.
Benefits of Software Testing

● Cost-Effective
● Security
● Product quality
● Customer Satisfaction
Software Testing Principles

● Testing shows the presence of defects, not their absence


● Exhaustive testing is impossible
● Early testing saves time and money
● Defects cluster together
● Beware of the pesticide paradox
● Testing is context dependent
● Absence-of-errors is a fallacy (mistaken belief)
Error, Defects and Failures

● Error-> Mistake which may occur due to different reasons, such as: Time pressure,
Inexperienced or insufficiently skilled project participants, Miscommunication between
project participants, complexity of the code design, unfamiliar technologies, etc.
● Defects -> also called a fault or bug -> a deviation in the software program from the end user's or original business requirements. Arises due to a coding fault.
● Failures -> caused by defects in the code or by other environmental factors; a reported failure can also be a false positive.
Test Activities

• Test planning

• Test monitoring and control

• Test analysis

• Test design

• Test implementation

• Test execution

• Test completion
Software Development Life Cycle
SDLC Methodologies

1) Waterfall
2) Iterative
3) V-shape
4) Agile
Waterfall Methodology
Iterative Methodology
V-Shape Methodology
Agile Methodology
Agile Framework

Scrum
- Planning meetings
- Commitment Meetings
- Daily Standup Meeting
- Demo Meeting
- Retrospective Meeting
Kanban
- Continuous flow
- Does not require roles
- Focuses on cycle time
Software Testing Life Cycle
SDLC vs STLC
Types of Testing

1) Manual Testing
a) Testing software manually without using any automation tools or scripts.
b) There are different stages for manual testing such as unit testing, integration testing, system
testing, and user acceptance testing.
c) Testers use test plans, test cases, or test scenarios to test software to ensure the
completeness of testing.
d) Manual testing also includes exploratory testing, as testers explore the software to identify
errors in it.
2) Automation Testing
a) The tester writes scripts and uses other software to test the product.
b) Involves the automation of a manual process.
c) It increases the test coverage, improves accuracy, and saves time and money when
compared to manual testing.
White Box Testing

- The purpose is to emphasize the flow of inputs and outputs through the software and to enhance the security of an application.

- It is a structural test of the software, mostly done by software developers.

- Knowledge of programming is mandatory.

- It is the logic testing of the software.

- It is used for verification. The focus is on internal mechanisms, i.e., how the output is achieved.
Black Box Testing

- The main objective is to verify the business needs and the customer's requirements.

- The process of checking the functionality of an application as per the customer's requirements.

- The source code is not visible in this testing.


1) Functional Testing

- Also known as component testing

- Checks components against requirement specifications

- Emphasis on application requirements rather than the actual code


Unit Testing

- The first level of functional testing for any software

- Done by developers during the application development phase.

- It focuses on the smallest unit of software design.

- Different automation tools can be used to carry out the testing

- Examples:
- Checking that a loop, method, or function in a program works correctly
- Misunderstood or incorrect arithmetic precedence
- Incorrect initialization
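As a minimal sketch of what such a test looks like (the `calculate_discount` function and the pytest runner are assumed here purely for illustration), a unit test exercises one function in isolation:

```python
# test_discount.py -- a minimal unit-test sketch using pytest.
# `calculate_discount` is a hypothetical function under test.
import pytest

def calculate_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

def test_discount_applied():
    # The smallest unit (one function) is exercised with a known input.
    assert calculate_discount(200, 10) == 180

def test_invalid_percent_rejected():
    # Bad input should raise an error, not silently mis-compute.
    with pytest.raises(ValueError):
        calculate_discount(200, 150)
```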
Integration testing

- Tests the data flow between dependent modules, or the interfaces between two features

- The main purpose is to test the accuracy of communication between modules

- Testing in which a group of components is combined to produce output.

- Aims to find defects in interfaces, communication, and data flow among modules

Types:

1) Incremental Testing

2) Non-Incremental Testing
1) Incremental Testing

- Modules are added incrementally, in order, and the data flow between them is tested

- The selected modules must be logically related

- If these modules are working fine, we add one more module and test again, continuing the same process to get better results.

Example:

Suppose we have the Daraz application, and the flow of the application is:

Daraz-> Login-> Home-> Search-> Add Cart-> Payment-> Logout

Types:

a) Top-down Incremental Integration Testing

b) Bottom-up Incremental Integration Testing


a) Top-down Incremental Integration Testing

- Higher-level modules are tested together with lower-level modules until all the modules have been tested successfully.

- Major design flaws can be detected and fixed early, because critical modules are tested first.

- Modules are added step by step, ensuring that each new module is a child of the earlier ones.

Example: the CEO sends the requirement to the manager, who sends it further to the team lead, who sends it on to the test engineers for testing. Here the CEO is the parent; a stub-based sketch follows below.
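A hedged sketch of the top-down idea, with a hypothetical `Checkout` (parent) module and a payment (child) module; the child is replaced by a stub until the real module is integrated:

```python
# Top-down integration sketch: test a parent module while a child
# module (payment) is stubbed out. All module names are hypothetical.

class PaymentStub:
    """Stands in for the real payment module until it is integrated."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class Checkout:
    def __init__(self, payment):
        self.payment = payment  # child module injected: real or stub

    def place_order(self, cart_total):
        result = self.payment.charge(cart_total)
        return result["status"] == "approved"

def test_checkout_with_payment_stub():
    # Data flow from parent (Checkout) to child (payment) is verified
    # even though the real payment module is not yet available.
    assert Checkout(PaymentStub()).place_order(49.99) is True
```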

b) Bottom-up Incremental Integration Testing

- Ensures that each module we add is the parent of the earlier ones
2) Non-Incremental Testing

- Also known as Big Bang Method

- We use this method when the data flow is very complex and it is difficult to tell which module is a parent and which is a child

- Testing is done by integrating all modules at once.

- It is convenient for small software systems; if used for large software systems, identifying defects is difficult.

Example: Gmail's inbox where there is no parent and child concept


System Testing
- Also known as end-to-end testing
- The test environment is parallel to the production environment
- Test if the end feature works according to the business requirement
- Testing the fully integrated applications including external peripherals in order to
check how components interact with one another and with the system as a whole.
- Verify thorough testing of every input in the application to check for desired
outputs.
- Testing of the user’s experience with the application.
Smoke Testing

- Smoke testing is carried out in the initial stages of the software development life cycle (SDLC).
- The objective is to verify "stability" of the application to continue with
further testing.
- They are a subset of regression tests.
- It ensures that all the core functionalities of the program are working
seamlessly and cohesively.
Sanity Testing

- Sanity tests also ensure that any changes made do not impact other
functionalities of the software build.
- In QA, sanity testing is part of regression testing.
- Limited functionalities are covered
- Carried out on relatively stable builds
Regression Testing

- They are performed to check the detailed functionality of the application.
- The objective is to verify every existing feature of the application.
- Regression test cases should be well documented.
- They are the superset of smoke and sanity testing.
Acceptance Testing

- Performed to determine whether or not the software system has met the
requirement specifications
- Production-like testing environment

Types:

a) User Acceptance Testing


b) Business Acceptance Testing
c) Alpha Testing
d) Beta Testing
User Acceptance Testing

- Also known as end-user testing
- The main purpose is to validate the end-to-end business flow.
- Carried out in the final stage of testing
- Carried out by the intended users of the system or software
Business Acceptance Testing

- Determines whether the application meets the business requirements

- Carried out by the product owner or business analysts, with support from testers or other team members
Alpha Testing

- To evaluate the quality of the product
- Does the product work?
- Features are frozen; there is no scope for major enhancements
- Gives a better view of product usage and reliability
- Analyzes possible risks during and after the launch of the product
Beta Testing

- The beta test is conducted at one or more customer sites by the end-user of
the software.
- This version is released to a limited audience to check the accessibility,
usability, and functionality, and more.
- Performed after alpha testing and before the release of the final product
- Beta testing helps to get direct feedback from users.
Security Testing

- It is a type of testing performed by a special team to check whether any hacking method can penetrate the system.
- It is done to check how secure the software, application, or website is from internal and/or external threats.
- This testing includes how secure the software is from malicious programs and viruses, and how secure and strong the authorization and authentication processes are.
- Monkey Testing
- Monkey testing is carried out by a tester who enters random inputs and values, the way a monkey might use the application, without any knowledge or understanding of the application.
- The objective is to check whether an application or system crashes when random input values/data are provided.
- Monkey testing is performed randomly, no test cases are scripted, and it is not necessary to be aware of the full functionality of the system.
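A minimal sketch of the idea in Python, with a hypothetical `submit_form` entry point: random junk input is thrown at it, and the only check is that nothing crashes:

```python
# Monkey-testing sketch: feed random input and only verify that the
# application does not crash. `submit_form` is a hypothetical target.
import random
import string

def submit_form(text):
    # Stand-in for the real application entry point.
    return text.strip().lower()

def test_random_input_does_not_crash():
    for _ in range(1000):
        junk = "".join(random.choices(string.printable,
                                      k=random.randint(0, 200)))
        try:
            submit_form(junk)
        except Exception as exc:
            raise AssertionError(f"crashed on random input: {junk!r}") from exc
```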
- Exploratory testing
- It is informal testing performed by the testing team.
- The objective of this testing is to explore the application and look for defects
that exist in the application.
- Testers use the knowledge of the business domain to test the application.
- Ad-hoc Testing
- Testing is performed on an ad-hoc basis, i.e., with no reference to the test case
and also without any plan or documentation in place for this type of testing.
- The objective of this testing is to find the defects and break the application by
executing any flow of the application or any random functionality.
- Ad-hoc testing is an informal way of finding defects and can be performed by
anyone in the project.
- Negative Testing
- The mindset of the tester is to “Break the System/Application” and it is
achieved through Negative Testing.
- It is performed using incorrect data, invalid data, or input.
- It validates that the system throws an error for invalid input and behaves as expected.
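A small sketch of the mindset, assuming a hypothetical `set_age` validator: each invalid input should raise an error rather than be accepted:

```python
# Negative-testing sketch: deliberately send invalid data and assert
# that the system rejects it with an error. `set_age` is hypothetical.
import pytest

def set_age(age):
    if not isinstance(age, int) or not 0 <= age <= 130:
        raise ValueError("age must be an integer between 0 and 130")
    return age

@pytest.mark.parametrize("bad_input", [-1, 500, "abc", None, 3.5])
def test_invalid_age_is_rejected(bad_input):
    # The system should throw an error for invalid input, not accept it.
    with pytest.raises(ValueError):
        set_age(bad_input)
```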
- Globalization Testing
- Testing method that checks proper functionality of the product with
any of the culture/locale settings using every type of international
input possible.
- Localization Testing
- Part of the software testing process focused on adapting a globalized application to a particular culture/locale
Non-Functional Testing

- It provides detailed information on software product performance and the technologies used.

- It helps minimize the production risk and related costs of the software.

- It is a combination of performance, load, stress, usability, and compatibility testing.

Types of Non-functional Testing

a) Performance Testing

b) Usability Testing

c) Compatibility Testing
Performance Testing
- The test engineer will test the working of an application by applying some
load.
- Focuses on several aspects, such as Response time, Load, scalability, and
Stability of the software or an application.

Types:

1) Load Testing
2) Stress Testing
3) Scalability Testing
Load Testing

- Load testing helps detect the maximum operating capacity of an application and any blockages or bottlenecks.
- It is mainly used to test the performance of client/server and web-based applications.

Importance:

- It builds confidence in the system's consistency and performance.
- Load testing is necessary whenever code changes occur in the application that may affect its performance.
- It is important because it reproduces real user scenarios.

A test scenario is a combination of scripts and virtual users executed during a testing session.
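A crude illustration of the idea using Python threads (the URL is a hypothetical placeholder, and real load tests typically use dedicated tools such as JMeter or Locust rather than a hand-rolled script):

```python
# Crude load-testing sketch: fire concurrent requests at an endpoint
# and record response times. The URL is a hypothetical placeholder.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://example.com/login"   # hypothetical endpoint under test
VIRTUAL_USERS = 50

def one_user(_):
    start = time.time()
    resp = requests.get(URL, timeout=10)
    return resp.status_code, time.time() - start

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = list(pool.map(one_user, range(VIRTUAL_USERS)))

timings = [t for _, t in results]
print(f"max response time: {max(timings):.2f}s, "
      f"errors: {sum(code >= 400 for code, _ in results)}")
```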

Load Testing Process:


Stress Testing

- It is also known as fatigue testing or torture testing.
- Stress testing can be used to discover hardware issues and data corruption issues.
- It is used to analyze how the system works under rare circumstances, and how it behaves after a failure.

Example: Education Board's result website

- Stress testing is important to perform on the education board's result website. On the day results are published, many students, users, and applicants log in at the same time to check their grades.
Scalability Testing

- Ensures that the system can handle projected increases in user traffic, data volume, transaction frequency, etc.
- It tests the response time under heavy request load.
- In scalability testing, load is varied slowly.
Usability Testing

- Also known as user experience testing
- Recommended during the initial design phase
- Mainly focuses on the user's ease of use and the flexibility of the application
- It helps improve end-user satisfaction
- It makes your system highly effective and efficient
Compatibility Testing

- Checks whether your software is capable of running on different hardware, operating systems, applications, network environments, or mobile devices.
- Compatibility tests should always be performed in a real environment instead of a virtual environment.

How to perform compatibility testing?

● Test the application in the same browser across its different versions.
● Test the application across different browsers.
Grey Box Testing
- It is a collaboration of black box and white box testing.
- It includes access to internal coding for designing test cases.
- It is performed by a person who knows coding as well as testing.
- If a single person performs both white-box and black-box testing, it is considered grey-box testing
Automation Testing
- It uses specific tools to execute manually designed test cases without any human intervention.
- It is the best way to enhance the efficiency, productivity, and coverage of software testing.
- It is used to re-run, quickly and repeatedly, the test scenarios that were executed manually.
- It saves time: less effort goes into repetitive manual execution and more into maintaining the test scripts, while enhancing complete test coverage.

Why do we need to perform automation testing?

● To test the application, as it delivers a better application with less effort and time.
● Some organizations still perform only manual testing because they are not fully aware of the automation testing process.
● Now, however, many are aware of automated testing and execute test automation procedures in their application development process.
● Implementing automation testing requires a considerable investment of resources and money.
Automation Testing Process
What are the top 3 things you should consider before selecting the best software automation testing tools?

1. List out your current requirements

2. Think about your project budget

3. Select the appropriate automation testing team, who can use any kind of tool
Some of the Automation tools:

● Selenium
● Appium
● Katalon Studio
● Cucumber
● SoapUI
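As a hedged sketch of what a tool-driven script looks like, here is a minimal Selenium example; the URL and element IDs are hypothetical placeholders, not a real site:

```python
# Minimal Selenium sketch of an automated login check.
# The URL and element locators are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()          # requires a local ChromeDriver
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()
    assert "Dashboard" in driver.title   # expected result of the script
finally:
    driver.quit()
```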
Test Platforms

1. DEV — Development [software developers]
2. SIT — System Integration Test [software developers and QA engineers]
3. UAT — User Acceptance Test [client]
4. PROD — Production [public users]
Techniques to Identify Defects :

1. Static Techniques :
- Software products are tested or examined manually, or with the help of available automation tools, but the software is not executed.
Different types of defect causes can be found by this technique, such as:
○ Missing requirements
○ Design defects
○ Deviation from standards
○ Inconsistent interface specification
○ Non-maintainable code
○ Insufficient maintainability, etc.
2. Dynamic Techniques :
- Software is tested by execution of program or system. Different types of defects can be found by this technique such as :
○ Functional defects
i. These defects arise when functionality of system or software does not work as per Software Requirement
Specification (SRS).
ii. Software product might not work properly and stop working.
iii. These defects are simply related to working of system.
○ Non-functional defects
i. A defect in software products largely affects its non-functional aspects.
ii. These defects can affect performance, usability, etc.
3. Operational Techniques :

i. A technique that produces a deliverable, i.e., the product.
ii. Users, customers, or control personnel identify defects by inspection, checking, reviewing, etc.
iii. The defect is found as a result of a failure.
Bug Classification
Severity

● Critical – These defects reflect crucial functionality deviations in the software application; until they are fixed, the QA tester cannot validate the application. For example, the user being unable to log in and access the application even after entering the correct ID and password is a critical defect. Sev 1: crash, hang, data loss
● Major – These defects are found when a crucial module in the application is malfunctioning, but the rest of the system works fine. The team needs to fix these issues, but QA can still validate the rest of the application whether or not the major defect is fixed. For example, not being able to add more than one item to the cart on an e-commerce website is a significant defect, but not critical, as the user can still shop for one item. Sev 2: blocks feature, no workaround
● Medium – These defects are related to issues on a single screen or in a single function, but they do not affect the functioning of the system as a whole and do not block any functionality. Example: an "Items not added" message is displayed even though the item was successfully added to the cart. Sev 3: blocks feature, workaround available
● Low – These defects do not impact the software application's functionality at all. They can relate to UI inconsistency, cosmetic defects, or suggestions to improve the user's UI experience. Example: a misalignment or spelling mistake on the "Terms/Conditions" page of the website is a trivial defect. Sev 4: trivial (e.g. cosmetic)
Priority

● Urgent – Immediate resolution of this category of defect is required, as these defects can severely affect the application and cause costly repairs if left untreated. For example, a misspelled company name might not be a high or critical severity defect, but it is an immediate priority since it affects the business. Pri 1: Fix immediately
● High – Immediate resolution of these defects is essential, as they render the affected application modules useless until they are fixed. For example, not being able to add products to a shopping cart belongs to the high priority category. Pri 2: Fix before next release outside team
● Medium – These defects are less essential and can be scheduled to be fixed in later releases. For example, an error message of "login id/password invalid" prompted even after a successful login is a medium priority error. Pri 3: Fix before ship
● Low – These are addressed only once the more critical defects are fixed; the tester may or may not fix them. For example, the contact tab is not located on the home page and is hidden inside another menu in the navigation bar. Pri 4: Fix if nothing better to do
Defect Life Cycle

- The specific set of states that a defect or bug goes through during its entire life.
- Its purpose is to easily coordinate and communicate the current status of a defect as it changes between assignees, making the defect-fixing process systematic and efficient.
Bug Report

- A detailed document about bugs found in the software application
- A software bug report contains specific information about what is wrong and what needs to be fixed in the software or on a website.
- It lets the developers understand what is wrong and how to fix it.
Elements of an Effective Bug Report

● What is the problem?


● How can the developer reproduce the problem (to see it for themselves)?
● Where in the software (which webpage or feature) has the problem appeared?
● What is the environment (browser, device, OS) in which the problem has occurred?
Bug reporting best practices

1) Individual bug reports


2) Where is the bug?
3) What did you expect to occur?
4) Gather visual proof
5) Add technical information
6) Record error messages and codes
Components included in Bug Report

1. Title/Bug ID
2. Environment
a. Device Type
b. OS
c. Software version
d. Rate of Reproduction
3. Description
4. Steps to reproduce a Bug
5. Expected Result
6. Actual Result
7. Visual Proof (screenshots, videos, text) of Bug
8. Severity/Priority

Others:
1) Assign to
2) Reporter
3) Module
4) Project Name
Test techniques

- Test techniques are used to design test data.

- Software test techniques help you design better test cases.
- They help reduce the number of test cases to be executed while increasing test coverage.
Some Types:

● Equivalence Partitioning
● Boundary Value Analysis
● Decision Table Testing
● State Transition Testing
Boundary Value Analysis
- Boundary value analysis is based on testing at the boundaries between partitions.
- A boundary refers to values near the limit where the behavior of the system changes.
- It includes the maximum, minimum, inside, and outside boundaries.
- In boundary value analysis, both valid and invalid inputs are tested to verify the issues.

Example: if the valid input range is 1 to 1000, boundary value analysis selects test values at and around the limits, such as 0, 1, 2, 999, 1000, and 1001.
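A sketch of how these boundary values might be checked, assuming a hypothetical `accept_quantity` field that accepts values from 1 to 1000:

```python
# Boundary-value-analysis sketch for a field that accepts 1..1000.
# The accepted range is an assumed example, not from a real system.
import pytest

def accept_quantity(n):
    if not 1 <= n <= 1000:
        raise ValueError("quantity out of range")
    return n

# Test exactly at and just inside each boundary...
@pytest.mark.parametrize("valid", [1, 2, 999, 1000])
def test_boundary_values_accepted(valid):
    assert accept_quantity(valid) == valid

# ...and just outside each boundary.
@pytest.mark.parametrize("invalid", [0, 1001])
def test_boundary_values_rejected(invalid):
    with pytest.raises(ValueError):
        accept_quantity(invalid)
```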
Equivalence Partitioning

- The input domain data is divided into different equivalence data classes.
- This method is typically used to reduce the total number of test cases to a finite set of
testable test cases, still covering maximum requirements.
- One test value is picked from each class while testing.
- Equivalence Partitioning uses the fewest test cases to cover the maximum requirements.

Example: if valid input lies between 18 and 60, the two invalid classes are:

a) Less than or equal to 17.

b) Greater than or equal to 61.

A valid class is anything between 18 and 60.
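A sketch of one-value-per-partition testing for this age example, with a hypothetical `is_eligible` function standing in for the real validation:

```python
# Equivalence-partitioning sketch for an age field that accepts 18..60.
# One representative value is picked from each class.
import pytest

def is_eligible(age):
    return 18 <= age <= 60

@pytest.mark.parametrize("age, expected", [
    (10, False),   # representative of the invalid class <= 17
    (35, True),    # representative of the valid class 18..60
    (70, False),   # representative of the invalid class >= 61
])
def test_one_value_per_partition(age, expected):
    assert is_eligible(age) is expected
```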
Decision Table Testing
- Type of software testing that examines how a system responds to various input
combinations.
- It is also called Cause-Effect table.
- Here, we deal with combination of inputs.
- To identify the test cases with decision table, we consider condition and actions.
- We take conditions as inputs and actions as output.
Example 1 − A decision table for a login screen

| Conditions | Rule 1 | Rule 2 | Rule 3 | Rule 4 |
| --- | --- | --- | --- | --- |
| Username (T/F) | F | T | F | T |
| Password (T/F) | F | F | T | T |
| Output (E/H) | Error | Error | Error | Homepage |
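A sketch of how the table above maps onto tests, one case per rule; the `login` function here is a hypothetical stand-in for the real screen logic:

```python
# Decision-table sketch for the login screen above: conditions are the
# inputs (username/password valid?), the action is the expected output.
import pytest

def login(username_ok, password_ok):
    # Hypothetical stand-in for the real login behavior.
    return "Homepage" if (username_ok and password_ok) else "Error"

# One test case per rule in the decision table.
@pytest.mark.parametrize("username_ok, password_ok, expected", [
    (False, False, "Error"),     # Rule 1
    (True,  False, "Error"),     # Rule 2
    (False, True,  "Error"),     # Rule 3
    (True,  True,  "Homepage"),  # Rule 4
])
def test_login_decision_table(username_ok, password_ok, expected):
    assert login(username_ok, password_ok) == expected
```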


State Transition

- Allows tester to test the behaviour of the AUT (Application Under Test)
- The tester can perform this action by entering various input conditions in a sequence.
- In state transitioning technique, testing team provides positive as well as negative input test values
for evaluating the system behaviour.
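A small sketch of the technique, assuming a hypothetical account that locks after three consecutive failed login attempts (states OPEN and LOCKED):

```python
# State-transition sketch: a hypothetical account that locks after
# three consecutive failed login attempts (states: OPEN -> LOCKED).

class Account:
    def __init__(self):
        self.state, self.failures = "OPEN", 0

    def attempt_login(self, correct):
        if self.state == "LOCKED":
            return "LOCKED"
        if correct:
            self.failures = 0
            return "LOGGED_IN"
        self.failures += 1
        if self.failures == 3:
            self.state = "LOCKED"
        return self.state

def test_three_failures_lock_the_account():
    acc = Account()
    # Drive the system through a sequence of inputs, checking each state.
    assert acc.attempt_login(False) == "OPEN"
    assert acc.attempt_login(False) == "OPEN"
    assert acc.attempt_login(False) == "LOCKED"
    assert acc.attempt_login(True) == "LOCKED"  # locked even for valid input
```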
Error Guessing

- An experience-based technique in which testers use their knowledge of common mistakes and past defects to guess where errors are likely to occur, and design tests for those areas.
Requirement Analysis
- It is a process used to determine the needs and expectations of a new product.
- It involves frequent communication with the stakeholders and end-users of the product to define
expectations, resolve conflicts, and document all the key requirements.
- Software requirements can be broadly classified into two groups:
● Functional or problem domain requirements
a. In a problem domain, the focus is on the functional or business requirements.
b. It is recommended that you create a domain model of your functional requirements before
you start thinking of the solution domain.
c. Functional requirements describe the functionalities, capabilities, and activities a system must be able to perform; they specify the overall behavior of the system to be developed.
● Non-functional or solution domain requirements
a. In a solution domain, we focus on how to deliver the solution for functional or business
requirements
Requirement Analysis Process
1) Identify Key Stakeholders and End-Users
2) Capture Requirements
a) Hold One-on-One Interviews
b) Use Focus Groups
c) Utilize Use Cases
d) Build Prototypes
3) Categorize Requirements
4) Interpret and Record Requirements
a) Define Requirements Precisely
b) Prioritize Requirements
c) Carry Out an Impact Analysis
d) Resolve Conflicts
e) Analyze Feasibility
5) Sign off
Roles of QA in requirement analysis

● Analyze each and every requirement from specification document, use cases.
● List down high level scenarios.
● Clarify queries and functionality from stakeholders.
● Offer suggestions for implementing the features and raise any logical issues.
● Raise defects or clarifications against the specification document.
● Track the defects or clarifications raised against the specification document.
● Create high level Test Scenarios.
● Create Traceability Matrix.
Outcome of the Requirement Analysis Phase

● Requirement Understanding Document.


● High level scenarios.
● High level test strategy and testing applicability.
Test Plan

- A detailed document that catalogs the test strategy, objectives, schedule, estimations,
deadlines, and the resources required for completing that particular project.
- The test plan is a document that acts as a point of reference; testing within the QA team is carried out based on it alone.
- It is also a document that we share with the Business Analysts, Project Managers, Dev team
and the other teams. This helps to enhance the level of transparency of the QA team’s work
to the external teams.
- It is documented by the QA manager/QA lead based on the inputs from the QA team
members.
- This plan is not static and is updated on an on-demand basis.
- The more detailed and comprehensive the plan is, the more successful will be the testing
activity.
Importance of Test Plan

• It guides our thinking; forces us to confront the challenges that await us and focus our thinking
on important topics.

• Helps people outside the test team, such as developers, business managers, and customers, understand the details of testing.

• It helps us to manage changes.

• Important aspects like test estimation, test scopes, test strategy are documented in the Test
plan, so it can be reviewed by the Management team and reused for other projects.
Values of Test Plan

A test plan should at least clarify the following topics:

• Test objects and objectives: What are you going to test (and what not)?

• Test approach: How are you going to test, and why?

• Resources: Whom do you need for which activity?

• Planning: When will you carry out which test activities, and whom do you need?
Pros and Cons of Test Plan
| Pros | Cons |
| --- | --- |
| Road map to the testing process | Needs time to create a test plan |
| Means of communication | Change management in the test plan |
| Requirements for the test environment | |
| Better functional coverage | |
| Prevents unnecessary testing | |
| Accurate effort estimation | |
| Risk management: risks are listed and analyzed | |


Steps to Write a Test Plan
Components of Test Plan
Items in a Test Plan Template => What do they contain?

Scope => Test scenarios/test objectives that will be validated.

Out of scope => Enhanced clarity on what we are not going to cover.

Assumptions => All the conditions that need to hold true for us to be able to proceed successfully.

Schedules => Test scenario preparation; test documentation (test cases/test data/setting up the environment); test execution; test cycles (how many, with start and end dates for each).

Roles and Responsibilities => Team members are listed; who does what; module owners are listed with their contact info.

Deliverables => What documents (test artifacts) are going to be produced, at what time frames? What can be expected from each document?

Environment => What kind of environment requirements exist? Who is going to be in charge? What to do in case of problems?

Tools => For example, JIRA for bug tracking.

Defect Management => Who are we going to report the defects to? How are we going to report? What is expected?

Risks and Risk Management => Risks are listed; risks are analyzed (likelihood and impact are documented); risk mitigation plans are drawn.

Exit criteria => When to stop testing.

Sections in test plan
1) Objectives
2) Scope
3) Testing Methodologies (Types Of Testing)
4) Approach
5) Assumptions
6) Risk
7) Contingency Plan Or Mitigation Plan Or Back-Up Plan
8) Roles And Responsibilities
9) Schedules
10) Defect Tracking
11) Test Environment (Hardware And Software)
12) Entry And Exit Criteria
13) Test Automation (Based On The Project Needs)
14) Deliverables
Template of test plan
Table of Contents

1. Introduction
   1.1 Purpose
   1.2 Project Overview
   1.3 Audience
2. Test Strategy
   2.1 Test Objectives
   2.2 Test Assumptions
3. Develop Test Strategy
   3.1 Define Scope of Testing
   3.2 Identify Testing Type
   3.3 Document Risks and Issues
4. Test Deliverables
5. Environmental Needs
6. Dependencies
7. Responsibilities
8. Staffing and Training Needs
9. Schedule
10. Approvals

References

1) Who writes Test Plan ?
Test Lead – 60%
Test Manager – 20%
Test Engineer – 20%
Thus, we can see from above – in 60% of projects, Test plan is written by Test Lead.
2) Who reviews Test Plan ?
Test Engineer
Test Lead
Test Manager
Customer
Development team
-> Test Engineer looks at the Test plan from his module point of view.
-> Test Manager looks at the Test plan from the customer point of view.
3) Who approves Test Plan ?
Test Manager
Customer
Test Scenario

- A statement describing the functionality of the application to be tested.
- A test scenario is 'what functionality is to be tested'.
- A test scenario document covers the end-to-end functionality of a software application in one-line statements.
Importance of Test Scenarios

● To assess the actual situation on the project.


● To see the full scale of work and prioritize it correctly.
● To provide sufficient test coverage before product release.
● They help determine the most important end-to-end transactions or the real use of
the software applications.
Best Practices for Writing Test Scenarios

● Should be easy to understand.


● Easily executable.
● Should be accurate.
● Traceable or mapped with the requirements.
● Should not have any ambiguity.
Example
Functional:

1) Verify that as soon as the login page opens, by default the cursor should remain on the username textbox.
2) Check if the password is in masked form when typed in the password field.
3) Check if the password can be copy-pasted or not
4) Check system behavior when valid email id and password is entered.
5) Check system behavior when invalid email id and valid password is entered.
6) Check system behavior when valid email id and invalid password is entered.
7) Check system behavior when invalid email id and invalid password is entered.

UI:

1) Check that the font type and size of the labels and the text written on the different elements should be clearly visible.
2) Verify that the size, color, and UI of the different elements are as per the specifications.

Security:

1) Verify that there is a limit on the total number of unsuccessful login attempts.
2) Verify that once logged in, clicking the back button doesn't log out the user.
Test Cases

• Ensures good test coverage

• Allows the tester to think thoroughly through different ways of validating features.

• Negative test cases are also documented, which can often be overlooked.

• They are reusable for the future, anyone can reference them and execute the test
Some Points to consider while writing test cases

- Before starting to write a test case, come up with options, select the best one, and only then start writing.
- In the expected result, use "should be" or "must be".
- Elaborate only those steps we are focusing on; do not elaborate on all the steps.
- Highlight object names.
- Do not hard-code the test case; write a generic test case.
- Organize steps properly so that execution time is reduced.
Procedures to Write Test Cases
1) System Study
2) Identify all possible test scenarios
3) Apply test design technique, using standard template
4) Review the test cases
5) Fix the review comment given by the reviewer
6) Test case approval
Efficient ways of writing test cases
1. Simple and transparent

2. Keep End User in Mind

3. Avoid repetition

4. Do not Assume

5. Ensure 100% Coverage

6. Test Cases must be identifiable.

7. Implement Testing Techniques

8. Self-cleaning

9. Repeatable and self-standing

10. Peer Review


Test cases parameters

● Test case id
● Test Scenarios
● Assumptions/ Pre-Condition
● Steps to be executed
● Test data: Variables and their values
● Expected Result
● Actual result
● Status (Pass/Fail)
● Comments
Simple format of test case

| Test case ID | Test Scenario | Pre-Condition | Test Steps | Test Data | Expected Result | Actual Result | Post Condition | Status | Remarks |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TC_01 | Verify the login of Gmail | Need a valid Gmail account to login | 1. Enter username 2. Enter password 3. Click on login button | <Valid Username>, <Valid password> | Successful login | Login successful | Gmail inbox is shown | PASS | |
| TC_02 | Verify the login of Gmail with invalid password | Need a valid Gmail account to login | 1. Enter username 2. Enter invalid password 3. Click on login button | <Valid Username>, <Invalid password> | A message "The email and password you entered don't match" should be shown | Unable to login with invalid password | | FAIL | |
| Test Scenario | Test Cases |
| --- | --- |
| A test scenario is a concept. | Test cases are the solutions to verify that concept. |
| A test scenario is a high-level functionality. | Test cases are detailed procedures to test the high-level functionality. |
| Test scenarios are derived from requirements/user stories. | Test cases are derived from test scenarios. |
| A test scenario is 'what functionality is to be tested'. | Test cases are 'how to test the functionality'. |
| A single test scenario is never repeatable. | A single test case may be used multiple times in different scenarios. |
| Brief documentation is required. | Detailed documentation is required. |
| Brainstorming sessions are required to finalize a test scenario. | Detailed technical knowledge of the software application is required. |
Test Data

- Data created or selected to satisfy the execution preconditions and input content required to execute one or more test cases.
- A crucial part of most functional tests.
- Typically, test data is created in sync with the test case it is intended for.
How test data can be generated?

● Manually
a. The test data is generally created by the testers using their own skills and judgments.
● Back-end data injection
a. This is done through SQL queries.
b. This approach can also update the existing data in the database.
c. It is speedy & efficient but should be implemented very carefully so that the existing database does not get
corrupted.
● Third-Party Tools
● Automated Test Data Generation Tools
a. high level of accuracy.
b. better speed and delivery of output with this technique.
c. helps in saving a lot of time as well as generating a large volume of accurate data.
● Mass copy of data from production to testing environment
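A minimal sketch of back-end data injection using an in-memory SQLite database; the table, columns, and rows are hypothetical examples:

```python
# Back-end data-injection sketch: seed test data directly through SQL.
# Uses an in-memory SQLite database; table and columns are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, active INTEGER)"
)

# Parameterized INSERTs keep the injection safe and repeatable.
test_users = [("alice@test.example", 1), ("bob@test.example", 0)]
conn.executemany("INSERT INTO users (email, active) VALUES (?, ?)", test_users)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0], "rows seeded")
```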
How to Prepare Data that will Ensure Maximum Test Coverage?

Design your data considering the following categories:

1) No data: Run your test cases on blank or default data. See if proper error messages are generated.

2) Valid data set: Create it to check if the application is functioning as per requirements and valid input data is properly saved in
database or files.

3) Invalid data set: Prepare invalid data set to check application behavior for negative values, alphanumeric string inputs.

4) Illegal data format: Make one data set of illegal data format. The system should not accept data in an invalid or illegal format.
Also, check proper error messages are generated.

5) Boundary Condition dataset: Dataset containing out of range data. Identify application boundary cases and prepare data set
that will cover lower as well as upper boundary conditions.

6) The dataset for performance, load and stress testing: This data set should be large in volume.
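A sketch mapping these categories onto one parametrized test, with a hypothetical `validate_phone` field validator (a 10-digit phone field is an assumed example):

```python
# Sketch mapping the data categories above onto one parametrized test.
# `validate_phone` is a hypothetical field validator for a 10-digit field.
import pytest

def validate_phone(value):
    return value.isdigit() and len(value) == 10

@pytest.mark.parametrize("value, expected, category", [
    ("",            False, "no data"),
    ("9841000000",  True,  "valid data set"),
    ("abcdefghij",  False, "invalid data set"),
    ("98-410-000",  False, "illegal data format"),
    ("984100000",   False, "boundary: one digit short"),
    ("98410000000", False, "boundary: one digit long"),
])
def test_phone_field_coverage(value, expected, category):
    assert validate_phone(value) is expected, category
```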
Test Data Qualities

1) Realistic
2) Practically valid
3) Versatile to cover scenarios
4) Exceptional data
Test Case Review

- Important process in Software Testing


- Can be done in three ways:
- Self-Review
- Peer Review
- Review by a supervisor
Test Case Review
While reviewing, the reviewer checks the following:
1) Template – checks whether the template is as decided for the project
2) Header:
a) Checks whether all the attributes are captured
b) Checks whether all the attributes in the header are filled
c) Checks whether all the attributes in the header are relevant
3) Body:
a) Check whether all possible scenarios are covered
b) Check whether the flow of the test case is good
c) Check whether test case design techniques are applied
d) The test cases should be organized so that they take less time to execute
e) Check whether the test case is simple to understand and execute
f) Check whether proper navigation steps are written
Execution of Test Case

- Test execution refers to executing test cases or a test plan on a software product, to ensure that the developed software product fulfils its pre-defined requirements and specifications.
- It is an important part of the Software Testing Life Cycle (STLC), alongside the Software Development Life Cycle (SDLC).
- The values in the "Status" and "Actual Result" columns are added after the test execution phase.
Conditions to be fulfilled before test executions:

● Designing or defining the tests must be complete


● Test tools, especially test management tools, must be ready
● Processes for tracking the test results must be working
● Every team member must understand the data to be tracked
● Criteria for logging tests and reporting defects must be published and available to all
team members
Process of Test Execution:

● Preparation: During the first stage of the process the team prepares a test strategy and
test cases, which are then used by them to test run the software product.
● Execution: After creating test cases and strategy, the team finally executes them, while
monitoring the results and comparing them with the expected results.
● Verification and Validation: During this stage of the process, the test results are verified
& validated as well as recorded and reported by the team in the form of test summary/test
execution report.
Test Execution States

● Pass: test procedure is executed and the expected result is satisfied.


● Fail: test procedure is executed and the expected result is not satisfied.
● Inconclusive: test procedure is executed and requires further analysis to have a clear
result.
● Blocked: the test procedure cannot be executed because at least one of the test case preconditions is not met.
● Deferred: test procedure is not executed yet and deferred for a future test cycle/release
for execution.
● In progress: test procedure is currently being executed.
● Not run: test procedure is not executed yet.
Test Case Coverage

- Defined as a metric in software testing that measures the amount of testing performed by a set of tests.

Example:

Suppose you are testing the “Login” functionality of a product; just do not concentrate on checking the login button
functionality and validation of the fields. There are other aspects to look for such as the UI part or if the page displayed is
user friendly or whether the application crashes while clicking on login.
What Test Coverage does?

• Finds the areas in the specified requirements that are not covered by the test scenarios and cases.

• Identifies test cases that are not meaningful to execute, so they can be omitted.

• Helps to create additional test cases to increase coverage.

• Provides a quantitative measure of test coverage, which is an indirect method of quality checking.

• Helps with impact analysis and change tracking.


Different measures/techniques for efficient Test Coverage

Following are the measures of software testing:

> Requirement Coverage

> Product Coverage

> Risk Coverage


Benefits of Test Coverage

• Defects can be prevented at the early stage of application life cycle.

• Impact analysis becomes easier

• Time, scope and cost can be kept under control

• Can determine all the decision points and paths used in the application, which allows to increase test
coverage

• Can help to determine the paths in application that were not tested.
Test Strategy

- A test strategy answers the "what" questions.

- A test strategy is produced by the project manager. It says what type of technique to follow and which modules to test.
- A test strategy cannot be changed.
- The test strategy is made before the test plan.
- The test strategy describes the general approach.
- It is set at the organization level and can be used by multiple projects.
- In smaller projects, the test strategy is often found as a section of the test plan.

Example: A Test Strategy includes details like “Individual modules are to be tested by the test team members”. In
this case, who tests it does not matter – so it’s generic and the change in the team member does not have to be
updated, keeping it static.
Test Script

- Test scripts are a line-by-line list of all the activities that must be performed and tested on various user journeys.
- They lay down each action to be followed, along with the intended outcomes.
- An automated test script helps the software tester test each level systematically on a wide range of devices.
How to write test scripts?

1) Record/playback
2) Keyword/data-driven scripting
a) In keyword/data-driven scripting, the tester defines the test using keywords rather than the underlying code.
b) The developers' task is to implement the test script code for the keywords and to update it as required.
3) Writing Code Using the Programming Language
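A toy sketch of the keyword-driven idea: the tester authors a keyword table, and the framework maps each keyword to code. All names and the dispatch mechanism here are invented for illustration:

```python
# Keyword-driven scripting sketch: the test is written as keywords plus
# data; the framework maps each keyword to code. All names hypothetical.

def open_page(url):         print(f"opening {url}")
def type_text(field, text): print(f"typing {text!r} into {field}")
def click(button):          print(f"clicking {button}")

KEYWORDS = {"open": open_page, "type": type_text, "click": click}

# A tester authors this table without touching the underlying code.
test_script = [
    ("open",  ["https://example.com/login"]),
    ("type",  ["username", "test_user"]),
    ("type",  ["password", "secret"]),
    ("click", ["login-button"]),
]

for keyword, args in test_script:
    KEYWORDS[keyword](*args)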
Tips for creating a Test Script
1) Clear:
a) Your test script should be clear.
b) You need to constantly verify that each step in the test script is clear, concise, and coherent. This helps to
keep the testing process smooth.
2) Simple:
a) You should create a test script that should contain just one specific action for testers to take. This makes
sure that each function is tested correctly and that testers do not miss steps in the software testing
process.
3) Well-thought-out:
a) To write the test script, you need to put yourself in the user’s place to decide which paths to test.
b) Should be creative enough to predict all the different paths that users would use while running a system or
application.
When Should You Use a Test Script?

The following are the justifications for utilizing the Test Script.

● The most reliable way to ensure that nothing is skipped and that the findings match the
desired testing strategy is to use a test script.
● It gives a lot less space for mistakes throughout the testing process if the test script is
prepared.
● Testers are sometimes given free rein over the product and are prone to overlooking key details.
● Without a script, when a function does not produce the anticipated result, the tester may simply assume it is correct.
● It is especially helpful when the user's performance is critical and specific.
Test Result

- The outcome of the whole software testing life cycle.
- Offers insight into the deliverables of a software project and is significant in representing the status of the project to stakeholders.
- An assessment of how well the testing was performed.

For example, if the test report informs that there are many defects remaining in the product,
stakeholders can delay the release until all the defects are fixed.
Test Report
Benefit of Test Report
Test Report Writing Tips

1) Details
2) Clearness
3) Standardization
4) Specification
Test Summary Report

- Also known as a test closure report
- It provides the overall test results, defects, and connected data following a test project.
- The report is created at the end of the testing cycles, so it can also include regression and retesting results.
- The test lead or test manager prepares this document at the end of testing, i.e., in the test closure phase (the last phase in the STLC/software test process).
- It can be generated at the end of each phase of the cycle separately, as well as at the end of the entire life cycle of the product.
Summary Report Components

1) Objectives of the test


2) Testing Approach
3) Testing Scope
a) Areas Tested
b) Areas not tested
4) Platform details
5) Defect report
6) Testing Metrics
7) Overall Summary
8) Approval
Mobile Testing

- A process by which application software is tested on mobile devices for functionality, usability, and compatibility.
- The main purpose of mobile application functional testing is to ensure quality, meet the specified expectations, reduce the risk of errors, and ensure customer satisfaction.
- It is a process through which applications being developed for mobile devices are tested.
- The main focus is to test the apps for functionality, usability, and stability.
Types of Mobile Testing

1) Hardware Testing
2) Software Testing

Types of Mobile Apps

1) Native apps
2) Web apps
3) Hybrid apps
Testing for different types of App
1) Native app
a) Device compatibility
b) Utilization of device features
2) For hybrid apps:
a) Interaction of the app with the device native features
b) Potential performance issues
c) Usability (look and feel) compared to native apps on the platform in question
3) For web apps:
a) Testing to determine cross-browser compatibility of the app to various common mobile
browsers
b) Utilization of OS features (e.g., date picker and opening appropriate keyboard)
c) Usability (look and feel) compared to native apps on the platform in question
Types of Mobile App testing
Some of the key mobile testing types:
● Usability testing
● Compatibility testing
● Interface testing
● Services testing
● Performance testing
● Installation tests
● Security testing
● Storage testing
● Input testing
Types of mobile devices

1) Real devices
2) Virtual devices
a) Emulator
b) Simulator
Challenges of Mobile testing
● Multiple platforms and device fragmentation: Multiple OS types and versions, screen sizes and quality of
display.
● Hardware differences in various devices: Various types of sensors and difficulty in simulating test conditions for
constrained CPU and RAM resources.
● Variety of software development tools required by the platforms.
● Difference of user interface designs and user experience (UX) expectations from the platforms.
● Diverse users and user groups.
● Various app types with various connection methods.
● High feedback visibility resulting from bugs that have a high impact on users which may easily result in them
publishing feedback on online marketplaces.
● Unavailability of newly launched devices requiring the use of mobile emulators/simulators
Impacts of the Challenges

● Large numbers of combinations to be tested.


● Large numbers of devices required for testing, which drives up the cost.
● New features being released in every version of underlying operating system.
● Guidelines to be considered for the various platforms.
● Changes in the available upload and download speeds based on data plan
Mobile Testing Process

1) Planning
2) Identifying testing types
3) Test case and script design
4) Manual and automated testing
5) Usability testing
6) Performance testing
7) Functional testing
8) Security testing
9) Device testing
10) Launch Plan
Mobile Application Testing Strategies

1) Selection of the device


2) Emulators
3) Physical Devices
4) Automation and manual testing
How Is Mobile Testing Different From Web Testing?

1) Screen size
2) Storage
3) Performance Speed
4) Internet access
5) Cross-platform compatibility
6) Offline mode
Some Tips For Effective Mobile Testing
● Test early and test often by using testing as part of your app development
● Split your app testing into smaller units
● Commit your efforts towards performance and load testing
● Distribute the testing efforts across the entire team members including developers
● Include experts or experienced people in your QA team
● Know the platform’s (Android or iOS) user interface/user experience (UI/UX) guidelines
before starting the mobile app testing
● Test your application on multiple devices
● Test the key app features in realistic scenarios
● Don’t rely completely on emulators
● Keep an eye on proper functioning of updates including OS versions
● Test for all supported screen sizes and touch interfaces to validate seamless user
experience
Things to be tested in mobile application

1. Installation
2. Uninstallation
3. Application Logo
4. Splash
5. Low memory
6. Visual Feedback
7. Exit application
8. Start/restart application
API

- Application Programming Interface


- Allows two applications to communicate with each other

Types of Web APIs


● Open APIs: Also known as Public API
● Partner APIs
● Internal APIs
What is API testing?

- A type of software testing that performs verification directly at the API level
- API testing puts much more emphasis on testing business logic, data responses, security, and performance bottlenecks.
- In API testing, instead of using standard user inputs (keyboard) and outputs, you use software to send calls to the API, get the output, and note down the system's response.
- API tests are very different from GUI tests and don't concentrate on the look and feel of an application.
Benefits of API Testing

1) Earlier testing
2) Language-independent
3) GUI-independent
4) Improved test coverage
What you need before carrying out API testing

● Who is your target audience?


● Who is your API consumer?
● In what environment(s) will the API typically be used?
● What problems are we testing for?
● What are your priorities to test?
● What is supposed to happen in normal circumstances?
● What could potentially happen in abnormal circumstances?
● What is defined as a Pass or a Fail?
● What data is the desired output?
● Who on your team is in charge of testing what?
API Testing approach

1. Understand the functionality of the API program and clearly define the scope of the program.
2. Apply testing techniques such as equivalence classes, boundary value analysis, and error guessing, and write test cases for the API.
3. Plan and define the input parameters for the API appropriately.
4. Execute the test cases and compare expected and actual results.
Why would you want to test API?

● Checking API return values based on the input condition


● Verifying if the API doesn’t return anything at all or the wrong
results
● Verifying if the API triggers some other event or calls another API
● Verifying if the API is updating any data structures.
How to do API testing?

1. Review the API specification.


2. Determine API testing requirements.
3. Define input parameters.
4. Create positive and negative tests.
5. Select an API testing tool.
API Testing practices
1. Test for the typical or expected results first
2. Add stress to the system through a series of API load tests
3. Test for failure. Make sure you understand how your API will fail.
4. Prioritize API function calls so that testers can test quickly and easily
5. Perform well-planned call sequencing
6. API Test cases should be grouped by test category
7. Parameters selection should be explicitly mentioned in the test case itself
8. Each test case should be as self-contained and independent from dependencies as
possible
9. To ensure complete test coverage, create API test cases for all possible input
combinations of the API.
HTTP Methods for API Testing

Major methods used during API Testing are listed below:


● GET: It is used to retrieve data from a server at the specified resource.
● POST: It is used to send data to the API server to create or update a resource.
● PUT: It is used to send data to the API to create or update a resource. It is used to update
information on the server.
● DELETE: This request deletes information on the server.
● OPTIONS: This request should return data describing what other methods and operations
the server supports at the given URL.
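A short sketch of these methods in action, using the Python `requests` library against the public jsonplaceholder.typicode.com placeholder API (a fake API commonly used for demos; a real project would target its own endpoints):

```python
# API-testing sketch using the `requests` library against a public
# placeholder API (jsonplaceholder.typicode.com) for illustration.
import requests

BASE = "https://jsonplaceholder.typicode.com"

# GET: retrieve a resource and verify the status code and payload shape.
resp = requests.get(f"{BASE}/posts/1", timeout=10)
assert resp.status_code == 200
assert "title" in resp.json()

# POST: send data to create a resource; the fake API echoes it back.
resp = requests.post(f"{BASE}/posts",
                     json={"title": "qa", "body": "notes"}, timeout=10)
assert resp.status_code == 201

# DELETE: remove the resource and check the server's response code.
resp = requests.delete(f"{BASE}/posts/1", timeout=10)
assert resp.status_code == 200
```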
Some HTTP status codes

| Status Code | Description |
| --- | --- |
| 200 OK | Indicates that the request has succeeded. |
| 400 Bad Request | The request could not be understood by the server due to incorrect syntax, invalid request message parameters, etc. |
| 401 Unauthorized | Indicates that the request requires user authentication information. |
| 403 Forbidden | The client does not have access rights to the content, but its identity is known to the server. |
| 404 Not Found | The server cannot find the requested resource. |
| 408 Request Timeout | The server did not receive a complete request from the client within the server's allotted timeout period. |
| 409 Conflict | The request could not be completed due to a conflict with the current state of the resource. |
| 415 Unsupported Media Type | The media type in the Content-Type of the request is not supported by the server. |
| 500 Internal Server Error | The server encountered an unexpected condition that prevented it from fulfilling the request. |
| 502 Bad Gateway | The server got an invalid response while working as a gateway to get the response needed to handle the request. |
| 503 Service Unavailable | The server is not ready to handle the request. |
| 504 Gateway Timeout | The server is acting as a gateway and cannot get a response in time for a request. |
Types of Bugs that API testing detects

● Fails to handle error conditions gracefully


● Unused flags
● Missing or duplicate functionality
● Reliability Issues. Difficulty in connecting and getting a response from API.
● Security Issues
● Multi-threading issues
● Performance Issues. API response time is very high.
● Improper errors/warning to a caller
● Incorrect handling of valid argument values
● Response Data is not structured correctly (JSON or XML)
Performance Testing

- The process of determining the speed, responsiveness and stability of a computer, network, software program or
device under a workload.
- Performance testing measures the quality attributes of the system, such as scalability, reliability and resource usage.
- Speed – It identifies whether the response of the application is fast.
- Scalability – It determines the maximum user load.
- Stability – It checks if the application is stable under varying loads.
Goals of Performance Testing

- Anticipate before going live


- Reproduce crash
- Identify infra requirements (server side)
- Account for future growth
Why is performance testing important?
- Know about maximum load system can handle
- Find out system responsiveness under load
- Identify system behavior under different load conditions
- Identify any performance bottlenecks

How
- Typically done using performance testing tools
- These tools create virtual users to carry out different business transactions
- Multiple load generators are used to generate large user load
- The tools provide different performance metrics
Who
- Testers play a key role in performance testing
- Collaboration is essential between different parties
Types of Performance Testing

1) Load Testing
- It is conducted to understand the application behavior under a specific expected user load.
- It is used to check application performance under peak load conditions.
- The main goal is to validate that a system can handle the expected load with acceptable
performance.

Common issues
- Slower response time
- Increased error rate after certain load
- Increased resource utilization
- One or more application components failing or misbehaving
2) Soak Test

- It is done to determine if the system can sustain the continuous expected load for long
duration.
- It is used to check the application performance under average load condition.
- It is also known as Endurance Testing or Longevity Testing.

Common issues
- Memory leaks
- Application crash
- Slower response time
- Increased DB resource utilization
3) Stress Test

- It is done to determine the system’s robustness when it’s put at extreme load.
- It is used to check the application performance under extreme load (load beyond regular)
conditions.
- The applied load can be 150% to 500% of peak load in stress test.

Common issues
- Application crash
- Increased error rate after certain load
- Slower response time
- Increased resource utilization
- One or more application components failing or misbehaving
- Any specific performance issues
Performance Testing Life Cycle
