Manual Testing Concepts
Objectives of Software Testing:
1. To prevent defects.
2. Finding defects that may get created by the programmer while developing the software.
3. Helps to provide a quality product.
4. To ensure that it satisfies the BRS and SRS.
5. To ensure that the result meets the business and user requirements.
6. Gaining confidence and providing information about the level of quality.
What is Debugging:
1. Once the development team receives the testing team’s report, they start debugging. This
phase aims to locate the bug and remove it from the software, and it is done manually.
2. In this process, a special tool called a debugger is used to locate the bugs; most
programming environments include a debugger.
3. Some popular Debugger tools: WinDbg, OllyDbg, IDA Pro...
Psychology of testing:
In software testing, psychology plays an extremely important role.
It is one of those factors that stays behind the scenes but has a great impact on the end result.
It is mainly dependent on the mindset of the developers and testers, as well as the quality of
communication between them. Moreover, the psychology of testing helps them work towards a
common goal.
The three sections of the psychology of testing are:
o The mindset of Developers and Testers.
o Communication in a Constructive Manner.
o Test Independence.
QA Vs. QC:
Quality Assurance:
QA is process-oriented.
QA is a proactive process.
QA focuses on preventing defects.
QA team works with the development team to produce quality software.
QA ensures that approaches and techniques are implemented correctly (during software
development).
QA is responsible for SDLC.
E.g., Verification
Quality Control:
QC is product-oriented.
QC is a reactive process.
QC focuses on identifying/detecting the defects.
QC comes into the picture after Quality Assurance.
QC verifies that the developed project meets the defined quality standards.
QC is responsible for STLC.
E.g., Validation
QE (Quality Engineering):
A Quality Engineer writes code, but for software testing purposes.
Quality Engineers are essentially Automation Testers.
What is a QMS (Quality Management System)?
A quality management system is a collection of business processes focused on consistently
meeting customer requirements and enhancing their satisfaction. It is aligned with an
organization's purpose and strategic direction.
A quality management system (QMS) is a system that documents the policies, business processes,
and procedures necessary for an organization to create and deliver its products or services to its
customers, and therefore increase customer satisfaction through high product quality.
It has long been accepted that continuous process improvement is based on many small
evolutionary steps rather than larger revolutionary innovations. The Capability Maturity Model
(CMM) provides a framework for organizing these evolutionary steps into five maturity levels that
lay successive foundations for continuous process improvement.
This methodology is at the heart of most management systems which are designed to improve
the quality of the development and delivery of all products and services.
The five Software Capability Maturity Model levels have been defined as:
1. Initial
The software process is characterized as ad hoc, and occasionally even chaotic. Few
processes are defined, and success depends on individual effort and heroics.
2. Repeatable
Basic project management processes are established to track cost, schedule, and
functionality. The necessary process discipline is in place to repeat earlier successes on
projects with similar applications.
3. Defined
The software process for both management and engineering activities is documented,
standardized, and integrated into all processes for the organization. All projects use an
approved version of the organization’s standard software process for developing and
maintaining software.
4. Managed
Detailed measures of the software process and product quality are collected. Both the
software process and products are quantitatively understood and controlled.
5. Optimizing
Continuous process improvement is enabled by quantitative feedback from the process
and from piloting innovative ideas and technologies.
=============================================================================
If a software application is developed for a specific customer based on their requirement, then it
is called a Project.
If a software application is developed for multiple customers based on the market requirements,
then it is called a Product.
4. Absence-of-errors is a fallacy
Some organizations expect that testers can run all possible tests and find all possible defects,
but this is impossible. It is a fallacy (i.e., a wrong belief) to expect that just finding and fixing a
large number of defects will ensure the success of a system.
For example, testing all specified requirements and fixing all defects found could still produce
a system that is difficult to use and does not fulfill the users’ needs and expectations.
5. Testing is context-dependent
Testing is done differently in different contexts.
For example, testing in an Agile project is done differently than testing in a sequential
software development lifecycle project.
=============================================================================
SDLC is a process used by the software industry to design, develop and test software.
SDLC process aims to produce high-quality software that meets customer expectations.
Software development should be completed within the pre-defined time frame and cost.
SDLC consists of a detailed process that explains how to plan, build, and maintain specific
software.
This disciplined, repeatable process is the prime reason SDLC is important for developing a software system.
Phases in SDLC:
1. Requirement Analysis:
We have to collect and understand the requirements of the customer. Normally BAs, Project
Managers, and Product Managers are involved in this phase.
They will talk to the customer, get the requirements, and prepare a certain number of
documents (BRS/SRS).
This stage gives a clearer picture of the scope of the entire project and forecasts issues.
This helps companies to finalize the necessary timeline to finish the work of that system.
2. Design:
3. Coding:
Once the system design phase is over, the next phase is coding. In this phase, developers start
to build the entire system by writing code using the chosen programming language.
In the coding phase, tasks are divided into units or modules and assigned to the various
developers.
It is the longest phase of the Software Development Life Cycle process.
In this phase, the developer needs to follow certain predefined coding guidelines.
They also need to use programming tools like compilers, interpreters, and debuggers to
generate and implement the code.
4. Testing:
Once the software is complete, it is deployed in the testing environment. The testing team
starts testing the functionality of the entire system. This is done to verify that the entire
application works according to the customer’s requirements.
During this phase, the QA/testing team may find bugs/defects, which they communicate
to the developers. The development team then fixes the bugs and sends the build back to QA
for a retest. This process continues until the software is bug-free, stable, and working
according to the business needs of that system.
5. Deployment / Installation:
Once the software testing phase is over and no bugs or errors are left in the system then the
final deployment process starts. Based on the feedback given by the project manager, the
final software is released and checked for deployment issues if any.
6. Maintenance:
Once the system is deployed, and customers start using the developed system, the following
three activities may occur:
Bug fixing - Bugs are reported because of some scenarios which are not tested
at all.
Upgrade - Upgrading the application to the newer versions of the Software.
Enhancement - Adding some new features to the existing software.
=============================================================================
There is no single model that can be considered the best software development process. But
nowadays, the Agile model is the most popular and widely used by software organizations. In
this model, after every development stage (Sprint), the user is able to see whether the product
meets their requirements. In this way, risks are reduced, as continuous changes are made based
on the client’s feedback.
=============================================================================
Types (Models) of SDLC:
1. Waterfall Model
2. Spiral Model
3. Rapid Application Development (RAD) Model
4. Iterative or Incremental Model
5. Prototype Model
6. V Model
7. Agile Model
1. Waterfall Model:
Waterfall is one of the earliest and most commonly used software development models
(processes), in which the development process looks like the flow, moving step by step
through the phases like analysis, design, coding, testing, deployment/installation, and
support. So, it is also known as the “Linear Sequential Model”.
This SDLC model includes gradual execution of every stage completely. This process is
strictly documented and predefined with features expected for every phase of this
software development life cycle model.
=============================================================================
2. Risk analysis and resolving: As the process goes to the second quadrant, all likely
solutions are sketched, and then the best solution among them gets selected. Then
the different types of risks linked with the chosen solution are recognized and
resolved through the best possible approach. As the spiral goes to the end of this
quadrant, a project prototype is put up for the most excellent and likely solution.
3. Develop the next level of product: As the development progress goes to the third
quadrant, the well-known and most required features are developed as well as
verified with the testing methodologies. At the end of this third quadrant, new
software or the next version of existing software is ready to deliver.
4. Plan the next Phase: As the development process proceeds in the fourth quadrant,
the customers appraise the developed version of the project and report if any further
changes are required. At last, planning for the next (subsequent) phase is initiated.
Prototype: A prototype is a blueprint of the software.
Initial requirements from the customer ---> PROTOTYPE ---> customer ---> Design, Coding,
Testing....
=============================================================================
3. RAD Model:
RAD stands for “Rapid Application Development”. As the name suggests, the RAD model aims to
develop high-quality software products quickly, gathering requirements through workshops.
=============================================================================
4. Iterative Model:
1. In the iterative model, the application is divided into small parts, and development
is done by specifying and implementing only small parts of the software, which
can be reviewed to identify further requirements.
2. This process is repeated, creating a new version of the software for each cycle of the
model. The iterative model is very simple to understand and use. In this model, we
don’t start by developing the complete software with a full specification of requirements.
=============================================================================
5. Prototype Model:
It is a trial version of the software: a sample product that is designed before the
actual development starts.
This model is used when user requirements are not very clear, and this software is tested
based on raw requirements obtained from the user. The available types of prototyping
are Rapid, Incremental, Evolutionary, and Extreme.
Prototype Model will work like --
1. We will take basic requirements
2. Based on the discussion, we will create an initial prototype (A prototype – is a working
model)
3. Once the working prototype is built, we will ask the client to check and use it
4. Next step will be to test and enhance
5. Again, we will call the user to check and use it, and again we will make changes as per
the user's feedback until we get all the requirements from the user.
6. Once all the requirements are fulfilled and the client agrees, the last step is the
sign-off (Sign off - deliver the product and finish the contract)
=============================================================================
6. V Model:
In parallel to the software development phases, a corresponding series of test phases also runs
in this model. Each stage ensures a specific type of testing is done, and only once that testing
passes does the next phase start.
When the requirement is well-defined and unambiguous (certain), we use the V-Model.
It is also known as Verification and Validation model.
Coding Phase:
After designing, the coding phase is started. Based on the requirements, a suitable
programming language is decided. There are some guidelines and standards for coding.
Before check-in to the repository, the final build is optimized for better performance, and the
code goes through many code reviews.
=============================================================================
1. Review:
Requirement reviews
Design reviews
Code reviews
Test Plan reviews
Test cases reviews etc.
2. Walkthrough:
It is an informal review.
It is not pre-planned and can be done whenever required.
Author reads the documents or code and discusses it with peers.
Also, the walkthrough does not have minutes of the meeting.
3. Inspection:
It is the most formal review type.
An inspection has a proper schedule, which is communicated via email to the
concerned developers/testers.
At least 3-8 people sit in the meeting: a reader, a writer, and a moderator,
plus the concerned people.
1. Unit Testing
2. Integration Testing
3. System Testing
4. User Acceptance Testing (UAT)
=============================================================================
DESCRIPTION
1. Requirement Analysis
2. Test Planning
3. Test Design
4. Test Environment Setup
5. Test Execution
6. Test Closure
1. Requirement Analysis: In this phase, the requirements documents are analyzed and
validated, and the scope of testing is defined.
2. Test Planning: In this phase, test plan strategy is defined, estimation of test effort is defined
along with automation strategy and tool selection is done.
3. Test Design: In this phase test cases are designed; test data is prepared, and automation
scripts are implemented.
4. Test Environment Setup: A test environment closely simulating the real-world environment
is prepared.
5. Test Execution: To perform actual testing as per the test steps.
6. Test Closure: Test Closure is the final stage of STLC, where we prepare all the detailed
documentation that must be submitted to the client at the time of software delivery, such as the
test report, defect report, test case summary, RTM details, and release notes.
=============================================================================
What is the list of test closure documents?
It includes,
1. test case documents, (i.e., Test Case Excel sheet we prepare during actual testing)
2. test plan, test strategy,
3. Release note
4. test scripts,
5. test data,
6. traceability matrix, and
7. test results and reports like bug report, execution report etc.
What is a Build?
It is a number/identity given to installable software that the development team hands over to
the testing team.
What is release?
It is a number/ identity given to Installable software that is handed over to the customer/client
by the testing team (or sometimes directly by the development team)
What is deployment?
Deployment is the mechanism through which applications, modules, updates, and patches are
delivered from developers to end-user/client/customer.
How would you define that testing is sufficient and it’s time to enter the Test Closure phase? Or,
when should we stop testing?
Testing can be stopped when one or more of the following conditions are met,
1. After test case execution – The testing phase can be stopped when one complete
cycle of test cases is executed after the last known bug fix with the agreed-upon value
of pass percentage.
2. Once the testing deadline is met - Testing can be stopped after deadlines get met
with no high priority issues left in the system.
3. Based on Mean Time Between Failures (MTBF) - MTBF is the time interval between
two inherent failures. Based on stakeholders’ decisions, if the MTBF is quite large,
one can stop the testing phase.
4. Based on code coverage value – The testing phase can be stopped when the
automated code coverage reaches a specific threshold value with sufficient pass
percentage and no critical bug.
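As a sketch of the pass-percentage and MTBF criteria above, the two values can be computed as follows (all numbers are hypothetical):

```python
# Hypothetical helpers for two common "stop testing" signals:
# pass percentage of the executed cycle, and Mean Time Between Failures.
def pass_percentage(passed, executed):
    """Percentage of executed test cases that passed."""
    return 100.0 * passed / executed

def mtbf(total_operational_hours, failure_count):
    """MTBF = total operational time / number of failures."""
    return total_operational_hours / failure_count

# Example: 190 of 200 executed cases passed; 3 failures in 600 hours.
print(pass_percentage(190, 200))  # 95.0
print(mtbf(600, 3))               # 200.0 hours between failures
```

Whether 95% and 200 hours are "good enough" is still a stakeholder decision, as the text notes.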
=============================================================================
Methods of Testing (Testing Methods) (White box, Black box, Grey box)
1. Black Box testing
2. White Box Testing
3. Grey Box Testing
Black Box Testing:
Black box testing is the testing of requirements and functionality without knowledge of
the internal code. Inputs are fed into the system, and the outputs are examined to
determine whether they are expected or unexpected.
White Box Testing:
White box testing is testing based on knowledge of the internal logic (algorithms) of an
application’s code. It’s an approach that attempts to cover the software’s internals in
detail. White box testing is also known as ‘glass box testing’, ‘clear box testing’,
‘transparent box testing’, and ‘structural testing’.
Grey Box Testing:
Grey box testing uses a combination of black and white box testing. Grey box test cases
are designed with knowledge of the internal logic (algorithms) of an application’s code,
but the actual testing is performed as black box. Alternatively, a limited amount of
white-box testing is performed, followed by conventional black-box testing.
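A minimal white-box sketch: the test cases below are designed from knowledge of the internal branches of a hypothetical `apply_discount` function, one case per branch:

```python
# Hypothetical function under test: white-box test cases are derived
# from its internal branch structure, not just its requirements.
def apply_discount(price, is_member):
    if is_member:              # branch 1: members get 10% off
        return round(price * 0.90, 2)
    return price               # branch 2: non-members pay full price

# One test case per branch gives 100% branch coverage.
assert apply_discount(100.0, True) == 90.0    # exercises the if-branch
assert apply_discount(100.0, False) == 100.0  # exercises the fall-through
```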
1. Unit Testing:
A unit is a single component or module of software.
Unit testing is conducted on a single program or a single module.
Unit testing is a white box testing technique.
The developers conduct Unit testing.
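A minimal unit-test sketch using Python’s `unittest` module; the `add` function stands in for a hypothetical unit under test:

```python
import unittest

# Hypothetical unit under test: a single function (one "unit" of the software).
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Run the unit tests programmatically and check they all pass.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```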
2. Integration Testing:
Integration testing is performed between two or more modules.
Integration testing focuses on checking data communication between multiple
modules.
Integration testing is a white box testing technique.
Integration testing is conducted by the tester at the application level (at the UI level).
Integration testing is conducted by the developer at the coding level.
3. System Testing: (This is the actual area where testers are mostly involved.)
Testing the overall functionality of the application with respective client
requirements.
It is a black box technique.
The testing team conducts System testing.
After completion of component (unit) and integration level testing, we start System
testing.
Usability Testing:
This testing validates whether the application provides context-sensitive help to the
user.
Checking how easily the end-users can understand and operate the application is
called usability testing.
This is like a user manual so that the user can read the manual and proceed further.
Functional Testing:
In functional testing, we check the functionality of the software.
Functionality describes what software does. Functionality is nothing but the behavior
of the application.
Functional testing talks about how your feature should work.
I. Objective Properties Testing
II. Database Testing: DML operations like insert, delete, update, select
III. Error Handling
IV. Calculation/Manipulations Testing
V. Links Existence and Links Execution
VI. Cookies and Sessions
Non-functional Testing
a. Performance Testing
Load Testing
Stress Testing
Volume Testing
b. Security Testing
c. Recovery Testing
d. Compatibility Testing
e. Configuration Testing
f. Installation Testing
g. Sanitation / Garbage Testing
h. Endurance testing
i. Scalability testing.
a. Performance Testing: Checks the speed of the application.
Load: Gradually increase the load on the application and check its
speed. Here, load means data.
Stress: Suddenly increase/decrease the load on the application and check
its speed.
Volume: Check how much data the application can handle. Here we feed
huge amounts of data into the system until it hangs. Generally, this test is
performed to check how the system responds to bulk data at a time.
c. Recovery Testing:
Check the system change from abnormal to normal.
d. Compatibility Testing:
Forward Compatibility
Backward Compatibility
Hardware Compatibility (Configuration testing)
e. Configuration Testing:
It is a combination of hardware and software, in which we need to test
whether they are communicating properly or not. In simple words, we check
how the data flows from one module to another.
f. Installation Testing:
Check screens are clear to understand.
Screens navigation
Simple or not.
Un-installation.
g. Sanitation / Garbage Testing
If an application provides extra features/functionality beyond the
requirements, then we consider them a defect.
=============================================================================
Functional Testing:
Non-Functional testing:
Test Techniques / Test Design Techniques / Test Data / Test Written Techniques:
Data
Coverage (cover every area/functionality of the feature)
Test Design Techniques (During Designing Test Cases) (for Black Box Testing):
1. Equivalence Class Partitioning (ECP)
2. Boundary Value Analysis (BVA)
3. Decision Table
4. State Transition
5. Error Guessing
1. Equivalence Class Partitioning (ECP):
a. Partition the data into various classes, select data from each class, and test it. This
reduces the number of test cases and saves testing time.
b. Value check.
c. Classify/divide/partition the data into multiple classes.
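A small sketch of ECP, assuming a hypothetical age field that accepts values 18-60:

```python
# Hypothetical requirement: an age field accepts values 18-60.
# ECP splits the input domain into classes; one representative value
# per class is enough to test the whole class.
def is_valid_age(age):
    return 18 <= age <= 60

partitions = {
    "invalid_below": 10,   # any value < 18 behaves the same
    "valid": 35,           # any value in 18..60 behaves the same
    "invalid_above": 70,   # any value > 60 behaves the same
}

assert is_valid_age(partitions["valid"]) is True
assert is_valid_age(partitions["invalid_below"]) is False
assert is_valid_age(partitions["invalid_above"]) is False
```

Three test values cover the whole input domain instead of one test per possible age.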
2. Boundary Value Analysis (BVA):
*** Input Domain testing: The values entered in the textbox/input fields are verified.
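A small BVA sketch for the same hypothetical 18-60 age field, testing the values at and just around each boundary:

```python
# Hypothetical 18-60 age field: BVA tests values at and around the
# boundaries, where defects are most likely to hide.
def is_valid_age(age):
    return 18 <= age <= 60

# min-1, min, min+1, max-1, max, max+1
boundary_values = [17, 18, 19, 59, 60, 61]
results = [is_valid_age(v) for v in boundary_values]
assert results == [False, True, True, True, True, False]
```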
3. Decision Table:
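As a sketch of the technique, a hypothetical login feature can be expressed as a decision table that maps every combination of conditions to an expected action:

```python
# Hypothetical login feature: a decision table enumerates every
# combination of conditions and the expected outcome for each.
# Conditions: (valid_username, valid_password) -> expected outcome
decision_table = {
    (True,  True):  "home page",
    (True,  False): "error message",
    (False, True):  "error message",
    (False, False): "error message",
}

def login(valid_username, valid_password):
    return "home page" if valid_username and valid_password else "error message"

# Each row of the table becomes one test case.
for conditions, expected in decision_table.items():
    assert login(*conditions) == expected
```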
4. State Transition:
a. In State Transition Technique input is given in sequence one step at a time. Under this
technique we can test for limited set of input values.
b. The technique should be used when the testing team wants to test sequence of events
which happen in the application under test.
c. The tester can perform this action by entering various input conditions in a sequence.
d. In the State transition technique, the testing team provides positive as well as negative
input test values for evaluating the system behavior.
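A small sketch of state transition testing, assuming a hypothetical PIN flow where three wrong attempts block the card; inputs are fed in sequence and the state is checked after each one:

```python
# Hypothetical ATM PIN flow: after 3 wrong PINs the card is blocked.
# State transition testing feeds inputs one step at a time and checks
# the resulting state after each transition.
class PinStateMachine:
    def __init__(self):
        self.attempts = 0
        self.state = "waiting_for_pin"

    def enter_pin(self, correct):
        if self.state == "blocked":
            return self.state            # no transition out of blocked
        if correct:
            self.state = "authenticated"
        else:
            self.attempts += 1
            if self.attempts >= 3:
                self.state = "blocked"
        return self.state

m = PinStateMachine()
assert m.enter_pin(False) == "waiting_for_pin"  # 1st wrong attempt
assert m.enter_pin(False) == "waiting_for_pin"  # 2nd wrong attempt
assert m.enter_pin(False) == "blocked"          # 3rd wrong attempt blocks
assert m.enter_pin(True) == "blocked"           # correct PIN cannot unblock
```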
5. Error Guessing:
a. Error guessing is one of the testing techniques used to find bugs in a software application
based on the tester's prior experience.
b. In Error guessing we do not follow any specific rules.
c. It depends on Tester’s Analytical skills and experience.
d. Some of the examples are,
Submitting a form without entering values.
Entering invalid values such as entering alphabets in the numeric field.
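The two examples above can be sketched as error-guessing checks against a hypothetical numeric quantity field:

```python
# Hypothetical numeric quantity field: error guessing tries the inputs
# an experienced tester suspects will break it.
def parse_quantity(text):
    if not text.strip():
        raise ValueError("quantity is required")
    if not text.strip().isdigit():
        raise ValueError("quantity must be numeric")
    return int(text)

# Guessed inputs: empty submission, whitespace, alphabets in a numeric field.
guessed_inputs = ["", "   ", "abc", "12a"]
for bad in guessed_inputs:
    try:
        parse_quantity(bad)
        raised = False
    except ValueError:
        raised = True
    assert raised  # every guessed input should be rejected

assert parse_quantity("42") == 42  # a normal value still works
```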
=============================================================================
Test Plan Vs. Test Strategy:
=============================================================================
=============================================================================
=============================================================================
Test Environment:
1. Test Environment is a platform specially built for test case execution on the software product.
2. It is created by integrating the required software and hardware along with proper network
configuration.
3. Test environment simulates production/real-time environment.
4. Another name for the test environment is Test Bed.
5. This is nothing, but an environment created to execute the Test Cases.
=============================================================================
Test Execution:
1. To perform actual testing as per the test steps, i.e., during this phase the test team carries out
the testing based on the test plans and the test cases prepared.
2. Entry Criteria (Inputs): Test Cases, Test Data, and Test Plan.
3. Activities:
Test cases are executed based on test planning.
Status of test cases is marked, like passed, failed, blocked, run, etc.
Documentation of the test results and log defects for failed cases are done.
All the blocked and failed test cases are assigned bug IDs.
Retesting once they are fixed.
Defects are tracked till closure.
4. Deliverables (Outputs): Provides defect report and test case execution report with completed
results.
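The execution activities above can be sketched as a small status log; the case and bug IDs are hypothetical:

```python
# Sketch of the execution activities: mark each case's status and
# assign bug IDs to failed/blocked cases (all IDs are hypothetical).
executions = [
    {"id": "TC-1", "status": "Passed"},
    {"id": "TC-2", "status": "Failed"},
    {"id": "TC-3", "status": "Blocked"},
]

next_bug = 100
for case in executions:
    if case["status"] in ("Failed", "Blocked"):
        case["BugID"] = f"BUG-{next_bug}"   # log a defect for tracking
        next_bug += 1

# Deliverable: a defect/execution report mapping test cases to bugs.
report = {c["id"]: c.get("BugID") for c in executions}
assert report == {"TC-1": None, "TC-2": "BUG-100", "TC-3": "BUG-101"}
```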
=============================================================================
3. Prepare a traceability matrix template: For a requirements traceability matrix template, you
can create a spreadsheet in Excel and add a column for each artifact that you have collected.
The columns in the Excel sheet will be: Requirements, Test cases, Test results
4. Adding the artifacts: You can start adding the artifacts you have to the columns. You can now
copy and paste requirements, test cases, test results & bugs in the respective columns. You
need to ensure that the requirements, test cases, and bugs have unique ids. You can add
separate columns to denote the requirement id such as Requirement_id, TestCaseID, BugID,
etc.
5. Update the traceability matrix: Updating the traceability matrix is an ongoing job which
continues until the project completes. If there is any change in the requirements, you need to
update the traceability matrix. There might be a case that a requirement is dropped; you need
to update this in the matrix. If a new test case is added or a new bug is found, you need to
update this in the requirements traceability matrix.
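A minimal sketch of such a traceability matrix as plain rows; all requirement, test case, and bug IDs here are hypothetical:

```python
# Sketch of a requirements traceability matrix: each row links a
# requirement to its test cases and results (all IDs hypothetical).
rtm = [
    {"Requirement_id": "REQ-1", "TestCaseID": ["TC-1", "TC-2"], "Result": "Pass"},
    {"Requirement_id": "REQ-2", "TestCaseID": ["TC-3"], "Result": "Fail", "BugID": "BUG-7"},
]

# Coverage check: every requirement must map to at least one test case.
uncovered = [row["Requirement_id"] for row in rtm if not row["TestCaseID"]]
assert uncovered == []

# Ongoing update: a new bug found against REQ-1 is added to its row.
rtm[0]["BugID"] = "BUG-9"
assert rtm[0]["BugID"] == "BUG-9"
```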
=============================================================================
=============================================================================
Defects/Bugs/Issues:
1. Any mismatched functionality found in an application is called a Defect/Bug/Issue.
2. During Test Execution Test engineers are reporting mismatches as defects to developers
through templates or using tools.
3. Defect Reporting Tools:
o Clear Quest
o DevTrack
o Jira
o Quality Center
o Bug Zilla etc.
Test Management tools and Bug tracking tools are completely different.
Test (Case) Management Tool Vs. Project Management Tool Vs. Bug Tracking Tool.
=============================================================================
=============================================================================
Priority
1. P1 (High)
2. P2 (Medium)
3. P3 (Low)
Defect Severity:
Severity is assigned/given by the QA Testers.
It affects the functionality.
Severity describes the seriousness of the defect and how much impact on Business workflow
(functionality). It is categorized into Blocker, Critical, Major, Minor.
[Image: Example]
=============================================================================
Defect Resolution:
After receiving the defect report from the testing team, the development team conducts a review
meeting to fix defects. Then they send a Resolution Type to the testing team for further
communication.
Resolution Types:
1. Accept
2. Reject
3. Duplicate
4. Enhancement
5. Need more information
6. Not Reproducible
7. Fixed
8. As Designed.
=============================================================================
a. Activities:
Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software Critical
Business Objectives, Quality.
Prepare test metrics based on the above parameters.
Document the learning out of the project.
Prepare Test summary report.
Qualitative and quantitative reporting of quality of the work product to the customer.
The result analysis to find out the defect distribution by type and severity.
b. Deliverables:
c. Test Metrics:
=============================================================================
QA/Testing Activities:
Understanding the requirements and functional specifications of the application.
Identifying required Test Scenarios.
Designing Test Cases to validate the application.
Setting up Test Environment (Test Bed).
Execute Test Cases to validate the application.
Log Test results (how many test cases pass/fail).
Defect reporting and tracking.
Retest fixed defects of the previous build.
Perform various types of testing in the application.
Reporting to the Test Lead about the status of assigned tasks.
Participating in regular team meetings.
Creating automation scripts.
Provides recommendations on whether or not the application/system is ready for production.
=============================================================================
Software Testing Terminologies: (Other Types of Testing):
a. Regression Testing
b. Re-testing
c. Exploratory testing
d. Adhoc Testing
e. Monkey Testing
f. Positive Testing
g. Negative Testing
h. End to End Testing
i. Globalization and Localization Testing
a. Regression testing:
Testing conducted on modified build (updated build) to make sure there will not be an impact on
existing functionality because of changes like adding/deleting/modifying features. Also, we can
say Smoke Testing is a small part of regression testing.
b. Re-Testing:
1. Whenever the developer fixes a bug, the tester tests the bug fix; this is called re-testing.
2. The tester closes the bug if the fix works; otherwise, it is re-opened and sent back to the developer.
3. To ensure that the defects which were found and posted in the earlier build are
fixed in the current build.
4. Example:
i. Build 1.0 was released, test team found some defects (Defect ID 1.0.1, 1.0.2)
and posted them.
ii. Build 1.1 was released, now testing the defects 1.0.1 and 1.0.2 in this build is
retesting.
c. Exploratory Testing:
Simultaneously learning the application, designing tests, and executing them,
without pre-written test cases; findings are documented as the tester explores.
d. Adhoc Testing:
Testing application randomly without any test cases or any business requirement
document.
Adhoc testing is an informal testing type with an aim to break the system.
Tester should have knowledge of application even though he does not have
requirements/test cases.
This testing is usually an unplanned activity.
e. Monkey/Gorilla Testing:
Testing the application randomly without any test cases or any business requirement
document.
Monkey testing is an informal testing type with an aim to break the system.
Tester does not have knowledge of the application.
Suitable for gaming applications.
f. Positive Testing:
Testing the application with valid inputs is called positive testing.
It checks whether an application behaves as expected with positive inputs.
g. Negative Testing
Testing the application with invalid inputs is called negative testing.
It checks whether an application behaves as expected with invalid (negative) inputs.
Positive Vs. Negative Test Cases:
For example, if a text box is listed as a feature and the FRS mentions that the text
box accepts 6-20 characters and only alphabets, then positive test cases use valid
alphabetic inputs of 6-20 characters, while negative test cases use inputs outside
that length or containing non-alphabetic characters.
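The FRS example above can be turned into concrete positive and negative test cases; the validation function here is a hypothetical sketch of that requirement:

```python
# Hypothetical sketch of the FRS rule: a text box accepting
# 6-20 characters, alphabets only.
def is_valid_name(text):
    return 6 <= len(text) <= 20 and text.isalpha()

# Positive test cases: valid inputs, expected to be accepted.
assert is_valid_name("Robert") is True        # exactly 6 alphabetic chars
assert is_valid_name("Alexandria") is True    # within 6-20 range

# Negative test cases: invalid inputs, expected to be rejected.
assert is_valid_name("Bob") is False          # fewer than 6 characters
assert is_valid_name("Robert123") is False    # digits are not allowed
assert is_valid_name("a" * 21) is False       # more than 20 characters
```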
=============================================================================
Agile – Scrum
Agile Principles:
Requirement changes are accepted from the customer.
So the customer does not need to wait, which provides customer satisfaction.
We develop, test, and release pieces of software to the customer with some features.
The Whole team works together toward achieving one goal.
Here we focus on F2F conversation.
We follow iterative and incremental nature.
Development Task:
Review the story
Estimate the story
Design
Code
Unit Testing
Integration Testing etc.
QA Task:
Review the story
Test Cases
Test Scenarios
Test Data
Test Review
Test Environments
Execute Test Cases
Report Bugs etc.
=============================================================================
Scrum: Scrum is a framework through which we build the software product by following Agile
Principles.
Scrum workflow -
User Stories → Product Backlog → Sprint Planning → Sprint Backlog → Sprint (1 – 3 weeks and
daily scrum meetings) → Product → Sprint Review and retrospective Meeting
Scrum Team:
1. Product Owner
2. Scrum Master
3. Development Team
4. QA Team
1. Product Owner:
Product Owner is the First Point of Contact.
He will get input from the customers.
Defines the features of the product.
Prioritizes the features according to their market value.
Adjusts features or priorities every iteration, as needed.
Product Owner can Accept or Reject the work result.
He will define features of the product in the form of User Stories or Epics.
2. Scrum Master:
The Scrum Master's main role is facilitating and driving the Agile process.
He acts like a manager/team lead for the Scrum Team.
He facilitates all the Scrum ceremonies.
=============================================================================
Definition of Ready (DoR): A user story meets the DoR when all points regarding it are
ready and clear to the team.
=============================================================================
Scrum Terminologies:
Product Backlog: Contains a list of all requirements (like user stories and epics). Prepared by Product
Owner.
Epic: Collection of related user stories. Epic is nothing but a large (high level) requirement.
User Story: A feature/module of the software. It defines the customer's needs; it is the
requirement phrased in the form of a story.
Task: The development team creates tasks to achieve the business requirements.
Sprint/Iteration: Period/time to complete (means development and testing) the user stories,
decided/selected by the Product Owner and Team. It is usually for 2-4 weeks of time.
Sprint Backlog: List of committed stories by the Developers and QAs for a specific Sprint.
Sprint Planning Meeting: This is the meeting with the team, to define what can be delivered in the
Sprint and its duration.
Sprint Review Meeting: Here the team walks through and demonstrates the features or stories it
implemented to the stakeholders.
Sprint Retrospective Meeting: Conducted only after the Sprint is complete. The entire team, including
the Product Owner and Scrum Master, should participate.
They mainly discuss three things:
What went well?
What went wrong?
What improvements are needed in the upcoming sprint?
Backlog Grooming Meeting:
In this meeting, the Scrum team meets along with the Scrum Master and Product Owner.
The Product Owner presents the business requirements in priority order; the team
discusses them and identifies the complexity, dependencies, and effort.
The team may also do story pointing at this stage.
Story Point: A rough estimate given by Developers and QA, usually expressed on the Fibonacci
series (1, 2, 3, 5, 8, ...).
Time Boxing in Scrum: Timeboxing means fixing a specific amount of time to complete a specified
amount of work; the Sprint itself is a timebox.
Scum of Scrums:
Suppose 7 teams are working on a project and each team has 7 members. Each team holds
its own scrum meeting. To coordinate among the teams, a separate meeting has to
be organized; that meeting is called the Scrum of Scrums.
An ambassador (usually a team lead, a designated person who represents the team) represents the
team in the Scrum of Scrums.
A few points discussed in the meeting are:
The progress of each team since the last meeting.
The tasks to be done before the next meeting.
Issues the teams faced while completing the last tasks.
Spike in Agile Scrum: A time-boxed story used for research or investigation when a story cannot yet be estimated.
Sprint Zero:
Sprint zero usually takes place before the formal start of the project.
This Sprint should be kept lightweight and relatively high level. It is about initial project
exploration and gaining an understanding of where you want to head, while keeping
velocity low.
Release Candidate: A build released to verify that no critical problem was left behind during
the last development period. It is used for testing and is equivalent to the final build.
Velocity:
Velocity is a metric used to measure the units of work completed in a given time
frame.
It is the sum of the story points the Scrum team completed over a sprint.
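As a sketch, velocity is just the sum of story points completed per sprint; the sprint data below is hypothetical:

```python
# Hypothetical sprint data: story points of stories marked "done" in each sprint
completed_points = {
    "Sprint 1": [3, 5, 2],
    "Sprint 2": [8, 3],
    "Sprint 3": [5, 5, 3],
}

# Velocity per sprint = sum of completed story points
velocity = {sprint: sum(points) for sprint, points in completed_points.items()}

# Average velocity over past sprints helps forecast how much
# the team can realistically commit to in the next sprint.
average_velocity = sum(velocity.values()) / len(velocity)
```

Teams typically use the average over the last few sprints, not a single sprint, when planning capacity.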
Burndown Chart:
Shows how much work is pending/remaining in the Sprint. Maintained daily by the Scrum
Master; progress is tracked through the Burndown chart.
The Burndown chart plots the estimated vs. actual effort of the Sprint's tasks.
It is used to check whether the stories are progressing toward completion of the committed
story points or not.
Burnup Chart: Shows the amount of work completed within a project.
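The difference between the two charts can be sketched with hypothetical sprint data: remaining points burn down day by day, while completed points burn up:

```python
# Hypothetical 5-day sprint with 20 committed story points
total_points = 20
completed_per_day = [0, 4, 3, 6, 5]  # points finished on each day

burned = 0
remaining = []   # burndown series: work left each day
completed = []   # burnup series: work done so far each day
for done_today in completed_per_day:
    burned += done_today
    remaining.append(total_points - burned)
    completed.append(burned)

# remaining -> [20, 16, 13, 7, 2]   (burndown: ideally ends at 0)
# completed -> [0, 4, 7, 13, 18]    (burnup: ideally ends at 20)
```

Here the sprint ends with 2 points unfinished, which is exactly the kind of gap a burndown chart makes visible early.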
=============================================================================
Difference between Scrum and Waterfall:
Feedback from the customers is received at an earlier stage in Scrum than in Waterfall.
New changes can be accommodated more easily in Scrum than in Waterfall, and rolling back
changes is also easier in Scrum.
Testing is considered a separate phase in Waterfall, unlike in Scrum.
There are three roles involved in Scrum: Product Owner, Scrum Master, and the
Scrum Team, which includes Developers, Testers, and BAs.
Product Backlog: Contains a list of all user stories. Prepared by Product Owner.
Sprint Backlog: Contains the User Stories committed by the Developer and QAs for a specific
Sprint.
=============================================================================
Extra Questions:
So, in the scrum, which entity is responsible for the deliverables? Scrum Master or Product Owner?
Neither the scrum master nor the product owner. It’s the responsibility of the team that owns the
deliverable.
How do you create the Burn-Down chart?
It is a tracking mechanism by which, for a particular sprint, day-to-day tasks are tracked to check
whether the stories are progressing toward completion of the committed story points or not.
Here, we should remember that the effort is measured in terms of story points and not hours.
How does agile testing (development) methodology differ from other testing (development)
methodologies?
In agile methodology, development is broken into small increments, and each increment is
tested as it is built. Several practices support this, like continuous communication with the
team and short, strategic changes to get the optimal result.
Do you think scrum can be implemented in all the software development processes?
Scrum is used mainly for
Complex projects.
Projects which have early and strict deadlines.
When we are developing any software from scratch.
In case you receive a story on the last day of the sprint to test and you find there are defects, what
will you do? Will you mark the story as done?
No, I will not be able to mark the story as done as it has open defects and the complete testing of
all the functionality of that story is pending. As we are on the last day of the sprint, we will mark
those defects as Deferred for the next sprint and we can spill over that story to the next Sprint.
How do you measure the complexity or effort in a sprint? Is there a way to determine and represent
it?
Complexity and effort are measured through “Story Points”. In Scrum, it’s recommended to use
the Fibonacci series to represent them, considering the development effort + testing effort + effort
to resolve dependencies and other factors required to complete a story.
When we estimate with story points, we assign a point value to each item.
To set a baseline, find the simplest story and assign it a value of 1; then assign values to the
other user stories relative to it, based on their complexity.
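The relative-sizing step above can be sketched as snapping raw relative sizes onto the Fibonacci scale. The story names and sizes below are hypothetical:

```python
# Fibonacci scale commonly used for story points
FIBONACCI_SCALE = [1, 2, 3, 5, 8, 13, 21]

def snap_to_scale(relative_size):
    # Snap a raw relative size to the nearest value on the scale.
    return min(FIBONACCI_SCALE, key=lambda p: abs(p - relative_size))

# Stories sized relative to the simplest one (baseline = 1 point)
raw_sizes = {
    "login label change": 1,    # the baseline story
    "password reset": 4.5,      # roughly 4-5x the baseline
    "report export": 12,        # much larger and riskier
}
story_points = {story: snap_to_scale(size) for story, size in raw_sizes.items()}
# -> {"login label change": 1, "password reset": 5, "report export": 13}
```

The widening gaps in the Fibonacci scale reflect that large stories carry more uncertainty, so fine-grained distinctions between them are not meaningful.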
During Review, suppose the product owner or stakeholder does not agree with the feature you
implemented what would you do?
First, we will not mark the story as done.
We will confirm the actual requirement with the stakeholder, update the user story, and
put it into the backlog. Based on the priority, we would pull the story into the next sprint.
Apart from planning, review, and retrospective, do you know any other ceremony in scrum?
These three meetings occur on a regular basis. Apart from these, we have one
more meeting, the Product Backlog Grooming Meeting, where the team, Scrum Master,
and Product Owner meet to understand the business requirements, split them into user stories, and
estimate them.
Can you give an example of where scrum cannot be implemented? In that case, what do you
suggest?
Scrum can be implemented in all kinds of projects. It is not only applicable to software but is also
implemented successfully in mechanical and engineering projects.
You are in the middle of a sprint and suddenly the product owner comes with a new requirement,
what will you do?
In an ideal case, the requirement becomes a story and moves to the backlog. Then based on the
priority, the team can take it up in the next sprint.
But if the priority of the requirement is really high, then the team will have to accept it into the
sprint; however, it has to be very well communicated to the stakeholder that incorporating a story in
the middle of the sprint may result in spilling over a few stories to the next sprint.
Which are the top agile metrics?
Sprint burndown metric: Shows how much work is pending/remaining in the Sprint. Maintained
daily by the Scrum Master; progress is tracked through the Burndown chart, which plots the
estimated vs. actual effort of the Sprint's tasks.
Velocity: A metric used to measure the units of work completed in a given
time frame.
Work category allocation: Provides a clear idea about where we are investing our
time and where to set priorities.
Defect removal awareness: Tracks how actively team members find and remove defects;
quality products are delivered through this awareness.
Cumulative flow diagram: With the help of this diagram, the uniformity of the workflow can be
checked; the X-axis shows time and the Y-axis shows the number of work items (effort).
Business value delivered: An entity that shows the team's working efficiency. Roughly 100
points are associated with each project, and business objectives are given values of
1, 2, 3, 5, and so on according to complexity, urgency, and ROI.
Defect resolution time: The time from when a team member detects a bug until it is fixed.
A series of steps is involved in fixing a bug:
Understanding the bug
Scheduling the fix
Fixing the defect
Handing over the resolution report
Time coverage: The amount of testing time given to the code in question. It is measured as the
ratio of the number of lines of code called by the test suite to the total number of relevant lines
of code (as a percentage).
SDLC models are selected according to the requirements of the development process. Each model
provides unique features for software development, so the best model may vary from project to
project. Nowadays, the Agile model is the most popular and the most widely adopted by
software firms.
Different Types of test plans in software testing?
Test plans can be used as supporting documentation for an overall testing objective (a master test
plan) and specific types of tests (a testing type-specific plan).
Master test plan: A master test plan is a high-level document for a project or product’s overall
testing goals and objectives. It lists tasks, milestones, and outlines the size and scope of a
testing project. It encapsulates all other test plans within the project.
Testing type-specific plan: Test plans can also be used to outline details related to a specific
type of test. For example, you may have a test plan for unit testing, acceptance testing, and
integration testing. These test plans drill deeper into the specific type of test being conducted.
Documents commonly referenced while preparing test plans:
Design docs
Process guidelines docs
Corporate standards docs, etc.
Test deliverables are the documents prepared during and after testing. Test deliverables are
delivered to the client not only for the completed activities but also for the activities we
are implementing for better productivity.
Test deliverables
Test plan document,
Test case document,
Test Result Documents (prepared during each type/phase of testing),
Test Report or Project Closure Report (prepared once we roll out the project to the client),
Coverage matrix, defect matrix, and Traceability Matrix,
Test design specifications,
Release notes,
Tools and their outputs,
Error logs and execution logs,
Problem reports and corrective action
=============================================================================