
Chapter 1

Fundamentals of Software Testing


Foundation Level Syllabus - International Software Testing Qualifications Board
Version 2018 V3.1

Tazkya Humaira
1
OUTLINE

INTRODUCTION
WHAT IS TESTING?
WHY IS TESTING NECESSARY?
SEVEN TESTING PRINCIPLES
TEST PROCESS
THE PSYCHOLOGY OF TESTING
2
INTRODUCTION

ISTQB is an internationally accepted and recognized certification for the software testing
profession, where the exams are administered online by authorized organizations through
testing providers.

https://www.istqb.org/

3
What is Testing?
4
What is Testing?

Testing is the process consisting of all lifecycle activities, both static and dynamic,
concerned with planning, preparation and evaluation of software products and related
work products to determine that they satisfy specified requirements, to demonstrate
that they are fit for purpose and to detect defects.

5
What is Testing?

The Testing Process


● A common misperception of testing is that it only consists of running tests, i.e.,
executing the software and checking the results.

Software testing is a process which includes many activities:
➔ Test planning
➔ Test analysis
➔ Test design
➔ Test implementation
➔ Test execution
➔ Test reporting
➔ Evaluation

6
What is Testing?

Test Types

DYNAMIC TESTING
Involves testing against a running component or system.

STATIC TESTING
Does not involve testing against a running system.

7
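To make the contrast concrete, here is a minimal Python sketch; the function, its defect, and the test are invented for illustration and are not taken from the syllabus. A dynamic test executes the code, while a static review finds the same defect without running it.

    # Hypothetical example: the same defect seen from a dynamic and a static angle.

    def apply_discount(price: float, percent: float) -> float:
        """Return the price after deducting the given percentage discount."""
        # Defect: the discount is added instead of subtracted.
        return price + price * (percent / 100)

    # DYNAMIC testing executes the component and compares actual vs. expected results.
    def test_apply_discount():
        assert apply_discount(100.0, 10.0) == 90.0   # fails at runtime, exposing the defect

    # STATIC testing examines the work product without executing it: a reviewer (or a
    # static analysis tool) reading the return statement against the requirement
    # "deduct the discount" can find the same defect without running anything.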
What is Testing?

Validation

● Another common misperception of testing is that it focuses entirely on verification
of requirements, user stories, or other specifications.
● While testing does involve checking whether the system meets specified requirements,
it also involves validation, which is checking whether the system will meet user and
other stakeholder needs in its operational environment(s).

8
What is Testing?

Typical Objectives of Testing


For any given project, the objectives of testing may include:

● To prevent defects by evaluating work products such as requirements, user stories, design, and code
● To verify whether all specified requirements have been fulfilled
● To check whether the test object is complete and validate if it works as the users and other
stakeholders expect
● To build confidence in the level of quality of the test object
● To find defects and failures, thus reducing the level of risk of inadequate software quality
● To provide sufficient information to stakeholders to allow them to make informed decisions, especially
regarding the level of quality of the test object
● To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the test
object’s compliance with such requirements or standards

9
What is Testing?

Typical Objectives of Testing


The objectives of testing can vary, depending upon the context of the component or system being tested,
the test level, and the software development lifecycle model. These differences may include, for
example:

● During component testing, one objective may be to find as many failures as possible so that the
underlying defects are identified and fixed early. Another objective may be to increase code coverage
of the component tests.
● During acceptance testing, one objective may be to confirm that the system works as expected and
satisfies requirements. Another objective of this testing may be to give information to stakeholders
about the risk of releasing the system at a given time.

10
What is Testing?

Testing and Debugging

• Testing
Testing deals with finding defects by provoking failures in the application or product.
This activity is performed by testers.

• Debugging
Debugging is a development activity that deals with analyzing these defects, finding the
root cause, and removing the cause of the defect. This activity is commonly performed by
developers.

11
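As a rough illustration of the hand-off between the two activities, consider this hypothetical Python/pytest sketch (the function and test names are assumptions):

    # Hypothetical example of where testing ends and debugging begins.

    def average(values: list[float]) -> float:
        # Defect: an empty list is not handled and causes a ZeroDivisionError.
        return sum(values) / len(values)

    # TESTING (tester's activity): design and execute a test that provokes the failure.
    def test_average_of_empty_list():
        assert average([]) == 0.0        # fails with ZeroDivisionError -> defect report

    # DEBUGGING (developer's activity): reproduce the failure, locate the defect
    # (the unguarded division) and remove its cause, for example:
    #     return sum(values) / len(values) if values else 0.0
    # The tester then re-runs the test (confirmation testing) to verify the fix.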
Why is Testing
Necessary?
12
Why is Testing Necessary?

● Rigorous testing of components and systems, and their associated documentation, can help
reduce the risk of failures occurring during operation.

● When defects are detected, and subsequently fixed, this contributes to the quality of the
components or systems.

● Software testing may also be required to meet contractual or legal requirements or industry-specific
standards.

13
Why is Testing Necessary?

Testing’s Contributions to Success

● Having testers involved in requirements reviews or user story refinement could detect
defects in these work products.

● Having testers work closely with system designers while the system is being designed
can increase each party’s understanding of the design and how to test it.

● Having testers work closely with developers while the code is under development can
increase each party’s understanding of the code and how to test it.

● Having testers verify and validate the software prior to release can detect failures that
might otherwise have been missed, and support the process of removing the defects
that caused the failures (i.e., debugging).

14
Why is Testing Necessary?

Quality Assurance and Testing

● While people often use the phrase quality assurance (or just QA) to refer to testing,
quality assurance and testing are not the same, but they are related.

● Quality assurance is typically focused on adherence to proper processes, in order to
provide confidence that the appropriate levels of quality will be achieved. When
processes are carried out properly, the work products created by those processes are
generally of higher quality, which contributes to defect prevention.

● Quality control involves various activities, including test activities, that support the
achievement of appropriate levels of quality. Test activities are part of the overall
software development or maintenance process.

15
Why is Testing Necessary?

Errors, Defects, and Failures

● Error (mistake)
A human action that produces an incorrect result
● Fault (defect, bug)
A manifestation of an error in software
- Also known as a defect or bug
- If executed, a fault may cause a failure
● Failure
Deviation of the software from its expected delivery or services
- In addition to failures caused by defects in the code, failures can also be caused by
environmental conditions
- For example, radiation, electromagnetic fields, and pollution can cause defects in firmware or
influence the execution of software by changing hardware conditions.

16
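A small, hypothetical Python sketch of how the three terms chain together:

    # ERROR (mistake): the programmer misreads the requirement
    # "orders of 50 or more ship for free" and types the wrong comparison operator.

    def shipping_cost(order_total: float) -> float:
        # DEFECT (fault, bug): '>' should be '>='; the mistake now lives in the code.
        return 0.0 if order_total > 50 else 4.99

    # FAILURE: the deviation only becomes visible when the defective code is
    # executed with an input that triggers it.
    print(shipping_cost(50.00))   # expected 0.0, actual 4.99 -> failure observed
    print(shipping_cost(80.00))   # 0.0 -> the defect stays hidden for this input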
Why is Testing Necessary?

Causes of Software Defects

Errors may occur for many reasons, such as:


● Time pressure
● Human fallibility
● Inexperienced or insufficiently skilled project participants
● Miscommunication between project participants, including miscommunication about requirements
and design
● Complexity of the code, design, architecture, the underlying problem to be solved, and/or the
technologies used
● Misunderstandings about intra-system and inter-system interfaces, especially when such
intra-system and inter-system interactions are large in number
● New, unfamiliar technologies

17
Why is Testing Necessary?

Defects, Root Causes and Effects

● The root causes of defects are the earliest actions or conditions that contributed to creating
the defects.

● Defects can be analyzed to identify their root causes, so as to reduce the occurrence of
similar defects in the future.

● By focusing on the most significant root causes, root cause analysis can lead to process
improvements that prevent a significant number of future defects from being introduced.

18
Seven Testing
Principles
19
Seven Testing Principles

1. Testing shows the presence of defects

2. Exhaustive testing is impossible

3. Early testing

4. Defects cluster together

5. Pesticide paradox

6. Testing is context dependent

7. Absence-of-errors is a fallacy

20
Seven Testing Principles
Testing shows the presence of defects,
not their absence

Testing can show that defects are present in the software, but no matter how many
defects we find, we can never prove that there are no more defects.
Furthermore, even if no defects are found, that is not a proof of correctness.

21
Seven Testing Principles

Exhaustive testing is impossible

Testing everything, which means all possible combinations of inputs and
preconditions, is generally not feasible.
Instead of attempting exhaustive testing, risk analysis, test techniques, and
priorities should be used to focus test efforts.

22
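A quick back-of-the-envelope calculation, using made-up field sizes, shows why exhaustive testing is impractical even for a tiny input form:

    # Hypothetical input form; the field sizes are invented, the point is the order of magnitude.
    field_values = {
        "age":        120,       # 1..120
        "country":    195,
        "currency":   180,
        "account_id": 2**32,     # any 32-bit identifier
    }

    combinations = 1
    for count in field_values.values():
        combinations *= count
    print(f"{combinations:.2e} input combinations")           # ~1.8e+16

    # Even at 1,000 automated test executions per second, running them all would take:
    years = combinations / 1000 / (60 * 60 * 24 * 365)
    print(f"about {years:,.0f} years of non-stop execution")  # several hundred thousand years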
Seven Testing Principles

Early testing saves time and money

To find defects early, or to prevent defects from being introduced, testers should be
involved as early as possible in the development lifecycle. Testing activities should
be coordinated with the corresponding development activities.
Testers are valuable contributors in reviews and should participate; this helps them
understand the requirements earlier and prepare test cases earlier in the life
cycle.

23
Seven Testing Principles

Defects cluster together

Defects are usually not distributed evenly: a small number of modules tends to contain
most of the defects, rather than the defects being spread across the whole system.

A tester should take this into account and focus a proportionally larger share of the
test effort on these defect-prone modules.

24
Seven Testing Principles

Beware of the pesticide paradox

If the same tests are repeated over and over again, eventually these tests no
longer find any new defects.

To overcome this “pesticide paradox”, test cases need to be regularly reviewed
and revised.

25
Seven Testing Principles

Testing is context dependent

● Testing is done differently in different contexts.

● Different software is tested with different strategies.

● For example, testing in an Agile project is done differently than testing in a
sequential software development lifecycle project.

26
Seven Testing Principles

Absence-of-errors is a fallacy

● Meeting the users’ needs and expectations is equally important.

● Finding and fixing defects does not help if the system that is built does not fulfill
the users’ needs and expectations.

27
Test Process

28
Test Process
Test Process in Context
● Contextual factors that influence the test process for an organization include, but
are not limited to:
● Software development lifecycle model and project methodologies being used
● Test levels and test types being considered
● Product and project risks
● Business domain
● Operational constraints (budgets and resources, timescales, complexity,
contractual and regulatory requirements)
● Organizational policies and practices
● Required internal and external standards

29
Test Process
Test Activities and Tasks
A test process consists of the following main groups of activities:

● Test planning
● Test monitoring and control
● Test analysis
● Test design
● Test implementation
● Test execution
● Test completion

30
Test Process
Test Activities and Tasks
Test planning

● Determining the scope and risks and identifying the objectives of testing.
● Defining the overall approach of testing.
● Scheduling test activities and assigning resources for the activities.
● Defining the amount, detail, and templates for documentation.
● Selecting metrics for monitoring and controlling.
● Defining entry and exit criteria.
● Deciding about test automation.

31
Test Process
Test Activities and Tasks
Test monitoring and control

● Test monitoring involves the on-going comparison of actual progress against planned progress using any
test monitoring metrics defined in the test plan.

● Test control involves taking actions necessary to meet the objectives of the test plan

● Test monitoring and control are supported by the evaluation of exit criteria, which are referred to as the
definition of done in some software development lifecycle models.

● For example, the evaluation of exit criteria for test execution as part of a given test level may include:
- Checking test results and logs against specified coverage criteria
- Assessing the level of component or system quality based on test results and logs
- Determining if more tests are needed (e.g., if tests originally intended to achieve a certain level of
product risk coverage failed to do so, requiring additional tests to be written and executed)
32
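As a sketch of how such exit criteria might be evaluated in practice — the metric names and thresholds below are assumptions, not prescribed by the syllabus:

    # Hypothetical exit criteria, as they might be defined in a test plan.
    EXIT_CRITERIA = {
        "min_requirement_coverage": 0.95,   # share of requirements with passing tests
        "max_open_critical_defects": 0,
        "min_pass_rate": 0.90,
    }

    def evaluate_exit_criteria(progress: dict) -> list[str]:
        """Return the list of unmet criteria; an empty list means the test level may exit."""
        unmet = []
        if progress["requirement_coverage"] < EXIT_CRITERIA["min_requirement_coverage"]:
            unmet.append("requirement coverage below threshold")
        if progress["open_critical_defects"] > EXIT_CRITERIA["max_open_critical_defects"]:
            unmet.append("critical defects still open")
        if progress["pass_rate"] < EXIT_CRITERIA["min_pass_rate"]:
            unmet.append("pass rate below threshold")
        return unmet

    # Made-up figures taken from test results and logs at a given point in time.
    progress = {"requirement_coverage": 0.97, "open_critical_defects": 1, "pass_rate": 0.93}
    print(evaluate_exit_criteria(progress))   # ['critical defects still open'] -> control action needed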
Test Process
Test Activities and Tasks
Test analysis

Test analysis includes the following major activities:


- Analyzing the test basis
- Evaluating the test basis and test items to identify defects of various types, such as ambiguities, omissions,
inconsistencies, inaccuracies, etc.
- Identifying features and sets of features to be tested
- Defining and prioritizing test conditions for each feature based on analysis of the test basis
- considering functional, non-functional, and structural characteristics, other business and technical factors,
and levels of risks
- Capturing bi-directional traceability between each element of the test basis and the associated test
conditions

33
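As a rough illustration, assuming a hypothetical requirement REQ-17 and made-up condition IDs, test analysis turns an element of the test basis into prioritized, traceable test conditions:

    # Hypothetical element of the test basis (a requirement); all IDs are made up.
    requirement = {
        "id": "REQ-17",
        "text": "A registered user can withdraw between 10 and 500 per day.",
    }

    # Test conditions describe WHAT to test; they are prioritized by risk and
    # kept bidirectionally traceable to the test basis element they cover.
    test_conditions = [
        {"id": "COND-17.1", "covers": "REQ-17", "priority": "high",
         "condition": "withdrawal within the allowed range is accepted"},
        {"id": "COND-17.2", "covers": "REQ-17", "priority": "high",
         "condition": "withdrawal above the daily limit is rejected"},
        {"id": "COND-17.3", "covers": "REQ-17", "priority": "medium",
         "condition": "withdrawal below the minimum amount is rejected"},
        {"id": "COND-17.4", "covers": "REQ-17", "priority": "low",
         "condition": "an unregistered user cannot withdraw at all"},
    ]
    # Concrete test cases and test data for these conditions are produced later, in test design.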
Test Process
Test Activities and Tasks
Test design

Test design includes the following major activities:


- Designing and prioritizing test cases and sets of test cases
- Identifying necessary test data to support test conditions and test cases
- Designing the test environment and identifying any required infrastructure and tools
- Capturing bi-directional traceability between the test basis, test conditions, and test cases

34
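Continuing the hypothetical REQ-17 example, test design turns one of those conditions into concrete test cases with boundary-value test data. The banking module and withdraw function below are assumptions standing in for the system under test (pytest style):

    import pytest

    # Assumed interface to the system under test; not a real library.
    from banking import withdraw   # hypothetical: returns True if the withdrawal is accepted

    # Test cases for condition COND-17.2 ("withdrawal above the daily limit is rejected"),
    # designed with boundary values as the concrete test data.
    @pytest.mark.parametrize("amount, expected", [
        (500.00, True),     # on the boundary: still accepted
        (500.01, False),    # just above the boundary: rejected
        (1000.00, False),   # far above the boundary: rejected
    ])
    def test_daily_withdrawal_limit(amount, expected):
        assert withdraw(user="registered-user", amount=amount) == expected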
Test Process
Test Activities and Tasks
Test implementation

Test implementation includes the following major activities:


- Developing and prioritizing test procedures, and, potentially, creating automated test scripts
- Creating test suites from the test procedures and (if any) automated test scripts
- Arranging the test suites within a test execution schedule in a way that results in efficient test execution
- Building the test environment (including, potentially, test harnesses, service virtualization, simulators, and
other infrastructure items) and verifying that everything needed has been set up correctly
- Preparing test data and ensuring it is properly loaded in the test environment
- Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test
procedures, and test suites.

35
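A minimal pytest-style sketch of these implementation activities; the fixture names, URL, and data file are assumptions:

    import json
    import pytest

    # Building the test environment: in a real project this fixture might start a
    # database container, a service virtualization layer, or a simulator, and verify
    # that everything needed is up before any test procedure runs.
    @pytest.fixture(scope="session")
    def test_environment():
        env = {"base_url": "http://localhost:8080"}   # hypothetical local test instance
        yield env
        # tear the environment down here once the whole suite has finished

    # Preparing test data and ensuring it is loaded before execution.
    @pytest.fixture()
    def accounts(test_environment):
        with open("testdata/accounts.json") as f:       # hypothetical prepared data file
            return json.load(f)

    # Test procedures grouped into a suite; markers support arranging an efficient
    # execution schedule (e.g., run the smoke suite first).
    @pytest.mark.smoke
    def test_login_with_valid_account(test_environment, accounts):
        assert accounts, "test data must be available before execution"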
Test Process
Test Activities and Tasks
Test completion
Test completion includes the following major activities:
● Checking whether all defect reports are closed, entering change requests or product backlog items for any
defects that remain unresolved at the end of test execution
● Creating a test summary report to be communicated to stakeholders
● Finalizing and archiving the test environment, the test data, the test infrastructure, and other testware for
later reuse
● Handing over the testware to the maintenance teams, other project teams, and/or other stakeholders who
could benefit from its use
● Analyzing lessons learned from the completed test activities to determine changes needed for future
iterations, releases, and projects
● Using the information gathered to improve test process maturity
36
Test Process
Test Work Product
Test planning work products
● Test planning work products typically include one or more test plans.
● The test plan includes information about the test basis, to which the other test work
products will be related via traceability information.

Test monitoring and control work products
● Test monitoring and control work products typically include various types of test reports,
including test progress reports produced on an ongoing and/or a regular basis, and test
summary reports.
● Test monitoring and control work products should also address project management
concerns, such as task completion, resource allocation and usage, and effort.

37
Test Process
Test Work Product
Test analysis work products
● Test analysis work products include defined and prioritized test conditions, each of which
is ideally bidirectionally traceable to the specific element(s) of the test basis it covers.
● For exploratory testing, test analysis may involve the creation of test charters.

Test design work products
● Test design results in test cases and sets of test cases to exercise the test conditions
defined in test analysis.
● Test design also results in:
- the design and/or identification of the necessary test data
- the design of the test environment
- the identification of infrastructure and tools
though the extent to which these results are documented varies significantly.
38
Test Process
Test Work Product

Test implementation work products
Test implementation work products include:
- Test procedures and the sequencing of those test procedures
- Test suites
- A test execution schedule
● Test implementation also may result in the creation and verification of test data and
the test environment.

Test execution work products
Test execution work products include:
● Documentation of the status of individual test cases or test procedures (e.g., ready to
run, pass, fail, blocked, deliberately skipped, etc.)
● Defect reports
● Documentation about which test item(s), test object(s), test tools, and testware were
involved in the testing

Test completion work products
Test completion work products include test summary reports, action items for improvement
of subsequent projects or iterations, change requests or product backlog items, and
finalized testware.
39
Test Process
Traceability between the Test Basis and Test Work Products

In addition to the evaluation of test coverage, good traceability supports:


● Analyzing the impact of changes
● Making testing auditable
● Meeting IT governance criteria
● Improving the understandability of test progress reports and test summary reports
to include the status of elements of the test basis (e.g., requirements that passed
their tests, requirements that failed their tests, and requirements that have pending
tests)
● Relating the technical aspects of testing to stakeholders in terms that they can
understand
● Providing information to assess product quality, process capability, and project
progress against business goals

40
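A simple sketch of bi-directional traceability with made-up requirement and test case IDs; mapping test basis elements to test cases makes coverage gaps, requirement status, and change impact easy to report:

    # Hypothetical traceability matrix: test basis elements mapped to the test cases
    # that cover them (all IDs are invented).
    traceability = {
        "REQ-17": ["TC-17.1", "TC-17.2", "TC-17.3"],
        "REQ-18": ["TC-18.1"],
        "REQ-19": [],                                   # coverage gap
    }
    test_results = {"TC-17.1": "pass", "TC-17.2": "fail", "TC-17.3": "pass", "TC-18.1": "pass"}

    def requirement_status(req: str) -> str:
        """Status of a test basis element, as reported in progress/summary reports."""
        cases = traceability[req]
        if not cases:
            return "no tests"                           # coverage evaluation
        return "failed" if any(test_results[c] == "fail" for c in cases) else "passed"

    for req in traceability:
        print(req, requirement_status(req))
    # REQ-17 failed, REQ-18 passed, REQ-19 no tests

    # Impact analysis works in the other direction: if REQ-17 changes, the matrix
    # immediately identifies TC-17.1..TC-17.3 as the tests that must be revisited.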
The Psychology
of Testing
41
The Psychology of Testing
Human Psychology and Testing

● An element of human psychology called confirmation bias can make it difficult to accept
information that disagrees with currently held beliefs. For example, since developers
expect their code to be correct, they have a confirmation bias that makes it difficult to
accept that the code is incorrect.
● Further, it is a common human trait to blame the bearer of bad news, and information
produced by testing often contains bad news.
● As a result of these psychological factors, some people may perceive testing as a
destructive activity, even though it contributes greatly to project progress and product
quality.
● Communicating test results in a constructive way helps reduce tensions between the
testers and the analysts, product owners, designers, and developers. This applies during
both static and dynamic testing.

42
The Psychology of Testing
Human Psychology and Testing

Testers and test managers need to have good interpersonal skills to be able to
communicate effectively about defects, failures, test results, test progress, and risks,
and to build positive relationships with colleagues. Ways to communicate well include
the following examples:
● Start with collaboration rather than battles.
● Emphasize the benefits of testing. For example, for the organization, defects found
and fixed during testing will save time and money and reduce overall risk to product
quality.
● Communicate findings on the product in a neutral, fact-focused way without
criticizing the person who created it.
● Try to understand how the other person feels and why they react the way they do.
● Confirm that the other person has understood what has been said and vice versa.

43
The Psychology of Testing
Tester’s and Developer’s Mindsets

● Developers and testers often think differently.


● Their activities have different objectives, which require different mindsets.
● A mindset reflects an individual’s assumptions and preferred methods for decision making and
problem solving. A tester’s mindset should include curiosity, professional pessimism, a critical eye,
attention to detail, and a motivation for good and positive communications and relationships.
● A developer’s mindset may include some of the elements of a tester’s mindset, but successful
developers are often more interested in designing and building solutions than in contemplating
what might be wrong with those solutions. In addition, confirmation bias makes it difficult
for them to become aware of errors in their own work.
● With the right mindset, developers are able to test their own code.
● Having some of the test activities done by independent testers increases defect detection
effectiveness, which is particularly important for large, complex, or safety-critical systems.

44
Thanks!
