
FAST – a Framework for Automating Software Testing: a Practical Approach


Abstract—Context: the quality of a software product can be directly influenced by the quality of its development process. Therefore, immature or ad-hoc test processes are unsuited for introducing systematic test automation and should not be used to support improving the quality of software. Objective: in order to conduct this research, the benefits of, limitations of, and gaps in automating software testing had to be assessed in order to identify best practices and to propose a strategy for systematically introducing test automation into software development processes. Method: to conduct this research, an exploratory bibliographical survey was undertaken so as to underpin the research with theory and the recent literature. After defining the proposal, two case studies were conducted so as to analyze the proposal in a real-world environment. In addition, the proposal was also assessed through a focus group with specialists in the field. Results: the proposal of a Framework for Automating Software Testing – FAST, which is a theoretical framework consisting of a hierarchical structure to introduce test automation. Conclusion: the findings of this research showed that the absence of systematic processes is one of the factors that hinder the introduction of test automation. Based on the results of the case studies, FAST can be considered a satisfactory alternative within the scope of introducing and maintaining test automation in software development.

Keywords—Software process improvement, software quality, software testing, test automation.

I. INTRODUCTION

It is understood that "the quality of a system or product is highly influenced by the quality of the process used to develop and maintain it" [1], a perspective in which process improvement is a necessary factor for developing quality software to be delivered to the market. In this sense, software testing has emerged as another aspect to be developed when talking about software quality, in which testing can be defined as "the dynamics of checking the behavior of a program from a finite set of test cases, properly selected from an infinite domain of executions" [2].

However, the software testing knowledge area is quite comprehensive, and this work focuses on test automation, which is understood as the use of software to perform the test activities [3] and is considered an important topic of interest that has been widely studied in the literature [4]-[9].

The benefits of automation can be observed in the long run, and its focus should be on increasing the coverage of tests and not only on reducing cost [8],[10],[11],[12]. Studies show that tests account for 50% or more of the total cost of the project [5]; while it may be costly to deliver a delayed product to the market, a defective product can be considered catastrophic [10].

Test automation has been proposed as a solution to reduce project costs [13] and is becoming increasingly popular with the need to improve software quality amid growing system complexity [14]. Test automation can be used to gain velocity [7], make tests repeatable [7] and manage more tests in less time [15]. Furthermore, it can be a very efficient way of reducing the effort involved in development, eliminating or minimizing repetitive tasks and reducing the risk of human error [6]. It can be considered an effective way of reducing effort and minimizing the repetition of activities in the software development lifecycle [9].

Many attempts have failed to achieve the real benefits of automation in a durable way [14], and some problems related to automation adoption can be listed [4], such as:
• Inadequate choice or absence of some types of test, which leads to inefficiency in the execution of tests;
• Incorrect expectations regarding the benefits of automation and the investment to be made;
• The absence of diversification of the automation strategy to be adopted;
• Use of test tools focused only on test execution, where other potential areas are overlooked.

It can be observed that there is a technical deficiency in many implementations of test execution automation, causing problems for the use and maintenance of automation systems [8]. In addition, there is no clear definition of how to design, implement and maintain a test automation system in order to maximize the benefits of automation in a given scope, although there is a demand for such a definition among developers and testers [8].

There is still a gap in the research related to test automation, given the absence of approaches and guidelines to assist in the design, implementation and maintenance of test automation approaches [16]. A multivocal review of the literature [17] presented research on the assessment of test process improvement maturity, noting that the following problems are relevant:
• Software testing processes remain immature and are conducted ad hoc;
• Immature practices lead to inefficiency in defect detection;
• Schedules and costs constantly exceed what was planned; and
• Testing is not being conducted efficiently.

Given the problems presented, identified both in the software industry and in the references studied, the problem addressed by this study is associated with the following research question: How should test automation be introduced and maintained in the software development process?

The general objective of this research is to propose a strategy for the systematic introduction of test automation practices in the context of a software development project. In addition, it has the following specific objectives:
• Analyze the failure factors of software testing automation deployment in organizations; and
• Identify good practices for introducing test automation mentioned in the literature.
II. BIBLIOGRAPHICAL REVIEW

A. Software Testing

Barr et al. [18] describe software testing as an activity whose purpose is to stimulate the system and observe its response, in which both the stimulus and the response have a value, which may coincide when the stimulus and response are true. The stimulus, in this scenario, corresponds to the activity of software testing, which can be performed through various forms and techniques.

Bertolino [19] comments on the importance of software testing engineering, whose aim is to observe the execution of a system to validate whether it behaves as intended and to identify potential failures. In addition, testing has been widely used in industry as a quality control, providing an objective analysis of software behavior.

"The work of testing a software can be divided between automatic testing and manual testing" [15], and the testing activity assists quality assurance by collecting information about the software studied [13]. Software testing includes activities that consist of designing test cases, running the software with test cases, and examining the results produced by the execution [13].

In this context, test automation addresses the automation of software testing activities, in which one of the reasons for using automated testing instead of manual testing is that manual testing consumes more resources while automated testing increases efficiency [20]. The decision of what should be automated should be made as early as possible in the software development process, in order to minimize the problems related to this activity [21].

According to Bertolino [19], test automation is a significant area of interest, whose objective is to improve the degree of automation, both through the development of better techniques for the generation of automated tests and through the automation of the test process.

Ramler and Wolfmaier [5] present an analysis of the economic perspective between manual and automated testing, commenting that, when implementing automation, often only the costs are evaluated and the benefits are ignored in the comparison with manual testing.

In addition, test automation is a suitable tool for quality improvement and cost reduction in the long run, as the benefits require time to be observed [15]. Also in this context, ISO/IEC/IEEE 29.119-1 [22] comments that it is possible to automate many of the activities described in its processes, in which automation requires the use of tools and is mainly focused on test execution. However, many activities can be performed through tools, such as:
• Management of test cases;
• Generation of test data;
• Statistical analysis;
• Generation of test cases;
• Monitoring and control of the test; and
• Implementation and maintenance of the test environment.

B. Test Automation

Software testing should efficiently find defects as well as be efficient in the sense that its execution can be performed quickly and at the lowest possible cost. In this scenario, test automation can directly support the reduction of the effort required to perform test activities, or increase the scope of the tests to be executed within a certain time frame [23].

Test automation effectively supports the project in achieving cumulative coverage [10], which, as defined by the author, is the fact that, over time, automated test cases are accumulated so that both existing and new requirements can be tested throughout the test lifecycle.

Fewster and Graham [23] also point out that testing and test automation are different, since the first refers to the ability to find and execute the most appropriate test cases for a particular context, given the innumerable possibilities of existing tests in a project. Test automation can also be considered a skill, but in a different context, because the demand, in this case, refers to the proper choice and correct implementation of the tests to be automated.

Test automation improves the coverage of regression tests due to the accumulation of automated test cases over time, thereby improving productivity, quality, and efficiency of execution [24]. Automation allows even the smallest maintenance changes to be fully tested with minimal team effort [23].

According to ISO/IEC/IEEE 29.119-1 [22], it is possible to automate several activities described in the Test Management and Dynamic Testing processes described in ISO/IEC/IEEE 29.119-2 [22], and although they are usually associated only with the test execution activity, there are many additional testing tasks that can be supported by software-based tools.
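As a concrete illustration of the unit-level end of this spectrum, the sketch below shows an automated test in JUnit 5 that exercises Barr et al.'s stimulus/response view [18]: each test supplies a stimulus and asserts on the observed response. The DiscountCalculator class and its rules are hypothetical, introduced here only to make the sketch self-contained; they do not come from the paper.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class DiscountCalculatorTest {

    // Hypothetical unit under test, defined inline to keep the sketch self-contained.
    static class DiscountCalculator {
        double apply(double price, double rate) {
            if (rate < 0.0 || rate > 1.0) {
                throw new IllegalArgumentException("rate must be within [0, 1]");
            }
            return price * (1.0 - rate);
        }
    }

    private final DiscountCalculator calculator = new DiscountCalculator();

    @Test
    void appliesDiscountRateToPrice() {
        // Stimulus: a price and a rate; response: the discounted price.
        assertEquals(90.0, calculator.apply(100.0, 0.10), 0.0001);
    }

    @Test
    void rejectsRatesOutsideTheValidRange() {
        // A failure-revealing stimulus is as much a test case as a passing one.
        assertThrows(IllegalArgumentException.class, () -> calculator.apply(100.0, 1.5));
    }
}

Once written, such a test is repeatable at negligible marginal cost, which is precisely the property the accumulated-coverage argument above [10], [24] relies on.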
III. RELATED WORK

The analysis of the studies related to this research was based on the literature review, the analysis of existing systematic reviews [25]-[27] and a multivocal review [17]. The results achieved were organized so that some approaches were classified as correlated works, which are those that present suggestions for test automation through more comprehensive proposals to improve the test process as a whole. Others were considered related works because they specifically address test automation processes.

From this research, the following correlated works were selected:
• Test Maturity Model – TMM [28];
• Test Improvement Model – TIM [29];
• Test Process Improvement – TPI [30];
• Software Testing Enhancement Paradigm – STEP [31];
• Agile Quality Assurance Model – AQAM [32];
• Brazilian Test Process Improvement – MPT.BR (2011) [33],[34];
• Test Maturity Model Integration – TMMI [35]; and
• Automated Test Generation – ATG [36].

Besides these, 2 related works were found:
• Maturity Model for Automated Software Testing – MMAST [37]; and
• Test Automation Improvement Model – TAIM [38].

Given this context, Fig. 1 presents a chronological view of the appearance of these works.

Fig. 1 Related and correlated work timeline

A. Correlated Works

The TMM [28] is a model composed of 5 levels of maturity: (1) Initial; (2) Phase Definition; (3) Integration; (4) Management and Measurement; and (5) Optimization, Defect Prevention and Quality Control. In its structure, each level contemplates objectives that are achieved through activities, tasks, and responsibilities. Level 5 has, as one of its objectives, the use of tools to support the planning and execution of the tests, as well as collecting and analyzing the test data. However, the presented approach is superficial and does not contemplate indications for the introduction and maintenance of test automation.

TIM [29] is a maturity model defined through two components: a framework that includes levels and key areas, and an evaluation procedure. There are 4 levels: (1) Baseline; (2) Effective cost; (3) Risk reduction; and (4) Optimization; in addition to 5 key areas: Organization, Planning and Monitoring, Testware, Test Cases and Review. In this context, Testware is the key area that includes the definition of configuration management of test assets and the use of tools to perform repeatable and non-creative activities. However, the systematization of the use of tools to perform the test automation activities is not detailed.

The TPI [30] is a model composed of 20 key areas, each with different levels of maturity, which are established from a maturity matrix, in which each level is described by checkpoints and suggestions of improvement. One of the key areas is called Test Automation, which describes that automation can be implemented to address the following checkpoints: (1) the use of tools at maturity level A; (2) automation management at level B; and (3) optimization of test automation at level C, where A is the initial level and C the highest. Although it addresses maturity in test automation, it neglects the evaluation of the context of automation, covering only the use of tools and the management of automation, since the scope of automation is more complex than the presented processes alone.

STEP [31] is a business process-oriented test process improvement framework that is organized by: (1) Business objectives; (2) Process objectives; and (3) 17 process areas. Test automation is present in the Test Execution process area, which suggests that test automation is introduced through the implementation of test automation scripts. However, there are no suggestions for practices that indicate how automation scripts should be implemented and introduced in the context of the project.

AQAM [32] is a model for quality assurance in an agile environment, comprising 20 key process areas (KPAs), which include guides, processes, best practices, templates, customizations and maturity levels. The purpose of identifying KPAs is to objectively assess the strengths and weaknesses of the important areas of the quality assurance process and then develop a plan for process improvement as a whole. Among the existing KPAs, 9 are directly related to test activities: (1) Test planning; (2) Test case management; (3) Defect analysis; (4) Defect report; (5) Unit test; (6) Performance test; (7) Test environment management; (8) Organization of the test; and (9) Test automation. Despite presenting KPAs related to the test process, they do not cover all levels of test automation for the context of a project, being restricted to unit and performance testing. Therefore, with regard to the introduction of test automation, the proposal is limited and, in addition, the existing documentation does not present a complete description of its practices.

MPT.BR [33],[34] addresses the improvement of the testing process throughout the product test life cycle. The model is composed of five maturity levels, and each maturity level is composed of process areas. In the scope of test automation, the model presents the Test Execution Automation (AET) process area, whose purpose is the definition and maintenance of a strategy to automate the execution of the test. This process area is composed of the following specific practices:
• AET1 - Define objectives of the automation regime;
• AET2 - Define criteria for selection of test cases for automation;
• AET3 - Define a framework for test automation;
• AET4 - Manage automated test incidents;
• AET5 - Check adherence to automation objectives; and
• AET6 - Analyze return on investment in automation.

Although the specific practices present a systematic way to introduce test automation, they are still vague with regard to identifying the moment at which automation should be performed. There is no specific information about the introduction of automation in the software development process, nor about which levels of testing can be automated. The written format is generic and comprehensive, which can suit every type of automation; however, it does not help in choosing where automation should start and what benefits can be achieved.

There is also the Tool Management (GDF) process area, whose objective is to manage the identification, analysis, selection, and implementation of tools in the organization, composed of the following specific practices:
• GDF1 - Identify necessary tools;
• GDF2 - Select tools;
• GDF3 - Conduct pilot project;
• GDF4 - Select tool's gurus;
• GDF5 - Define tool's deployment strategies; and
• GDF6 - Deploy tools.

Although GDF provides guidelines for using test tools and how they should be maintained, it does not provide objective suggestions on how tools can be used to support test automation, since the process area does not only address automation tools. Therefore, despite presenting a guide for introducing automation, it is generic and does not go into detail about the automation process.

The TMMI [35] is a model for the improvement of the test process, developed by the TMMi Foundation as a guide, reference framework and complementary model to CMMI version 1.2 [39]. TMMI follows the staged version of CMMI, and also uses the concepts of maturity levels for the assessment and improvement of the test process. The specific focus of the model is the improvement of the test process, starting from the chaotic stage and moving to a mature and controlled process through defect prevention.

TMMI is composed of 5 maturity levels: level 1 - Initial; level 2 - Managed; level 3 - Defined; level 4 - Measured; and level 5 - Optimization. Each maturity level presents a set of process areas necessary to reach maturity at that level, and each level is the baseline for the next.

Although it is a maturity model specifically for the test area and presents systematic ways to introduce the practice of software testing in the context of project development, it does not present a process area specifically dedicated to testing tools and/or automation. It does not include systematic suggestions to improve test automation, as described in the model:

"Note that the TMMi does not have a specific process area dedicated to test tools and/or test automation. Within TMMi tools are treated as a supporting resource (practices) and are therefore part of the process area where they provide support, e.g., applying a test design tool is a supporting test practice within the process area Test Design and Execution at TMMi level 2 and applying a performance testing tool is a supporting test practice within the process area Non-functional Testing at TMMi level 3" [35].

The ATG [36] is a process that was developed to complement the TPI [30], whose objective is the generation of automated tests from models. In this context, the creation of the ATG took place through the introduction of 4 process areas and modifications to some areas existing in the TPI. The models, in this scenario, need to be computable, i.e., processable by the computer, derived from, for example, requirements, design, source code, test objects, etc. If the model is not computable, it must be converted as soon as possible in the software development process, preferably in parallel with the software design. The ATG is limited in the scope of its proposal because, in addition to focusing only on the generation of tests based on models, it does not explain how the models should be generated for the different levels of test automation. In addition, in industry, some limitations have been encountered during the adoption of ATG techniques, due in part to the difficulty of creating and maintaining test models, which may further hinder the adoption of the approach.

B. Related Works

The Maturity Model for Automated Software Testing (MMAST) [37] is a model that was developed for manufacturers of computerized medical equipment, and its purpose is to define the appropriate level of automation into which an equipment manufacturer fits. It consists of 4 levels of maturity:
• Level 1 - Accidental automation: characterized by an ad hoc, individualistic and accidental process of carrying out the activities, in which there is no documentation of the important information, which remains restricted to key people in the organization. Test automation is performed in an ad hoc fashion and is not based on process and/or planning.
• Level 2 - Beginning automation: associated with the use of capture and replay tools that reproduce the responses of the system under test. Documentation begins by recording software and test requirements, the writing of which provides the basis for level 3 implementation.
• Level 3 - Intentional automation: the focus is the execution of the automation defined and planned for the context of a project, based on the requirements and scripts of the automated tests, and it assumes that the test team is part of the project team. The model indicates that level 3 is suitable for the manufacture of medical equipment.
• Level 4 - Advanced automation: presented as an improved version of Level 3 with the inclusion of the post-delivery defect management practice, in which defects are captured and sent directly to the processes of correction, test creation, and test regression.

The model also presents a checklist to support identifying at what level the test process should be automated, based on the following questions:
• How big are your software projects?
• How complex is your product?
• What financial risk does your product represent for the company?
• What risk does your product pose to the patient and the operator?

From the checklist, the automation level is recommended. However, despite being a maturity model, it does not present key areas or process areas, and its description is abstract and does not contemplate aspects of how test automation can actually be introduced.
The Test Automation Improvement Model (TAIM) [38] is a model for process improvement based on measurements to determine if one activity is better than another, focusing on the calculation of the return on investment. In addition, it is based on the view that the testing process can be fully or partially automated, and it provides insight into how it is possible to fully automate the testing process by defining 10 key areas (KA):
1. Test management;
2. Testing requirements;
3. Test specification;
4. Implementation of the test;
5. Automation of the test process;
6. Execution of the test;
7. Test verdicts;
8. Test environment;
9. Testing tools; and
10. Fault and fault management.

In addition, the model includes a Generic Area (GA), which consists of: Traceability; Defined measurements such as Efficiency, Effectiveness, and Cost (return on investment); Analysis, e.g. Trends and Data Aggregation; Testware; Standards; Quality of Automation; and Competence.

However, "TAIM can be viewed as work-in-progress and as a research challenge" [38], as it is still very generalized and the evolution of the KAs is presented as future work. In addition, the model is based on measurements but does not describe them, nor does it present how they should be used and how they are related to the KAs and GA.

Although it presents KAs as a way of introducing test automation, it is not clear how they can be systematically introduced in the software development process, since they are presented in isolation as a guide to best practices. Moreover, the model fails to provide information on how it can be adapted to the software development context.

IV. FAST

According to the ISO/IEC/IEEE 24.765 [40] software engineering dictionary, a framework can be defined as a model that can be refined or specialized to provide parts of its functionality in a given context. This concept has been adapted to FAST for the purpose of establishing a set of practices so that the introduction of test automation is consistent, structured and systematic for the project that implements it.

Although the concept of a framework is more related to the technical component for the construction of systems, it was adapted to assume the perspective of a theoretical framework that contemplates a hierarchical structure for the implementation of test automation, through practices that can be instantiated according to the specific and distinct needs of each project context.

This, in turn, differs from a maturity model in that there is no obligation to implement its process areas, since the concept of maturity/capability levels does not apply to the proposed framework.

The conceptual framework of FAST is composed of elements that have been defined according to CMMI-DEV [1], as shown in Fig. 2 and described in Table I.

Fig. 2 FAST's conceptual structure

TABLE I
FAST CONCEPTUAL ELEMENTS

• Automation Level: adapted from the concept of test level, described as an independent test effort with its own documentation and resources (IEEE 829, 2008). The automation test level, in turn, can be understood as the scope in which the automation test processes will take place.
• Area: the element that aggregates process areas, considering the practical aspects of the model that emphasize the main relationships between process areas. It corresponds to the areas of interest, into which FAST is divided in two, considering both the technical and support aspects required to introduce automation into a software project. This element was adapted from the CMMI-DEV category concept (2010).
• Area Objective: corresponds to the purpose of the area to which it is related.
• Process Area: a set of practices related to a given area that, when implemented collectively, satisfy a set of objectives considered important for the improvement of that area (SEI, 2010).
• Process Area Purpose: includes the description of the objectives of the process area.
• Practice: an element that presents the description of the activities considered important for the process area to be achieved (SEI, 2010). Practices are described in a way that shows the sequence between them.
• Subpractice: a detailed description that provides guidance on how to interpret and implement a particular practice (SEI, 2010).
• Work product: suggestions of artifacts associated with a particular practice, including information on the outputs to be elaborated from its execution.

FAST was conceived considering two distinct but complementary aspects, composing its proposal from its Technical and Support areas, each of which has a set of process areas, as shown in Fig. 3. In this context, the proposed framework is structured in such a way that the areas aggregate process areas specifically organized to maintain coherent semantics with the objective of each area.

The process areas, in turn, were defined through objectives, practices with their respective subpractices, and work products, in order to present suggestions on how automation can be introduced in the context of a software development project.
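The hierarchy of Table I can be read as a simple containment structure: areas aggregate process areas, which aggregate practices, which carry subpractices and work products. The sketch below renders that reading as plain Java types; it is our own illustration of the conceptual structure, not an artifact published with FAST, and the purpose strings are paraphrases.

import java.util.List;

public class FastStructureSketch {

    enum AutomationLevel { UNIT, INTEGRATION, SYSTEM, ACCEPTANCE }

    record Practice(String name, List<String> subpractices, List<String> workProducts) {}

    record ProcessArea(String name, String purpose, List<Practice> practices) {}

    record Area(String name, String objective, List<ProcessArea> processAreas) {}

    public static void main(String[] args) {
        // One branch of the hierarchy, populated from Table II's Unit Testing rows.
        Area technical = new Area(
            "Technical",
            "Establish and maintain mechanisms for the automatic accomplishment of software testing",
            List.of(new ProcessArea(
                "Unit Testing",
                "Design, implement, execute and finish automated unit tests",
                List.of(new Practice(
                    "Design Unit Testing",
                    List.of("Identify input and output data types", "Specify test cases"),
                    List.of("Unit test design", "Specification of the test cases"))))));
        System.out.println(technical);
    }
}

Because FAST imposes no maturity or capability levels, nothing in this structure orders the process areas; a project instantiates only the branches its context requires.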
Fig. 3 FAST Process Areas

A. FAST Technical Area

The Technical Area was created to contemplate test automation practices through its process areas, so that they can be adapted according to the context in which they will be implemented. Its purpose is to establish and maintain mechanisms for the automatic accomplishment of software testing, in order to support the creation of project environments that are efficient for developing, managing and maintaining automated tests in a given domain.

In addition, this area is focused on presenting practical aspects for systematically performing automated tests, composing an architecture based on process areas that can be integrated into the product development process, providing strategic support for process improvement through test automation.

The Technical Area is composed of the following Process Areas: (1) Unit Testing, (2) Integration Testing, (3) System Testing and (4) Acceptance Testing, in which each one is presented in an independent and cohesive manner and can be instantiated according to the level of automation required by the context in question; therefore, there are no prerequisites among them. The level of automation represents the level of abstraction at which the automation will be performed, as represented in Fig. 4.

Fig. 4 Technical Area Abstraction

In addition, each process area presents practices that include designing, implementing, and running the automated tests. The descriptions of each process area are presented in the following sections.

B. FAST Support Area

The Support Area was created with the purpose of suggesting practices that support the execution of the Technical Area processes, in order to support the systematic implementation, execution, and monitoring of test automation in the project.

The Support Area is comprised of the following Process Areas (PA): (1) Test Project Planning, (2) Test Project Monitoring, (3) Configuration Management, (4) Measurement and Analysis, (5) Requirements and (6) Defect Management, which have practices that can relate to each other.

In addition, its PAs permeate all levels of automation in the Technical Area and provide support for the execution of automation through the proposed process areas, as represented by Fig. 5.

Fig. 5 Support Area

Based on the proposal's two areas, the framework was described so as to contemplate process areas, practices, subpractices and work products. However, given the length of its documentation and the space limitations of this article, only an overview of each process area is presented, as shown in Tables II and III.

V. CASE STUDY

A. Case Study Plan

The objective of a case study is the definition of what is expected to be achieved as a result of its execution [41]. In this context, according to Robson's [42] classification, the purpose of this case study is improvement: in the context of this work, feedback is sought from the use of FAST in a real and contemporary software project development environment, which has a high degree of realism.

In this scenario, research questions are statements about the knowledge being sought or expected to be discovered during the case study [41]. The research questions defined for this case study were as follows:

RQ1 - Has FAST supported the systematic introduction of test automation in the project?
RQ2 - What were the positive aspects (success factors) of using FAST to introduce test automation into the project?
RQ3 - What were the negative aspects (failure factors) arising from the use of FAST in the implementation of test automation in the project?
RQ4 - What are the limitations for introducing test automation into the project, from using FAST?
TABLE II
FAST TECHNICAL AREA DESCRIPTION

Process Area: Unit Testing
• Design Unit Testing
  Subpractices: identify additional requirements and procedures that need to be tested; identify input and output data types; design the architecture of the test suite; specify the required test procedures; specify test cases.
  Work products: unit test design; specification of test procedures; specification of the test cases.
• Implement Unit Testing
  Subpractices: obtain and verify the test data; obtain test items; implement unit tests.
  Work products: test data; test support resources; configuration of test items; summary of tests.
• Execute Unit Testing
  Subpractices: perform unit testing; determine unit test results.
  Work products: summary of tests; test execution log; revised test specifications; revised test data.
• Finish Unit Testing
  Subpractices: evaluate the normal completion of the unit test; evaluate abnormal completion of the unit test; increase the unit test set.
  Work products: test summary; revised test specifications; additional test data.

Process Area: Integration Testing
• Establish Test Integration Techniques
  Subpractices: establish integration test granularity; select integration test design techniques.
  Work products: integration test project.
• Design Integration Testing
  Subpractices: derive test conditions; derive test coverage items; derive the test cases; assemble test sets; establish test procedures.
  Work products: specification of the test design.
• Implement Integration Testing
  Subpractices: implement integration test scripts.
  Work products: integration test scripts.
• Execute Integration Testing
  Subpractices: run integration test scripts; determine test results.
  Work products: summary of tests; results of implementation.
• Finish Integration Testing
  Subpractices: evaluate normal completion of the integration test; evaluate abnormal completion of the integration test; increase the integration test suite; perform post-conditions for test cases.
  Work products: summary of tests; test data.

Process Area: System Testing
• Establish System Testing Types and Techniques
  Subpractices: select system test types; select techniques for the system test project; select data generation techniques; select techniques for generating scripts.
  Work products: test project.
• Design System Testing
  Subpractices: derive test conditions; derive test coverage items; derive the test cases; assemble test sets; establish test procedures.
  Work products: specification of the test design.
• Implement System Testing
  Subpractices: implement the system test scripts.
  Work products: system test scripts.
• Execute System Testing
  Subpractices: run system test scripts; determine test results.
  Work products: summary of tests; results of implementation.
• Finish System Testing
  Subpractices: evaluate normal system test completion; evaluate abnormal system test completion; increase the system test set; perform post-conditions for test cases.
  Work products: summary of tests; test data.

Process Area: Acceptance Testing
• Establish Acceptance Testing Criteria
  Subpractices: define type of acceptance test; define acceptance criteria.
  Work products: test design.
• Design Acceptance Testing
  Subpractices: derive test conditions; derive test coverage items; derive test cases; assemble test sets; establish test procedures.
  Work products: specification of the test design.
• Implement Acceptance Testing
  Subpractices: implement the acceptance test scripts.
  Work products: acceptance test scripts.
• Execute Acceptance Testing
  Subpractices: run acceptance test scripts; determine test results.
  Work products: test summary; results of implementation.
• Finish Acceptance Testing
  Subpractices: evaluate the normal completion of the acceptance test; evaluate abnormal completion of the acceptance test; increase the acceptance test set; perform post-conditions for test cases.
  Work products: test summary; test data.
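To suggest how the Unit Testing practices of Table II might surface as concrete artifacts, the sketch below folds the design step (the specified test cases as data rows), the implementation step (the test method) and the execution step (running and determining results) into one parameterized JUnit 5 test. The roundGrade function is a hypothetical unit under test of our own, not an example from the paper.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class GradeRoundingTest {

    // Hypothetical unit under test.
    static int roundGrade(double grade) {
        return (int) Math.round(grade);
    }

    // Design Unit Testing: each CsvSource row is a specified test case, and
    // the rows double as the "test data" work product.
    @ParameterizedTest
    @CsvSource({
        "7.49, 7",
        "7.50, 8",
        "0.00, 0",
        "10.00, 10"
    })
    // Execute Unit Testing: performing the test and determining the result.
    void roundsGradesToTheNearestInteger(double input, int expected) {
        assertEquals(expected, roundGrade(input));
    }
}

Under this reading, the Finish Unit Testing practice corresponds to reviewing such suites after normal or abnormal runs and growing the row set as new requirements arrive.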
TABLE III
FAST SUPPORT AREA DESCRIPTION

Process Area: Test Project Planning
• Assess Product Risks
  Subpractices: define product risk categories; identify risks; analyze risks.
  Work products: analytical risk structure (EAR); product risk chart; risk exposure matrix.
• Define Test Automation Strategy
  Subpractices: identify the scope of test automation; define the test automation strategy; define input criteria; define exit criteria; set test suspension criteria; define test restart criteria.
  Work products: test automation strategy; test automation criteria.
• Establish Estimates
  Subpractices: establish the project analytical framework (EAP); define the test life cycle; determine size and cost estimates.
  Work products: project analytical framework; life cycle of the automation project; estimate of the size of the test automation project; cost estimate of the test automation project.
• Develop Test Automation Plan
  Subpractices: plan the schedule; plan human resources; plan the involvement of project stakeholders; identify project risks; establish the plan.
  Work products: timeline of the test automation project; human resources matrix; stakeholder engagement matrix; project risks.

Process Area: Test Project Monitoring
• Monitor Project
  Subpractices: monitor the progress of the project; identify deviations from the project.
  Work products: test project follow-up worksheet.
• Manage Corrective Actions
  Subpractices: record corrective actions; follow corrective actions until closure.
  Work products: test project follow-up worksheet.

Process Area: Configuration Management
• Establish Test Automation Environment
  Subpractices: establish environmental requirements; implement the test automation environment; maintain the test automation environment.
  Work products: configuration management plan.
• Establish Baselines of the Test Automation Project
  Subpractices: identify configuration items; establish the configuration management environment; generate baselines and releases.
  Work products: configuration management plan; configuration management system.
• Track and Control Changes
  Subpractices: track change requests; control configuration items.
  Work products: configuration management system.

Process Area: Measurement and Analysis
• Establish Mechanisms of Measurement and Analysis
  Subpractices: establish test automation measurement objectives; specify measurements of test automation; specify procedures for collecting and storing measurements; specify procedures for analyzing measurements.
  Work products: test automation measurement plan.
• Provide Test Automation Measurement Results
  Subpractices: collect automation measurement data; analyze automation measurement data; communicate results; store data and results.
  Work products: test automation measurement plan.

Process Area: Requirements
• Define Automation Requirements
  Subpractices: define objective criteria for assessing requirements; elicit automation requirements.
  Work products: criteria for analysis of requirements; automation requirements.
• Define Traceability Between Automation Requirements and Their Work Products
  Subpractices: determine the traceability scheme between requirements and automation work products; define bidirectional traceability of requirements.
  Work products: conceptual traceability scheme; traceability matrix.
• Manage Automation Requirement Changes
  Subpractices: manage test automation scope changes; maintain bidirectional traceability of test automation requirements.
  Work products: request for change; impact analysis; traceability matrix.

Process Area: Defect Management
• Establish Defect Management System
  Subpractices: define the defect management strategy; define a defect management system; maintain the defect management system.
  Work products: defect state machine; defect management system.
• Analyze Test Results
  Subpractices: record, classify and prioritize defects; analyze the root cause of the selected defects.
  Work products: defect registration; root cause analysis of defects.
• Establish Actions to Eliminate the Root Cause of Defects
  Subpractices: propose corrective actions; monitor the implementation of corrective actions.
  Work products: corrective actions.
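Several of the work products in Table III are essentially bookkeeping structures. As one example, the sketch below implements a bidirectional traceability matrix in the spirit of the Requirements process area; FAST prescribes the work product, not any particular representation, so the types and method names here are our own assumptions.

import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TraceabilityMatrix {

    // Forward direction: automation requirement -> covering work products (e.g. scripts).
    private final Map<String, Set<String>> byRequirement = new HashMap<>();
    // Backward direction: work product -> automation requirements it covers.
    private final Map<String, Set<String>> byWorkProduct = new HashMap<>();

    public void link(String requirementId, String workProductId) {
        byRequirement.computeIfAbsent(requirementId, k -> new HashSet<>()).add(workProductId);
        byWorkProduct.computeIfAbsent(workProductId, k -> new HashSet<>()).add(requirementId);
    }

    // Supports "manage automation requirement changes": requirements that no
    // automated work product covers yet are candidates for (re)planning.
    public Set<String> uncovered(Collection<String> allRequirementIds) {
        Set<String> missing = new HashSet<>(allRequirementIds);
        missing.removeAll(byRequirement.keySet());
        return missing;
    }

    public Set<String> coveredBy(String workProductId) {
        return byWorkProduct.getOrDefault(workProductId, Set.of());
    }
}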
These questions were defined with the purpose of consolidating information that could serve as input to analyze whether FAST is a mechanism that can be used to answer the research question of this work: How should test automation be introduced and maintained in the software development process?

B. Case Definition

Case 1 occurred in the context of a project for the development of a product for academic management, offered as Software as a Service (SaaS), with functionalities aimed at public and private schools, covering the management of teaching units, classes, grades, student enrollment, school year closing, among others. The product is based on the following technologies:
• PHP 7 (development language);
• Laravel (framework for PHP development);
• Redis (in-memory data structure store used for caching);
• ElasticSearch (search tool to handle large amounts of data in real time);
• PostgreSQL (database);
• Git (tool for project versioning);
• VueJS (framework for JavaScript);
• Bootstrap (framework for CSS/HTML);
• Gulp (task runner); and
• NGinx (HTTP server).

The team that worked on Case 1 was composed of 8 participants who worked as Product Owner, Software Engineer, and Scrum Master. The age of the group varied from 22 to 38 years old, and the group included both men and women.

Case 2 occurred in the context of the development of a software product that centralizes the user identities of several systems of the organization, developed to meet the requirements of the OAuth 2 protocol, in order to optimize the management of logins for the various systems and to facilitate the integration of new systems. The product is based on the following technologies:
• Java 8 (backend programming language);
• Tomcat (Web container);
• Spring MVC (framework for development in Java);
• PostgreSQL (database used for project data);
• Oracle (database used for educational ERP data);
• RESTful Web services (Web systems interoperability standard);
• Angular Material (framework for JavaScript front-end development); and
• Gradle (project configuration manager).

The team that worked on Case 2 was composed of 6 participants who worked as Project Manager, Software Architect, Team Leader and Developer. The age of the group varied from 26 to 33 years old, and the group included both men and women.

C. Data Collection Plan

The methods for collecting data derived from the case studies were interviews and metrics.

The interview was selected to collect the qualitative data generated from the introduction of the automated test system by the FAST implementation in the context of the 2 selected Cases. In this scenario, the interviews were planned to be semi-structured [43], with previously planned questions whose order may vary according to the course of the interview. The interview protocol was organized according to the funnel model principle [44], in which the most general questions were initially addressed and then refined into more specific ones, as described below:

Generic Questions:
Characterization of the participant, considering aspects such as level of experience, demographic data, individual interests, and technical skills.

Specific Questions:
1. What do you think of the organization and goals of the automation levels?
2. Are they suitable for introducing test automation in the project?
3. Do you think the automation levels supported the planning of the test automation strategy?
4. What do you think of the Support Area process areas (Planning, Monitoring, Configuration Management, Requirements, Measurement, Defect Management)?
5. Are they suitable to support test automation in the project?
6. Did you miss any process area? Is there a need to add some process area to the Support Area?
7. Did you miss any practice?
8. Did you miss the specification and/or description of which aspects of FAST to use?
9. What do you think of the process areas of the Technical Area (Unit Test, Integration Test, Systems Test, and Acceptance Test)?
10. Are they adequate to support the test automation in the project?
11. Did you miss any process area? Is there a need to add some process area to the Technical Area?
12. Did you miss any practice?
13. Did you miss the specification and/or description of which aspects of FAST to use?

General:
14. Did FAST facilitate the systematic introduction of test automation in the project?
15. What are the positive aspects (success factors) of using FAST to introduce test automation into the project?
16. What are the negative aspects (failure factors) resulting from the use of FAST in the implementation of test automation in the project?
17. What are the limitations for introducing test automation into the project from the use of FAST?
18. How do you evaluate the feasibility of FAST?
19. How do you evaluate the integrity of FAST?
20. In relation to the distribution of process areas between the Technical Area and the Support Area, do you consider it adequate for the implementation of automation in the project?
21. Do you consider FAST adequate to support the introduction of test automation in your project?
22. What are the difficulties of using FAST in your project?

For the definition of the metrics, the Goal Question Metric (GQM) method was used [45],[46]; the specific metrics are described as follows.
1. Project size: project scope count through the use of the function point technique.
2. Size of the test automation planning: planned project scope count for automation, defined by the test automation strategy and separated by level of automation.
3. Size of the accomplished project automation: count of the scope of automation carried out in the project, separated by levels of automation, according to the test automation strategy.
4. Test automation effort in the pilot project: count of the effort expended by the project participants.
5. Number of defects: the number of defects found from the execution of the test automation.
6. Return on investment (ROI): the return on the investment made in the test automation project.

D. Collected Data

To collect the interview data, participants were selected based on a non-probabilistic sample, considered the most appropriate method for qualitative research, through intentional sampling, based on the assumption that the researcher wants to understand a certain vision/understanding and, therefore, should select a sample from which the maximum can be learned [43]. Table IV presents information related to the execution of the interviews in the case studies.

TABLE IV
CASE STUDY INTERVIEW INFORMATION

Date        Participant  Case  Duration
03/29/2017  P2           C1    32 min
03/29/2017  P3           C1    44 min
04/18/2017  P1           C1    58 min
05/11/2017  P3           C2    48 min
05/18/2017  P1           C2    56 min

In addition to the qualitative data from the interviews, metrics were also collected, as detailed in Table V.

TABLE V
CASE STUDY METRIC INFORMATION

Metric  Case 1                   Case 2
1       1,309 FP                 24 FP
2       Unit Testing: 1,309 FP   Unit Testing: 12 FP; Integration Testing: 20 FP
3       Unit Testing: 383 FP     Unit Testing: 7 FP; Integration Testing: 12 FP
4       244 h                    280 h
5       0                        0
6       Not collected            Not collected

E. Data Analysis

The process of data analysis is dynamic and recursive, and it becomes more extensive throughout the course of the study [43]. The quantitative data, from the collection of the metrics, served to characterize the scope of each case study project and are analyzed separately by Case.

In relation to Case 1, the project had already started 14 months prior to the beginning of the case study and, therefore, its size of 1,309 FP (metric 1) considered all the functionalities developed from initiation until the moment of the count, which occurred 2 months after the case study's inception. As this was an ongoing project, whose requirements were still being defined and implemented, the project scope increased until the end of the case study; but due to staff limitations and availability, the project size count could not be updated.

For metric 2, automation planning size, its value was equal to the size of the project because the planned automation strategy was that the entire scope of the project should have automated unit testing. For strategic organizational reasons, and as a way to gradually introduce test automation, the project prioritized only the automation of the unit tests. The automation strategy did not restrict planning to automating tests at the integration and system levels, but these were not performed during the period in which the case study was analyzed. In spite of this, Case 1 did not count the planned scope for the integration and system levels and, therefore, metric 2 contemplated only the planned size for unit tests. Such counting was not performed because it was a more complex activity, which would have required a longer time. The complexity of this count is mainly associated with integration testing, since it relates to several functionalities and is directly associated with the system architecture.

In this context, in the case study period, automation was introduced only at the unit test level, in which 383 FP (metric 3) were automated and executed out of a total of 1,309 FP; that is, 29.26% of the unit tests were automated. It is worth mentioning that, although the case study started on 21/12/2016, automation itself did not begin on this date, since Case 1 first invested in planning, configuration of the environment and understanding of FAST, so that automation could be started.

Metric 4 brought the value of 244 hours spent on the project, including training, follow-up meetings, planning, design, implementation and execution of automated unit testing. This number represents 11.14% of all the effort spent in this period, whose sum was 2,191 hours spent by the whole team.

In relation to the number of defects, metric 5, Case 1 did not report a defect, due to the fact that defects resulting from unit test automation do not generate a fault record in the tool: this automation was performed by the developer, and the moment a defect was found it was directly repaired. Thus, the fact that metric 5 is zero does not mean that the automation of the unit tests found no defects, but rather that they were not accounted for, due to the organization's strategy of not including in its process the formal registration of defects from unit test automation.

In the scope of the ROI, metric 6, this could not be collected by making use of historical data on the cost of defect correction without test automation, given that the organization does not have such data and therefore the calculation cannot be generated.

The Case 2 project started in conjunction with the case study, and although the reported project size is only 24 FP (metric 1), it is complex and highly critical given its scope of integrating various systems.
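The Case 1 percentages reported above can be re-derived directly from the collected metrics; the snippet below is a reader-side arithmetic check, not code from the study.

public class Case1MetricCheck {
    public static void main(String[] args) {
        double automatedUnitScope = 383.0;  // metric 3, FP
        double projectSize = 1309.0;        // metric 1, FP
        double automationEffort = 244.0;    // metric 4, hours
        double totalEffort = 2191.0;        // team total in the period, hours

        // 383 / 1309 = 0.2926 -> 29.26% of the unit-test scope automated.
        System.out.printf("Automated unit scope: %.2f%%%n",
                100.0 * automatedUnitScope / projectSize);
        // 244 / 2191 = 0.1114 -> 11.14% of the period's effort went to automation.
        System.out.printf("Automation effort share: %.2f%%%n",
                100.0 * automationEffort / totalEffort);
    }
}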
The planned automation strategy for this case included the automation of unit and integration tests, with scopes of 12 FP and 20 FP, respectively (metric 2). From this scope, 7 FP (metric 3) were automated at the unit test level, that is, 58.33% of the planned scope. For the integration tests, 12 FP (metric 3) were automated, representing 60% of the predicted scope.

There were 180 hours (metric 4) spent planning, designing, implementing and executing the automated tests; the total number of project hours during the same period was 900 hours and, therefore, automation accounted for 20% of the project effort.

No formal defects were opened in this period, so metric 5 is zero, since the project does not have formalized defect management. In the context of metric 6, similar to Case 1, the reasons were the same for not having collected the ROI.

In addition, the diagnosis performed after the FAST implementation showed that the process areas of Unit Testing, Integration Testing, Test Project Planning, Test Project Monitoring, and Configuration Management improved their practices through the inclusion of test automation.

In order to analyze the interviews, the collected data were codified and classified; the challenge was to construct categories that captured the recurrent patterns permeating all the data originating from the interviews. For this, the audio recordings were heard and transcribed using Microsoft Excel, and from the codes generated in the first interview, categories were pre-established for the classification of the following interview. When the following interview was analyzed, from the data coding, an attempt was made to fit the codes into one of the categories, to evaluate whether the information recurred. If necessary, new categories were created. This categorization process was repeated for all interviews and, in the end, the list of categories was refined, excluding possible redundancies, for the generation of the conceptual scheme represented in Fig. 6.

Fig. 6 Data Analysis Categories

The categories were defined according to the criteria proposed by Merriam [43], considering the following aspects:
• They must be aligned with and answer the research questions;
• They must be comprehensive, so that there are sufficient categories and all relevant data are included;
• They must be mutually exclusive;
• The name should be as sensitive as possible to the context to which it is related; and
• They must be conceptually congruent, where each level of abstraction should categorize sets of the same level.

F. Threats to Validity

Conducting research on real-world issues implies an exchange between the level of control and the degree of realism; a realistic situation is often complex and non-deterministic, which makes it difficult to understand what is happening, especially for studies with explanatory purposes [44]. This scenario is characteristic of the case study, in which the limitations are in place and require a consolidated analysis to minimize their impact on the results obtained.

Among the limitations of the case study, it can be mentioned that in neither of the cases was FAST completely implemented, which limits the analysis of its results to the practical experiences of the areas of Unit Testing, Integration Testing, Test Project Planning, Test Project Monitoring, and Configuration Management. This limitation, in turn, had to be accepted within the research, since the researcher could not interfere in the project so that the framework could be fully implemented.

In addition, it was not possible to ensure that participants read all the FAST documentation, and it was observed that the focus of the responses was based on practical experiences with scope restricted to the process areas deployed. While aware of this fact, the questions throughout the interview were not limited and were comprehensive for the entire FAST scope.

Another limitation of this study is the fact that the FAST implementation was performed in 2 cases in which software development processes already existed. Although its proposal is not limited to this scope, it is not possible to conclude how it would be implemented in a completely ad-hoc environment.

According to Easterbrook et al. [47], the major limitation of case studies is that data collection and analysis are more open to the researcher's interpretation and bias; to minimize this threat to validity, an explicit framework for selecting and collecting data is required, and these activities were performed by means of a carefully detailed protocol followed throughout the study.

Another threat raised was the possibility of bias in data analysis within a specific domain of software development; so that it could be minimized, 2 different scenarios were selected for this study, in which the absence of certain information in one can be complemented by its existence in the other.

However, the researcher's participation in the context of supporting the FAST implementation may have influenced the results obtained, as well as possibly constraining interview participants in providing their answers.

Nevertheless, according to Merriam and Tisdell [43], what makes experimental studies reliable is their careful design, based on the application of standards well developed and accepted by the scientific community, a fact that can be observed from the information described in detail in this chapter.
VI. FOCUS GROUP

A. Focus Group Plan
The research problem within the scope of the focus group is associated with the central problem of this work: how can test automation be introduced in the context of software development? From this question, the objective of the focus group was to evaluate FAST with respect to the completeness, clarity, and adequacy of the proposed structure.
In order for this objective to be achieved, the roadmap was designed to address the following aspects of FAST:
1 Presentation of the objectives for the focus group.
2 Presentation of the proposal overview.
3 Discussion on test automation levels.
4 Discussion and analysis of the two areas (Technical and Support).
5 Discussion and analysis of the process areas of the technical area.
6 Discussion and analysis of the process areas of the support area.
Based on the issues to be addressed, the experts were selected according to their profile and knowledge of test automation; although 7 participants were invited, only 4 attended the meeting.

B. Execution
The focus group took place on September 19, 2017, and lasted approximately 90 minutes. It was recorded in audio, with the permission of the participants, complemented by notes taken on paper by the moderator whenever deemed necessary.
The session started with a presentation of the focus group method, indicating the objectives of its application in the context of this research and explaining that the method seeks to collect the largest possible amount of information from the interaction between the members of the group [48]. At that moment, the schedule for the session was presented. After this opening, the participants were asked to feel free to contribute their opinions and ideas on the topics covered.
The FAST overview was then presented, showing the components of the framework: its areas and process areas.
Given this scenario, the data were collected, analyzed, and consolidated according to the planned topics, and the resulting analyses are described in the next section.

C. Data Analysis
For the analysis of the data, the discussions around the topics presented throughout the focus group were analyzed and compared, in order to identify similarities and obtain an understanding of how the various variables behave in the context of the research [49]. To this end, the qualitative data were analyzed from the audio transcription, which was performed in Microsoft Excel.
Regarding the levels of automation, after their definition, concepts, and objectives were presented, all experts indicated that the description was clear. In addition, the number of automation levels was considered adequate, with no need to add or remove any level. However, in the course of the discussion, one of the experts suggested defining a metric to assess how well FAST adheres to the project. Although this metric is not part of the process description, it is analyzed in the diagnosis performed prior to FAST implementation.
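One possible form for such an adherence metric is the ratio between the FAST practices already observed in the project and the total number of practices in each process area, computed during this pre-implementation diagnosis. The sketch below is a hypothetical illustration of this idea; neither the metric nor the names used here are defined by FAST itself.

def adherence(practices_total: dict[str, int],
              practices_found: dict[str, int]) -> dict[str, float]:
    """Share of FAST practices already in place, per process area.

    Both dicts map a process-area name to a practice count; the keys and
    the idea of a single ratio are illustrative assumptions, not part of FAST.
    """
    return {area: practices_found.get(area, 0) / total
            for area, total in practices_total.items() if total > 0}

# Hypothetical diagnosis of a project prior to FAST implementation:
total = {"Test Automation Planning": 8, "Defect Management": 5}
found = {"Test Automation Planning": 6, "Defect Management": 2}
for area, ratio in adherence(total, found).items():
    print(f"{area}: {ratio:.0%} adherent")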
Still on the automation levels, a specialist questioned the applicability of the acceptance level, given that acceptance testing should be performed by the client. From this consideration, another expert pointed out that performing the acceptance test is a consequence of systems testing within the scope of the customer's environment.
For the analysis of the support area, the Requirements process area raised doubts at the beginning of the discussion: it was asked whether it would not be better to treat this process area as part of the test automation project planning. In addition, there were questions about the difference between the automation requirements and those of the project, and about how this is addressed in the proposal. During the group's own discussion, it was understood that the project requirements could serve as a basis for the automation requirements. It was also questioned whether the proposal addresses techniques for eliciting automation requirements in projects that involve legacy systems, and it was observed that this suggestion could be included as an improvement point for FAST.
Within the scope of the Process Automation Monitoring process area, it was asked how the framework addresses the monitoring of the automation project, so that this practice can be performed objectively. After a long discussion, the definition and monitoring of automation-related indicators was suggested, a practice that is present in the description of this FAST process area.
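As an illustration of what such indicators could look like, the sketch below computes three commonly used automation indicators (automation coverage, pass rate, and average case duration) from a hypothetical run summary; the specific indicators and field names are assumptions, not part of the FAST description.

from dataclasses import dataclass

@dataclass
class TestRunSummary:
    # Hypothetical figures collected from one automated test run.
    automated_cases: int
    total_cases: int
    passed: int
    executed: int
    duration_minutes: float

def indicators(run: TestRunSummary) -> dict[str, float]:
    return {
        # Share of the test suite that is automated at all.
        "automation_coverage": run.automated_cases / run.total_cases,
        # Share of executed automated cases that passed.
        "pass_rate": run.passed / run.executed if run.executed else 0.0,
        # Average time spent per executed case, in minutes.
        "avg_case_duration": run.duration_minutes / run.executed if run.executed else 0.0,
    }

print(indicators(TestRunSummary(automated_cases=120, total_cases=200,
                                passed=108, executed=120, duration_minutes=45.0)))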
Regarding the Defect Management process area, it was asked how the proposal addresses reporting the results of automation, and the practices and subpractices of this process area were discussed. From the discussion and interaction among the members, it was suggested to include in the framework description a suggested standard that could support the reporting of defects in the project.
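Such a standard could be as simple as a fixed set of fields that every automated run fills in when reporting a defect. The record below is a hypothetical illustration of such a template, not the standard that was suggested for FAST.

from dataclasses import dataclass, field

@dataclass
class DefectReport:
    # Minimal fields a defect-reporting standard for automated tests might require.
    identifier: str            # e.g., a ticket key in the team's tracker
    test_case: str             # automated test that exposed the failure
    severity: str              # e.g., "low" | "medium" | "high" | "critical"
    description: str           # observed versus expected behavior
    environment: str           # build, platform, and configuration under test
    evidence: list[str] = field(default_factory=list)  # logs, screenshots

report = DefectReport(
    identifier="DEF-123",
    test_case="login_with_expired_password",
    severity="high",
    description="Expected password-renewal prompt; got HTTP 500.",
    environment="build 1.4.2, staging",
    evidence=["logs/run-2017-09-19.txt"],
)
print(report)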
In addition, questions were raised about the adaptation and instantiation of the framework and its processes in environments where processes and/or agile practices already exist. It was also questioned whether the proposal contemplated the definition of a process for automation deployment using FAST; after discussion among the group, the difference between the proposed framework and an automation process was understood, and the definition of such a process could be a future work to be researched.
Regarding the Technical and Support areas, the participants considered that the titles and objectives were adequate to their purpose and that the form of organization was consistent with the proposed strategy. It was observed that, initially, the members had difficulty understanding the relation between the areas.
In this context, the participants were asked about the limitations of using the framework in practice. One of the experts commented that he sees no limitations in the framework itself, since it addresses all practices for test automation; the limitations stem rather from the priorities that projects deal with, that is, the introduction of automation is not always performed because the practice is not prioritized within the scope of the project. Also in this context, it was mentioned that the absence of a process describing step by step how to use the framework can be a limitation for its use; with such a step-by-step process defined, the use of FAST could occur systematically in the context of a software development project.

D. Threats to Validity
Among the limitations of the application of the focus group is the fact that more experienced participants can intimidate the contribution of the others when voicing their opinion on a certain topic. As a way to mitigate this limitation, a group that was homogeneous in terms of test automation experience was selected.
Another restriction of this method is associated with the specialists' limited understanding of what is being discussed, since there is a time limit for conducting the method. In this context, very complex issues can be misinterpreted, which may lead the group to have difficulty converging on the central theme under discussion. As a way to reduce this threat, the moderator provided an explanation and a prior presentation of each topic to be discussed, so that the experts could understand it better.
One of the threats to the validity of the focus group application was that not all participants were able to understand the applicability of the framework, since the session has a time limitation and the participant needs to abstract from this understanding to the practical implementation of the proposal. In this sense, since the selected participants' profile was focused on professionals with practical experience of automation, and not on the use and implementation of process improvement in the context of projects, a limitation of their evaluation was identified.
Another identified threat is the fact that the author of the proposal was the moderator of the focus group, which is why the participants could have been measured in their comments when evaluating the proposal. In order to minimize this threat, we sought to select participants who had greater autonomy and independence from the author and who did not have ties of friendship that could interfere with the evaluation.
VII. CONCLUSION
The studies show that software testing is a constantly growing discipline of software engineering, whose demand for research grows out of the continuing search for best practices, aiming to provide visibility into the quality of the product so that it can be improved.
In this context, testing provides a more objective view of product quality, and the search for best practices to carry it out is driven both by the research found in the literature and by the demands observed in the market. It is in this context that the introduction of test automation has emerged as a demand, so that testing processes can be systematized and their real benefits achieved.
Several approaches to process improvement have also emerged in the literature, both as part of maturity models for software testing and as specific approaches to automation. However, it was noted that none of them provided practical subsidies for introducing test automation practices systematically in the context of a software development project.
From this scenario, this research was developed, with a methodology based on an empirical interview, the development of FAST, and its evaluation through a case study and a focus group.
The planning of the case study was carried out so that information could be collected throughout its execution. The selection of cases was carried out in order to provide complementary profiles: the environments of a private company and of a public institution are distinct scenarios, which brought valuable results to the demands of this research.
The data from the case study and the focus group, both qualitative and quantitative, were analyzed and organized according to the following categories:
• Guide introduction of automation;
• Understanding the automation context;
• Completeness of the framework;
• Ease of reading;
• FAST limitations;
• Project limitations;
• Difficulty in interpreting FAST; and,
• Suggestions for improvements.
Considering the data from the case study and focus group, an association could be made to answer the research question of this work, which is: How should test automation be introduced and maintained in the software development process?
In this context, test automation should be introduced in a systematic way into the software development process, and FAST proved to be a viable strategy for performing this introduction and maintenance. In view of this, the main contribution of this work is a solution to systematize test automation processes through a framework that includes technical and support aspects, which are essential for the systematic introduction of test automation.
From the general objective of the work, which focuses on a strategy for the systematic introduction of test automation practices in the context of a software development project, it was observed that FAST can be considered one of the strategies derived from this objective. In addition, some of the failure factors of automation deployment could be observed, both from the knowledge acquired in the literature review and from the case study and focus group.
This work is relevant because it responds to the practical problems of test automation based on recent research and market demand, and it is innovative because it consolidates in a single proposal both the technical aspects directly related to test automation and those that support the design, implementation, execution, and closing of testing, through the support area.

REFERENCES
[1] SOFTWARE ENGINEERING INSTITUTE (SEI). CMMI for development, version 1.3, staged representation, 2010. Pittsburgh, PA. Available at www.sei.cmu.edu/reports/10tr033.pdf, CMU/SEI-2010-TR-033.
[2] SWEBOK (2007). Guide to the Software Engineering Body of Knowledge. Body of knowledge, Fraunhofer IESE, Robert Bosch GmbH, University of Groningen, University of Karlskrona/Ronneby, Siemens.
[3] ISTQB. Standard glossary of terms used in Software Testing, Version 2.2, 2012.
[4] BERNER, S; WEBER, R; KELLER, R. Observations and lessons learned from automated testing. In: International Conference on Software Engineering (ICSE), May 15-21, St. Louis, USA, 2005, pp. 571-579.
[5] RAMLER, R; WOLFMAIER, K. Economic Perspectives in Test Automation: Balancing Automated and Manual Testing with Opportunity Cost. In: International Workshop on Automation of Software Test (AST), 23 May, Shanghai, China, 2006, pp. 85-91.
[6] KARHU, K; REPO, T; TAIPALE, O; SMOLANDER, K. Empirical Observations on Software Testing Automation. In: 2009 International Conference on Software Testing Verification and Validation. IEEE, Apr. 2009, pp. 201–209.
[7] RAFI, D; MOSES, K; PETERSEN, K; MÄNTYLÄ, M. Benefits and limitations of automated software testing: systematic literature review and practitioner survey. In: 7th International Workshop on Automation of Software Test (AST), June 2012, Zurich, pp. 36-42.
[8] WIKLUND, K; ELDH, S; SUNDMARK, D; LUNDQVIST, K. Technical Debt in Test Automation. In: IEEE Fifth International Conference on Software Testing, Verification and Validation, 17-21 April, Montreal, QC, 2012, pp. 887–892.
[9] WIKLUND, K; SUNDMARK, D; ELDH, S; LUNDQVIST, K. Impediments for Automated Testing - An Empirical Analysis of a User Support Discussion Board. In: IEEE International Conference on Software Testing, Verification, and Validation (ICST 2014), March 31 - April 4, Cleveland, OH, 2014, pp. 113-122.
[10] HAYES, L. Automated Testing Handbook. Software Testing Institute, Richardson, TX, March 2004.
[11] DAMM, L; LUNDBERG, L. Results from introducing component-level test automation and test-driven development. Journal of Systems and Software, Volume 79, no. 7, pp. 1001–1014, 2006.
[12] WISSINK, T; AMARO, C. Successful Test Automation for Software Maintenance. In: 22nd IEEE International Conference on Software Maintenance (ICSM'06). IEEE, 2006, pp. 265–266.
[13] HARROLD, M. Testing: a roadmap. In: The Future of Software Engineering, ed. by Finkelstein, A., 22nd International Conference on Software Engineering (ICSE), Limerick, Ireland, June 2000, pp. 61-72.
[14] FEWSTER, M. Common Mistakes in Test Automation. In: Fall Test Automation Conference, Boston, 2001.
[15] TAIPALE, O; KASURINEN, J; KARHU, K; SMOLANDER, K. Trade-off between automated and manual software testing. In: International Journal of System Assurance Engineering and Management, June 2011, Volume 2, Issue 2, pp. 114–125.
[16] FURTADO, A. P; MEIRA, S; SANTOS, C; NOVAIS, T; FERREIRA, M. FAST: Framework for Automating Software Testing. Published in: International Conference on Software Engineering Advances (ICSEA), Roma, Italia, 2016, pp. 91.
[17] GAROUSI, V; FELDERER, M; HACALOGLU, T. Software test maturity assessment and test process improvement: A multivocal literature review. Information and Software Technology, Volume 85, May 2017, pp. 16-42.
[18] BARR, E; HARMAN, M; MCMINN, P; SHAHBAZ, M; YOO, S. The oracle problem in software testing: a survey. IEEE Transactions on Software Engineering, Volume 41, Issue 5, May 2015, pp. 507-525.
[19] BERTOLINO, A. Software testing research: achievements, challenges, dreams. In: Future of Software Engineering. IEEE Computer Society, 2007, pp. 85–103. doi: 10.1109/FOSE.2007.25.
[20] DUSTIN, E; RASHKA, J; PAUL, J. Automated software testing: introduction, management, and performance. Boston, Addison-Wesley, 1999.
[21] PERSSON, C; YILMAZTURK, N. Establishment of automated regression testing at ABB: industrial experience report on 'Avoiding the Pitfalls'. In: The 19th International Conference on Automated Software Engineering (ASE'04). IEEE Computer Society. DOI: 10.1109/ASE.2004.1342729.
[22] INTERNATIONAL ORGANIZATION FOR STANDARDIZATION/INTERNATIONAL ELECTROTECHNICAL COMMISSION/IEEE COMPUTER SOCIETY (ISO/IEC/IEEE) 29.119-1. Software and systems engineering -- Software testing -- Part 1: Concepts and definitions, 2013.
[23] FEWSTER, M; GRAHAM, D. Software Test Automation: Effective Use of Test Execution Tools. Addison-Wesley, New York, 1999.
[24] NAIK, K; TRIPATHY, P. Software Testing and Quality Assurance. Wiley, 2008.
[25] GARCIA, C; DÁVILA, A; PESSOA, M. Test process models: systematic literature review. In: Software Process Improvement and Capability Determination (SPICE 2014), Springer, 2014, pp. 84–93.
[26] BÖHMER, K; RINDERLE-MA, S. A systematic literature review on process model testing: Approaches, challenges, and research directions. Cornell University Library, Sep. 2015. Available at: https://arxiv.org/abs/1509.04076. Accessed on 01 Aug. 2016.
[27] AFZAL, W; ALONE, S; GLOCKSIEN, K; TORKAR, R. Software test process improvement approaches: a systematic literature review and an industrial case study. Journal of Systems and Software, Volume 111, 2016, pp. 1–33.
[28] ________. Test Maturity Model (TMM). 1996. Illinois Institute of Technology. Available at: http://science.iit.edu/computer-science/research/testing-maturity-model-tmm. Last access: 2014.08.11.
[29] ERICSON, T; SUBOTIC, A; URSING, S. TIM - A Test Improvement Model. Software Testing Verification and Reliability, Volume 7, pp. 229-246, 1997.
[30] KOOMEN, T; POL, M. Improvement of the test process using TPI. In: Proc. Sogeti Nederland BV, Nederland, 1998.
[31] CHELLADURAI, P. Watch Your STEP, 2011. http://www.uploads.pnsqc.org/2011/papers/T-56_Chelladurai_paper.pdf. Last accessed: Feb. 2016.
[32] HONGYING, G; CHENG, Y. A customizable agile software quality assurance model. In: 5th International Conference on New Trends in Information Science and Service Science (NISS), 2011, pp. 382–387.
[33] FURTADO, A; GOMES, M; ANDRADE, E; FARIAS JR, I. MPT.BR: a Brazilian maturity model for testing. Published in: Quality Software International Conference (QSIC), Xi'an, Shaanxi, 2012, pp. 220-229.
[34] ________. Melhoria do Processo de Teste Brasileiro (MPT.BR). SOFTEXRECIFE, Recife, 2011.
[35] ________. Test Maturity Model Integration (TMMi). TMMi Foundation, Version 1.0, 2012. Available at: http://www.tmmi.org/pdf/TMMi.Framework.pdf. Last access: January 2015.
[36] HEISKANEN, H; MAUNUMAA, M; KATARA, M. A test process improvement model for automated test generation. In: Product-Focused Software Process Improvement, Springer, 2012, pp. 17–31.
[37] KRAUSE, M. E. A Maturity Model for Automated Software Testing. Medical Device & Diagnostic Industry Magazine, December 1994. Available at http://www.mddionline.com/article/software-maturity-model-automated-software-testing. Last access: January 2015.
[38] ELDH, S; ANDERSSON, K; WIKLUND, K. Towards a Test Automation Improvement Model (TAIM). In: IEEE International Conference on Software Testing, Verification, and Validation Workshops (ICSTW), March 31 - April 4, Cleveland, OH, 2014, pp. 337-342.
[39] CHRISSIS, M; KONRAD, M; SHRUM, S. CMMI Second Edition: guidelines for process integration and product improvement. Addison Wesley, 2007.
[40] INTERNATIONAL ORGANIZATION FOR STANDARDIZATION/INTERNATIONAL ELECTROTECHNICAL COMMISSION/IEEE COMPUTER SOCIETY (ISO/IEC/IEEE) 24.765. Systems and software engineering – vocabulary, 2010.
[41] RUNESON, P; HÖST, M; RAINER, A; REGNELL, B. Case Study Research in Software Engineering: Guidelines and Examples. Wiley, 2012.
[42] ROBSON, C. Real World Research. Blackwell, Second Edition, 2002.
[43] MERRIAM, S. B; TISDELL, E. J. Qualitative Research: A Guide to Design and Implementation. Jossey-Bass, 4th edition, 2016.
[44] RUNESON, P; HÖST, M. Guidelines for conducting and reporting case study research in software engineering. In: Empirical Software Engineering Journal, Volume 14, Issue 2, pp. 131-164, April 2008.
[45] BASILI, V. R; WEISS, D. M. A methodology for collecting valid software engineering data. IEEE Transactions on Software Engineering, Volume SE-10, 1984, pp. 728–739.
[46] VAN SOLINGEN, R; BERGHOUT, E. The goal/question/metric method: A practical guide for quality improvement of software development. McGraw-Hill, 1999.
[47] EASTERBROOK, S; SINGER, J; STOREY, M; DAMIAN, D. Selecting empirical methods for software engineering research. Springer London, 2008, pp. 285-311.
[48] REED, J; PAYTON, V. R. Focus groups: issues of analysis and interpretation. In: Journal of Advanced Nursing, Volume 26, pp. 765-771, 1997.
[49] KITZINGER, J. The methodology of focus groups: the importance of interaction between research participants. In: Journal of Sociology of Health and Illness, Volume 16, Issue 1, pp. 103-121, 1995.