

Automated Testing Strategy.

Learn automation testing fundamentals fast

This version was published on 2021-05-24

The right of Anton Smirnov to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing
process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools
and many iterations to get reader feedback, pivot until you have the right book and build
traction once you do.

The views expressed in this book are those of the author.

Contact details:

[email protected]

Related Websites:

Automated Testing Strategy: https://test-engineer.site/

Every effort has been made to ensure that the information contained in this book is accurate at the time of going to press, and the publishers and author cannot accept any responsibility for any errors or omissions, however caused. No responsibility for loss or damage occasioned by any person acting, or refraining from action, as a result of the material in this publication can be accepted by the editor, the publisher, or the author.

© 2021 Anton Smirnov, Test Engineer Ltd

Table of Contents
Introduction. ............................................................................................................................... 5
Chapter 1. Theory of automated testing. ................................................................................... 10
Pyramids of automated testing. .......................................................................................................... 23
Understanding the testing pyramid is best in practice. ........................................................................ 27
Chapter 2. Automated testing strategies. .................................................................................. 32
1.1 Strategy «Let's try»........................................................................................................................ 32
1.2 Strategy «Here the target» ............................................................................................................ 34
1.3 Strategy «Operation Uranum» ....................................................................................................... 36
2. Parallelization of tasks. .................................................................................................................... 38
3. Create a test plan. ........................................................................................................................... 41
3.3 Testing strategy and planned types of testing on the project. ........................................................................ 42
3.5 Test completion criteria .................................................................................................................................... 43
4. The definition of a primary task......................................................................................................................... 43
5. Writing test cases for selected tasks. ................................................................................................................. 45
7. Selection of tests for automation. ..................................................................................................................... 48
4. Take into account the synchronization features of the browser and the application running the tests.
........................................................................................................................................................... 51
5. It is not necessary to prescribe hard values in the test case. ............................................... 52
6. Automated test cases should be independent. ............................................................................ 52
7. It is necessary to carefully study the documentation on the tools used. ...................................... 52
8. Preparation of test data. ............................................................................................................. 54
Development and maintenance of the testing automation process. .................................................... 56
1. Evaluation of the effectiveness of automation.................................................................................................. 56
9. Estimation of task execution time. .............................................................................................. 64
Features of test cases for automation. ....................................................................................... 67
Chapter 3. Organization of automated process on the project. .................................................. 73
Conclusion. ................................................................................................................................ 83

Introduction.
«Software testing has become a critical and an ever-growing part of
the development life-cycle. Initially, it relied on large teams
executing manual test cases. This has changed in recent years as
testing teams have found a way to facilitate a faster deployment
cycle: test automation. A cost-effective automation testing strategy
with a result-oriented approach is always a key to success in
automation testing. In this book let’s see how to build a good test
automation strategy. »
This book is based on more than four years of experience in the field of test automation. During this time, a large collection of solved problems has accumulated, and the difficulties characteristic of many beginners have become clearly visible. I have repeatedly built the automated testing process from scratch. It therefore seemed obvious and reasonable to summarize this material in the form of a book that will help novice testers quickly build an automated testing process on a project and avoid many annoying mistakes.
This book does not aim to fully cover the entire subject area with all its nuances, so do not take it as a textbook or handbook: over decades of development, testing has accumulated such a volume of knowledge that even a dozen books would not be enough for its formal presentation.

Also, reading just this one book is not enough to become a "senior automated testing engineer". Then why do we need this book?

Firstly, this book is worth reading if you are determined to engage in automated testing: it will be useful both to a complete beginner and to those who already have some experience in automation.
Secondly, this book can and should be used as reference material.
Thirdly, this book is a kind of "map" containing links to many external sources of information (which can be useful even to an experienced automation engineer), as well as many examples with explanations.
This book is not intended for people with extensive experience in test automation. From time to time I use a learning approach and try to "chew through" all the approaches and build the stages step by step.

Some people more experienced in software test automation may find it slow, boring, and monotonous.
This book is intended for people who first approach the study of
automation testing, especially if their goal is to add automation to
their test approach.
First of all, I wrote this book for a tester with experience in "manual" software testing whose goal is to move to a higher level in the testing career.

Summary:
We can safely say that this book is a kind of guide for beginners in the field of automated software testing.

I have extensive knowledge of the field of test automation. I also have considerable experience building automation on a project from scratch. I have repeatedly had to develop and implement the test automation process on projects.

The learning approach focuses on a large body of theory about building the automation process. The book also discusses the theory of test automation in detail.
However, automation in support of testing is no longer limited to testing itself, so this book is suitable for anyone who wants to improve their use of automation: managers, business analysts, users, and, of course, testers.
Testers use different approaches on their projects. I remember when I first started doing testing, I drew information from traditional books and was unnecessarily confused by some concepts that I rarely had to use. And most of the books, to my great regret, did not address the aspects of and approaches to test automation. Most books on testing begin by showing how you can test a software product with basic approaches, but they do not consider the approaches to and implementations of test automation at the testing stage.
In this book, I will not consider how to create and structure
applications. This is useful knowledge, but it is beyond the scope of
this book.

My main goal is to help you start building an automation process using a strategy, and to give you the basic knowledge you need to do so.

This book focuses on theory rather than a lot of additional libraries,
because once you have the basics, building a library and learning
how to use it becomes a matter of reading the documentation.

This book is not an "exhaustive" introduction. It is a guide to getting started with building a test automation strategy. I have focused on examples.

I argue that, in order to start implementing an automation strategy, you need only a basic set of knowledge in testing and management to start adding value to automation projects.
In fact, when I started creating the test automation process, I used
only the initial level of knowledge in the field of testing and
management.

I also want the book to be small and accessible, so that people


actually read it and apply the approaches described in it in practice.

Acknowledgments.

This book was created as a “work in progress” on leanpub.com.


My thanks go to everyone who bought the book in its early stages; this provided the continued motivation to create something that added value, and then to spend the extra time needed to add polish and readability.
I am also grateful to every QA engineer I have worked with who took the time to explain their approach. You helped me observe what a good QA engineer does and how they work. The fact that you were good forced me to 'up my game' and improve both my coding and testing skills. All mistakes in this book are my fault.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Chapter 1. Theory of automated testing.

So, what is automated testing?


Automated testing – an analog of manual functional testing, performed by a program (a "robot"), not by a human.

In turn:

Automation of software testing – a verification process in which the basic functions and steps of a test (launch, initialization, execution, analysis, and reporting of results) are performed automatically by specialized tools. Let us consider an example in more detail.
When we develop software, we certainly test it. If we are talking
about a function, we can call it with different arguments, and see
what it will return to us. Having created a website or a large portal,
we open it in a browser, click links and buttons, check that
everything is done correctly. We walk through it on pre-written
scripts. We conduct various types of testing (functional, smoke,
sanity, etc.) This process is called “manual” testing — a person
checks the operation of the program. A reasonable question is
whether this process can be shifted to the shoulders of robots? It is
usually possible, and this is what is called automated testing.

• The speed of execution of test cases can be many times, even orders of magnitude, superior to human capabilities. If you imagine that a person has to manually reconcile several files of several tens of megabytes each, the estimate of manual execution time becomes frightening: months or even years.

At the same time, 36 tests implemented in the framework of
smoke testing by automated scripts are performed in less than
five seconds and require only one action from the tester — to
run the script.

• There is no influence of the human factor in the execution of test cases (fatigue, inattention, etc.). Let's continue the example from the previous paragraph: what is the probability that a person will make a mistake when comparing (character by character) even two ordinary texts of 100 pages each? And what if there are 10 or 20 such texts? And the checks have to be repeated over and over again? We can safely say that a person is guaranteed to make a mistake. An automated script is not.
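The text-comparison scenario above is exactly the kind of check worth handing to a script. A minimal sketch using only Python's standard library (the sample strings are purely illustrative):

```python
import difflib

def compare_texts(expected: str, actual: str) -> list:
    """Return a unified diff of two texts; an empty list means they match."""
    diff = difflib.unified_diff(
        expected.splitlines(), actual.splitlines(),
        fromfile="expected", tofile="actual", lineterm="",
    )
    return list(diff)

# The script applies the same rigor to page 1 and to page 10,000:
mismatches = compare_texts("line one\nline two", "line one\nline 2")
assert mismatches  # the difference is caught, every single time
```

In a real project the two strings would be read from the files under comparison; the point is that the script never gets tired on the twentieth 100-page text.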

• Automation tools are able to perform test cases that are, in principle, impossible for a person because of their complexity, speed, or other factors. Again, our example of comparing large texts is relevant: we cannot afford to spend years repeatedly performing an extremely complex routine operation in which we are guaranteed to make mistakes. Another excellent example of test cases that are too much for a person is a performance study, in which it is necessary to perform certain actions at high speed and to record the values of a wide range of parameters. Can a person, for example, measure and record a hundred times per second the amount of RAM occupied by an application? No. But an automation script can.
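Sampling a process metric a hundred times per second is trivial for a script. A minimal sketch using only the Python standard library (`resource` is Unix-only, and the sample count and interval are illustrative):

```python
import resource
import time

def sample_max_rss(samples: int, interval: float) -> list:
    """Record the process's peak resident set size `samples` times,
    pausing `interval` seconds between readings (KiB on Linux)."""
    readings = []
    for _ in range(samples):
        readings.append(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
        time.sleep(interval)
    return readings

# Roughly 100 readings per second -- far beyond what a human could log:
readings = sample_max_rss(samples=100, interval=0.001)
assert len(readings) == 100 and all(r > 0 for r in readings)
```

A real performance harness would sample the application under test rather than itself, but the mechanics are the same.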

• Automation tools are able to collect, store, analyze, aggregate,
and present huge amounts of data in a human-readable form.
In our smoke-testing example of the "file Converter", the
amount of data obtained from the test is small — it can be
processed manually. But if you look at real-world design
situations, the logs of automated testing systems can take tens
of gigabytes for each iteration. It is logical that a person is not
able to manually analyze such amounts of data, but a properly
configured automation environment will do it itself, providing
accurate reports in 2-3 pages, convenient graphs, and tables, as
well as the ability to dive into details, moving from aggregated
data to details, if necessary.

• Automation tools are able to perform low-level actions with


the application, operating system, data channels, etc. In one of
the previous paragraphs, we mentioned such a task as "a
hundred times a second to measure and record the amount of
RAM occupied by the application." This task of gathering
information about the resources used by the application is a
classic example. However, automation can not only collect this
information but also affect the runtime environment of the
application or the application itself, emulating typical events
(for example, lack of memory or processor time) and fixing the
reaction of the application. Even if the tester is qualified
enough to perform such operations on his own, he will still
need a particular tool — so why not solve this problem
immediately at the level of test automation?

So, with the use of automation, we are able to increase the test
coverage by:

• execution of test cases that previously were not even worth thinking about;
• multiple repetitions of test cases with different input data;
• freeing up time to create new test cases.

But is everything so good with test automation? Unfortunately, no. One of the major problems can be clearly represented by a figure:

Correlation of development time and execution time of test cases in manual and automated testing.

First of all, you should realize that automation does not happen by
itself, there is no magic button that solves all problems. Moreover,
a series of serious drawbacks and risks are associated with test
automation:

• The need for highly qualified personnel due to the fact that
automation is a "project within a project" (with its own
requirements, plans, code, etc.). Even if we forget for a
moment about the "project within the project", the technical
qualification of employees involved in automation, as a rule,
should be significantly higher than that of their colleagues
involved in manual testing.

• Development and maintenance of both automated test cases and all the necessary infrastructure take a lot of time. The situation is aggravated by the fact that in some cases (major changes in the project, or errors in the strategy) all the relevant work has to be done again from scratch: after a tangible change in the requirements, a change of the technological domain, or a rework of interfaces (both user and programming), many test cases become hopelessly outdated and have to be created anew.

• Automation requires more careful planning and risk management, because otherwise the project can be seriously damaged (see the previous paragraph about redoing all the work from scratch).

• Commercial automation tools are quite expensive, and the available free analogs do not always solve the tasks effectively. And here again we have to return to the question of errors in planning: if the set of technologies and automation tools was initially chosen incorrectly, it is necessary not only to redo all the work but also to buy new automation tools.

• There are a lot of automation tools, which complicates the problem of choosing a particular tool, makes it difficult to plan and define a testing strategy, and can entail additional time and financial costs, as well as the need to train or hire the appropriate specialists.
The scope of automation:

First, let's look at the list of tasks that automation helps to solve:

• Execution of test cases that are impossible for a human.
• Solving routine tasks.
• Speeding up test execution and freeing human resources for intellectual work.
• Increasing test coverage.
• Improving the code through increased test coverage and the use of special automation techniques.

Testing makes our software more reliable and our life easier. But not always. After all, you will agree that it is better when we ourselves find and fix an error before the release than when an angry customer or user tells us about the problem.

Firstly, we lose time correcting the defect, sometimes during overtime or on weekends. And secondly, we lose business reputation, which negatively affects the business.
Testing is especially useful when developing large applications in a large team, when you can accidentally break some function that another person wrote and that you did not know about. Or when it is necessary to extend a previously written complex project.
In large companies, there may be a separate group of people who are engaged only in testing. Usually they are called the testing department, or the QA (quality assurance) department. In this book, I want to separate the concepts of testing and QA right away. Testing is the process of assessing product quality, while QA is the formation of processes that ensure high-quality software (including development processes, analytics, and documentation).

Only some large companies have already started to manage these processes: QA conducts systematic studies of all problems and time delays, identifies necessary improvements, and documents the process. In such companies, all document templates are usually approved, and employees know exactly who needs to do what and what the criteria for task effectiveness are.
By the way, many believe that work within such formal processes is boring: not so. Employees have no less freedom and creativity, and the tasks they solve are available only at this level of maturity and become harder and more interesting.

If we talk about testing, it is not easy to classify the tasks. In some (usually small) companies, the tester is simply provided with a workplace and access to product builds, and the organization of the work is left to his discretion.
In companies using more formal processes, there are clearly defined roles: test designers (who design the tests), test engineers (who execute those tests), test automation engineers (developers), etc. Each of these roles requires its own unique skills and abilities.

Test designers – they examine the product and determine which tests to perform (after all, it is impossible to test everything, so they have the very important task of choosing the tests that will actually be carried out). To do this, they need to know the product, the subject area, and the methodology of test design, so competent test designers are rare.
Test automation engineers – they write scripts for automated testing, and here the tasks vary widely in level, from using various tools for recording user actions to developing an in-house automation platform (framework), which is often not inferior in complexity to the product under test. Therefore, automation engineers are first of all skilled developers.

Test engineers – they either execute the tests previously designed by the test designer or, when using so-called exploratory testing, study the product, design the tests, and run them simultaneously.

But regardless of the role, testers stand guard over software quality every day: they use their product a lot and often, come up with ways to "break" it, and all of this goes to the benefit of the product.
The software can and should be tested at different levels:

Unit testing — these are white-box tests that check individual pieces of code, such as functions, methods, and classes. In other words, it is the testing of one code module (usually one function or, for OOP code, one class) in an isolated environment. This means that if the code uses some third-party classes, they are replaced with stub classes (mocks and stubs); the code should not work with the network (and external servers), files, or a database (otherwise we test not just the function or class, but also the disk, database, etc.).
Stubs — stub classes that, instead of performing an action, return some prepared data (that is, the function body essentially consists of a single return). For example, a stub class that replaces work with the database returns "request completed successfully" instead of accessing a real database, and when you try to read something from it, it returns a pre-prepared array of data.
Mocks — stub classes that are used to verify that a particular function has been called (I don't think they are needed very often).
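The difference between stubs and mocks can be sketched with Python's standard `unittest.mock` module; the database and mailer objects below are invented purely for illustration:

```python
from unittest.mock import Mock

def notify_user(db, mailer, user_id):
    """Code under test: looks up a user and sends them a greeting."""
    user = db.fetch_user(user_id)
    mailer.send(user["email"], "Hello!")

# A stub: stands in for the database and simply returns prepared data.
db_stub = Mock()
db_stub.fetch_user.return_value = {"id": 1, "email": "user@example.com"}

# A mock: used afterwards to verify that a particular call was made.
mailer_mock = Mock()

notify_user(db_stub, mailer_mock, 1)
mailer_mock.send.assert_called_once_with("user@example.com", "Hello!")
```

Note the division of labor: the stub feeds canned data in, while the mock records the outgoing call so we can assert on it.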
Typically, a unit test passes different inputs to a function and verifies that it returns the expected result. For example, if we have a function that checks the correctness of a phone number, we feed it pre-prepared numbers and check that it classifies them correctly.

If we have a function that solves a quadratic equation, we check that it returns the correct roots (for this we prepare a list of equations with answers in advance).
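The quadratic-equation check can be sketched as a plain unit test: prepare equations with known answers and compare (the function itself is a minimal illustration, not production code):

```python
import math

def solve_quadratic(a: float, b: float, c: float) -> tuple:
    """Return the real roots of ax^2 + bx + c = 0, smallest first."""
    d = b * b - 4 * a * c
    if d < 0:
        return ()  # no real roots
    r1 = (-b - math.sqrt(d)) / (2 * a)
    r2 = (-b + math.sqrt(d)) / (2 * a)
    return tuple(sorted({r1, r2}))

# Pre-prepared equations with known answers:
cases = [
    ((1, -3, 2), (1.0, 2.0)),   # x^2 - 3x + 2 = 0
    ((1, 2, 1), (-1.0,)),       # (x + 1)^2 = 0, one double root
    ((1, 0, 1), ()),            # x^2 + 1 = 0, no real roots
]
for args, expected in cases:
    assert solve_quadratic(*args) == expected
```

Each case pairs inputs with a known answer, which is exactly the "list of equations with answers" the text describes.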
Unit tests work well for code like this, which contains some logic. If the code has little logic and mainly consists of calls to other classes, it can be difficult to write unit tests for it (since it is necessary to replace the other classes, and it is not very clear what to check).
Integration tests – these are black-box tests; they check some component of the system, usually consisting of many modules (classes or functions). For example, for a blog we can test that when the function that saves a post to the database is called, the post appears, its tags are correct, and the number of comments is zero. And when a comment is added, the count increases by one.
At the same time, you can test, for example, that a post with an empty name is not saved. The product under test must be in an active phase and deployed to the test environment. Tests of a service are often just integration-level tests.
In order to avoid errors and not depend on external conditions, integration testing is performed in a controlled environment. For example, before each test a temporary database is created with pre-prepared records (for example, blog users), folders for storing temporary files are cleared, and instead of requests to external services a stub is used that returns pre-prepared answers. If this is not done, we may receive errors, for example because we are trying to insert a user whose email is already in use, because some file is missing, or because of an external service error.

Tests will fail more often, and we will spend time figuring out the reasons. Also, a test site is often deployed on a separate server or virtual host.
In order to speed up the execution of tests, a database that stores data in memory rather than on disk is usually used.
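The blog example above can be sketched as an integration test against a temporary in-memory SQLite database; the schema and helper functions are invented for illustration:

```python
import sqlite3

def make_test_db() -> sqlite3.Connection:
    """Create a fresh in-memory database before each test run."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT NOT NULL)")
    db.execute("CREATE TABLE comments (post_id INTEGER, body TEXT)")
    return db

def save_post(db, title):
    """Component under test: refuses empty titles, returns the new post id."""
    if not title:
        raise ValueError("post title must not be empty")
    return db.execute("INSERT INTO posts (title) VALUES (?)", (title,)).lastrowid

def comment_count(db, post_id):
    return db.execute(
        "SELECT COUNT(*) FROM comments WHERE post_id = ?", (post_id,)
    ).fetchone()[0]

db = make_test_db()
post_id = save_post(db, "First post")
assert comment_count(db, post_id) == 0  # a new post starts with zero comments
db.execute("INSERT INTO comments VALUES (?, ?)", (post_id, "Nice!"))
assert comment_count(db, post_id) == 1  # adding one comment increases it by one
```

Because the database lives in memory and is rebuilt for every run, the test is fast, repeatable, and independent of external state.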
If we draw analogies, for example, with the testing of an aircraft
engine, the unit tests are testing of individual parts, valves, dampers,
and integration testing is the launch of the assembled engine on the
stand.
You can't test just any code. If, for example, the code hardcodes database connection parameters or folder paths without the ability to change them, you are unlikely to be able to use a temporary database for tests.
The same is true if the classes in the code are tightly coupled and dependency injection is not used, with global variables or static methods used everywhere instead. In general, to summarize: code needs to be written with testing in mind.
API (Application Programming Interface) - a set of predefined classes, procedures, functions, structures, and constants provided by an application (library, service) for use in external software products. In other words, it is a set of functions that can be called to get some data. Take, for example, Google Maps: this service has its own geolocation API, which is available to all users. By sending it a request with a geographical address, we get the coordinates of the point in response (and vice versa); the Central Bank likewise has its own API that returns the official exchange rate on a given day.
If your application or portal has an API, you can test it by sending
pre-prepared requests and comparing the response with the
expected response.
UI scripts (user interface) — these test the black box, that is, what the user sees on the screen. This is perhaps the most difficult thing to test: if we are talking, for example, about checking the operation of a site, then we have to somehow emulate the work of the browser, which is quite difficult to arrange, and analyze the information that is displayed on the page. But this kind of testing is very important because it interacts with the application exactly as the user does.
UI tests are also called end-to-end (E2E) or acceptance tests. They can be considered multi-step integration tests, and tests for the web interface are often end-to-end tests.

Usually, UI is tested with scripts that describe the sequence of


actions and check the expected result.
For example, a test script for the registration form can work according to this algorithm:

1. Go to the page http://www.gmail.ru
2. Click "Register".
3. Enter a valid email address in the account name field.
4. In the password field, enter a password that meets the security requirements.
5. Click the "Register" button.
6. Make sure that after confirming the registration of the account you can access your email account.
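The algorithm above can be sketched with Selenium WebDriver. This is a sketch only: the element locators and the success check are hypothetical and would need to be replaced with the page's real IDs.

```python
# Assumes Selenium WebDriver (pip install selenium). The locators below
# ("account-name", "register-button", etc.) are invented for illustration.

def register(driver, email: str, password: str) -> None:
    """Drive the registration scenario from the algorithm above."""
    driver.get("http://www.gmail.ru")                          # step 1
    driver.find_element("link text", "Register").click()       # step 2
    driver.find_element("id", "account-name").send_keys(email)     # step 3
    driver.find_element("id", "password").send_keys(password)      # step 4
    driver.find_element("id", "register-button").click()           # step 5
    assert "Welcome" in driver.title                               # step 6 (simplified)

# With a real browser it would be driven like this:
#   from selenium import webdriver
#   register(webdriver.Chrome(), "tester@example.com", "S3cure!pass")
```

Keeping the scenario inside a function also makes it easy to run the same steps against different test accounts.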

To test a web application (site or portal), you need to simulate the browser. There are different approaches to this. There are simple tools that only know how to send HTTP requests to the server and analyze the resulting HTML code; more advanced ones use a real browser engine in "headless" mode (that is, without displaying a window with the page on the screen); and the most advanced provide drivers with which you can control a real browser.
Simple HTML browsers are good because they work much faster, but they don't interpret CSS and JS code and can't check, for example, the visibility of a button or the behavior of scripts. Important points to take note of:
1) Automation scripts should be repeatable. For example, you cannot take the input data from a random number generator, because in that case we will not be able to repeat the scripts.

2) Automation scripts must be running in a controlled environment.

3) The automation scripts should not use the same algorithm as the
code being tested (since in this case the same error can be made and
the results will be the same).

4) It is necessary to test both positive and negative scenarios. For example, when testing the registration form, it is necessary to check not only how it works when correct data is entered, but also how it behaves with incorrect data (it should give an error message).
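A sketch of this idea for a hypothetical email validator: the same test run covers both correct and incorrect inputs (the validation rule is deliberately simplified for illustration):

```python
import re

def is_valid_email(address: str) -> bool:
    """Deliberately simple validator, used only to illustrate the idea."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}", address) is not None

# Positive scenarios: correct data must be accepted.
positive_cases = ["user@example.com", "first.last@mail.co"]
# Negative scenarios: incorrect data must be rejected.
negative_cases = ["", "no-at-sign", "two@@example.com", "user@domain"]

for case in positive_cases:
    assert is_valid_email(case), f"expected valid: {case}"
for case in negative_cases:
    assert not is_valid_email(case), f"expected invalid: {case}"
```

Listing the negative cases explicitly keeps the "should give an error" behavior from silently regressing.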

5) When performing tests, it is necessary to monitor the errors that occur in an automated way. If an error merely produces a message (which the robot does not read) and the program continues to run, this is a pretty useless automation script.
6) Automation scripts should be easy to run, ideally with a single command. If running them requires a lot of actions, people will be too lazy to do it. Companies usually set up a CI server that downloads updates from the repository, runs the tests, and sends error messages to the developers.

Pyramids of automated testing.

I think a huge number of you have heard about the automated testing pyramid. Why a pyramid? Simply because each level has a different number of tests. In fact, in the world of automation there are a huge number of pyramids. Below we look at the 3 main pyramids, which are very common.

The ideal testing pyramid is shown in figure 1: a small amount of automated user interface testing, an average number of integration tests, and a large number of unit tests.

Figure 1. The testing pyramid, built up from unit tests.

The reverse testing pyramid is shown in figure 2. This pyramid is also called the ice cream anti-pattern; why becomes clear from figure 3.

A lot of user interface auto tests, somewhat fewer integration auto tests, and a very small number of unit tests. The image in figure 3 also shows a huge number of manual tests. This pyramid prevails in most companies.

Figure 2. The inverted testing pyramid, starting from the user interface.

Figure 3. Ice cream anti-pattern

The two-triangle testing pyramid is shown in figure 4.

Figure 4. The two-triangle testing pyramid.

A huge number of user interface automation scripts, a very small number of integration tests and, finally, a very large number of unit tests. This pyramid is not the worst option for developing the testing process in a company. It is possible if backend developers propose performing all checks at the user interface level.
By the way, the correct pyramid and the reverse one check our product equally: the defects found are the same no matter which pyramid we apply. This pyramid gives a similar result to the first one: if we have few unit tests, we begin to compensate for them at the level of user interface tests and, accordingly, if there are few integration tests, we again try to compensate with user interface tests.

Understanding the testing pyramid is best done in practice.

Let's say we have a team consisting of a developer, a tester, and an automation engineer:

I think Agile is familiar to all of us: flexible and iterative development.
Our team has a developer, tester, and automation engineer. This
team begins to develop a product. For example, we take the
calculator application. In our product – a calculator, for example, it
will be a kind of UI system. In our approach, we chose an iteration
of 2 weeks i.e., every 2 weeks we introduce new functionality (a
new button or a new function for calculation). Will the team be able
to create automation scripts adhering to the” correct pyramid «of
testing?

I remind you that the correct testing pyramid consists of UI, integration, and unit levels, all of them automated. Stop reading for one minute and answer the question yourself:

will the developer, automation engineer, and tester be able to do their jobs on this project while adhering to the correct automated testing pyramid?

Input data for product creation:

Iteration: 2 weeks, i.e., every 2 weeks new functionality is introduced; every 2 weeks old tests are maintained and new ones are written.
Team: developer, automation engineer, tester.

And so, on the unit layer, the developer will write automated tests covering the basic functionality of our product.
On the integration layer, we can ask the developer to write documentation about how our product integrates with other systems. Our tester will use this documentation and, with the help of test design, create "correct tests".
Our automation engineer will be able to review the tests the tester has written and automate them quickly, most likely also interacting with the developer.
On the UI layer, the tester will create "correct tests" with test design, and the automation engineer will create UI automated tests.
It turns out that we can cover all three layers of our product; no layer of the pyramid will be missing from our automated tests.
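For the calculator example, the unit layer could look like the sketch below. This is a minimal illustration, assuming a hypothetical `add` function in the production code; the names are mine, not the author's.

```python
# Unit layer for the calculator example (pytest-style test functions).
# add() is a stand-in for the developer's production code.

def add(a, b):
    """Hypothetical production function under test."""
    return a + b

def test_add_two_positive_numbers():
    assert add(2, 3) == 5

def test_add_handles_negative_numbers():
    assert add(2, -3) == -1
```

A runner such as pytest would collect and execute the `test_*` functions automatically; the integration and UI layers would then need only a handful of end-to-end checks on top.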

Based on the above, in developing and testing our calculator we can use both the "correct" pyramid on the unit-test side and the inverted pyramid on the UI side.
The better you understand your product and the more capacious your UI tests are, the fewer of them you will accordingly need.
On the unit side (subject to agreement with the developers), developers write unit tests and independently maintain their code coverage, which should approach 100 percent; only then can we be sure that the unit tests check all critically important functionality.

On the integration-test side, because code coverage by unit tests is very high, we can describe a few top-level integration tests to make sure this functionality works correctly. At this level we can also step a little "left" or "right" to make sure that some minor functionality can use our UI functions without errors.
Finally, on the UI side, we can describe tests in such a way that we have very few of them while maintaining the test design at the proper level: since the unit and integration levels have already verified that the data coming to us is correct, on the UI side we just check that it is displayed correctly.

The correct pyramid is considered correct precisely because it teaches how to properly distribute the effort of each team member.
To consolidate the concept of covering a project with automation scripts according to the correct pyramid, consider an example. Suppose the developer has written 1000 unit tests and code coverage equals 100 percent. Then it is enough to write 300 integration tests and 30 user interface auto tests.

The formula is simple: X > Y > Z, where X is unit tests, Y is integration tests, and Z is UI automated tests: 1000 > 300 > 30.

Chapter 2. Automated testing strategies.

There are several commonly used versions of an AT (automated testing) strategy. The choice of a specific strategy determines the order and intensity of certain kinds of work. Choosing a strategy is not the most important task, but it is the best place to start the automation deployment process. Here are 3 strategy options that are typical for the very beginning of an automation rollout; of course, more strategy options exist.

1.1 Strategy «Let's try»

It is used when test automation has, in fact, never existed on the project, and the plan is to start carefully with a moderate allocation of resources.

The strategy should be applied when:

• There are no precise automation goals (cover 40% of the code of a particular module by a certain date, reduce the cost of manual testing, etc.).
• Test automation has never been used on the project before.
• The tester has no (or very little) automated testing experience.
• The allocated resources are moderate or low.

Description of the strategy:

• Pay much attention to the preparatory stages of testing (preparation of test plans, test cases, etc.).
• Pay much attention to tools that can be used as an aid in manual testing.
• Experiment more with automated testing technologies and methodologies; no one is waiting for urgent results, so you can afford to experiment.
• Work with the project from the top level at the beginning, without going into the automation of specific modules.

1.2 Strategy «Here the target»

The peculiarity of this strategy is its orientation toward a specific result. The goal of the new stage of automated testing is selected, and the tasks are focused on achieving that result.

The strategy should be applied when:

• Preliminary work has already been carried out on the project: there is some groundwork in the form of test plans and test cases, and, ideally, automation scripts from a previous stage.
• There is a specific goal of automated testing (not a global "80% automation coverage in six months" but rather "50% automation coverage of a specific module in a month").
• Specific tools are selected to achieve the specific goal; ideally, the specialists have some technical background for working with those tools.

Description of the strategy:
• A progressive strategy, somewhat reminiscent of the Agile development methodology: moving forward in stages, covering the project with automation scripts module by module until the overarching goal (e.g., 80% in six months) is fully achieved.
• At each stage, a new goal is set (most likely continuing the last completed goal, but not necessarily), and tools are selected to implement it.
• Deep focus on a specific goal: test cases and automation scripts are written not for the whole project but only for the specific task.

1.3 Strategy «Operation Uranus»

In fact, the strategy is constant and methodical work on test automation according to priorities set every 2-3 weeks. Ideally, there is a person working on automation constantly, not particularly distracted by third-party tasks.

The strategy should be applied when:

• There are no specific goals, only a general wish "that everything be good." If "Here the target" resembles the Agile principle, this strategy is close in spirit to the Waterfall methodology.
• There is a resource in the form of at least one person constantly working on the project and tightly engaged in automation tasks.
• There are no clearly defined test automation goals, but there are wishes (priorities) that can be set for a fairly long period (these modules are more important; errors traditionally cluster in the backend/frontend, so a lot of effort should be directed there).

Description of the strategy:

• The idea of the strategy is described above: constant and methodical work taking the priorities into account.
• In the beginning, focus on the basic part, because within this strategy the whole project is automated without fully focusing on specific modules.

Summary:
It is necessary to consider the general logic and strategy of automation, but I would suggest the following option: at the beginning, for 1 month (3-4 weeks), use the "Let's try" strategy to prepare the base for further work, without immersing too deeply in writing automation-script code or the deep specifics of the modules. Upon completion of this stage, we will have a ready basis for further work. Then choose how it will be convenient to work (roughly speaking, waterfall or agile) and continue acting in accordance with the chosen strategy.

2. Parallelization of tasks.

This item makes sense if several people are working, or will be working, on testing the project; then parallelizing tasks within the team becomes an essential point. If one person will work on test automation in your team, you can safely skip this section.

Grouping competencies and knowledge that are close to each other, the test automation process can be divided into roles that encapsulate different tasks of a similar type.

Roles

Architecture

• Tool selection
• Choice of approaches

Development

• Test development and debugging
• Support and updates
• Bug fixing

Test design

• Test selection
• Test design
• Designing test data
Management

• Planning
• The collection of metrics
• Training

Testing

• Defect localization
• Filing defect reports
• Preparing test data

If several people are working on testing the project, it is logical to distribute the roles described above among specific people. In this case, it makes sense to assign the "Management" role to one person, to share the "Test design" and "Testing" roles among everyone, and to give the "Architecture" and "Development" roles to one or two people.

The logic is as follows.
1. There is a clear test manager for the project, who plans, determines deadlines, and is responsible in case of non-compliance.
2. There are two common types of testers: manual testers and automation testers. The tasks of the "Test design" and "Testing" roles are equally relevant for both types. Accordingly, all testers write and design tests that can be used in both "manual" testing and automation.
3. Further, manual testers conduct manual testing based on the created test plans and test cases, while automation testers select and adapt the tests suitable for automation.

However, if you have a one-man band, he will do everything at once but will not be a professional at everything.

3. Create a test plan.

After choosing an automated testing strategy, the next important point is the starting point of the work: creating a test plan. The test plan must be agreed upon with the developers and product managers, because errors made at the test-plan stage can backfire significantly later.

There should be a test plan for any relatively large project that employs testers. I describe a less formalized test plan than the variant usually used in large projects; for internal use, the formalities are not necessary.

The test plan consists of the following items:

3.1 The object of testing.

A brief description of the project and its main features (web/desktop, UI on iOS or Android, which browsers/OSes it works in, and so on).

3.2 Parts of the project.

A logically organized list of individual components and modules of the project, isolated from each other (with possible decomposition, but without going into details), as well as functions outside of large modules.

For each module, list the set of available functions (without going into details). This list will be used by the manager and the test designer when defining the testing and automation tasks for a new sprint (for example: "changes were made to the data editing module, the file upload module was affected, and the notification-sending function in the client was completely redesigned").

3.3 Testing strategy and planned types of testing on the project.

Strategies are described in paragraph 1. In automation, usually only one type of testing is used: regression testing (deep testing of the entire application by running previously created tests). By and large, automation scripts can be used in other types of testing, but until they reach at least 40% coverage, there will be no principal benefit from this.

However, if the test plan is planned to be used not only by automation testers but also by manual testers, then you need to consider the entire testing strategy (not just automation) and select or mark the used/desired types of testing.

3.4 The sequence of the testing efforts.

How the preparation for testing, the estimation of deadlines, and the collection and analysis of testing statistics will be carried out.
If you cannot imagine what to write in this paragraph, it can safely be skipped.

3.5 Test completion criteria

Briefly describe when testing is considered complete within a given release. If there are any specific criteria, describe them.

Summary:

Writing a test plan is necessary; without it, all further automation will be chaotic and haphazard. If in manual testing (in very poor manual testing) you can do without a test plan and test cases and use monkey-testers with relative success, in automation this will not work.

4. The definition of a primary task.

After choosing a strategy and drawing up a test plan, it is necessary to choose a set of tasks with which to start test automation.

The most common types of tasks set before automation:

• Full automation of acceptance (smoke) testing, the type of testing carried out first, as soon as the build is received by the testing department. Smoke testing checks the functionality that should work always and under any conditions; if it does not work, then, by agreement with the developers, the build is considered unacceptable for testing.

• Maximizing the number of defects found. In this case, first select those modules (or aspects of functionality) of the system that are most often subject to changes in logic, and then select the most routine tests (that is, tests where the same steps are performed, with small variations, on a large amount of data).
• Minimizing the "human factor" in manual testing. Again, select the most routine tests that demand the most attention from the tester (and are, at the same time, easily automated): for example, user interface checks (verifying the names of 60 columns in a table), checking the contents of a combo box with 123 elements, checking the export of a table on a web page to Excel, etc.
• Finding what crashes the system most. Here you can apply "random" tests.

At the very beginning of an automation rollout, I recommend setting the task of acceptance-testing automation, as it is the least time-consuming. Solving this task will allow you to run acceptance testing on the very next accepted build.

The main criterion for smoke tests should be their relative simplicity combined with mandatory verification of the project's critical functionality.

It is also assumed that smoke tests will be positive (positive tests check the system's correct behavior, while negative tests check how it behaves on invalid input), so as not to waste time on unnecessary checks.

Summary:
When making a list of primary automation tasks, it is logical to automate smoke tests first. In the future, they can be included in the project and run on each build. Because of their limited number, these tests should not slow the build down much, yet each time you will know for sure whether the critical functions still work.
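One way to keep smoke tests a small, fast subset is to tag them and let the runner select only the tagged ones (in pytest: `@pytest.mark.smoke` plus `pytest -m smoke`). The sketch below shows the same idea in plain Python so it stays self-contained; all names are illustrative.

```python
# Tag critical-path tests so CI can run only them on every build.
def smoke(fn):
    fn.is_smoke = True           # label the test as part of the smoke set
    return fn

@smoke
def test_application_starts():
    app = {"status": "running"}  # stand-in for real startup code
    assert app["status"] == "running"

def test_rare_edge_case():
    # Unlabeled: runs only in the full regression suite.
    assert 2 + 2 == 4

def select_smoke_tests(tests):
    """Return only the tests tagged as smoke."""
    return [t for t in tests if getattr(t, "is_smoke", False)]

suite = select_smoke_tests([test_application_starts, test_rare_edge_case])
assert [t.__name__ for t in suite] == ["test_application_starts"]
```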

5. Writing test cases for selected tasks.

With regard to test cases, it is customary to divide the testing process into two parts: testing by ready-made scenarios (test cases) and exploratory testing.

With exploratory testing everything is quite clear: it exists in two variations, either the study of new functionality without special preliminary preparation, or banal monkey testing by monkey-testers.

Scenario testing implies that time has been spent and test scenarios covering as much of the project's functionality as possible have been created.

The most reasonable approach, from my point of view, is a combination of the two, in which new functions and modules are tested in an exploratory style, trying both likely and unlikely scenarios, and at the end of testing, test cases are created and then used for regression testing.

Three options for further use of test cases, beyond the obvious:

• Generate checklists for the project's modules from the test cases; checking will be faster, while the main problem areas are still covered.
• Training newcomers: a tester who has just come to the project can study it through the test cases, as they capture many non-obvious aspects of the application.
• Further use as a basis for automation scripts. If you deploy test automation using a systematic approach, writing and then reusing test cases is quite logical: in effect, a test case is a ready-made automation script.

I will not describe the principles of writing test cases in detail, as there are many materials on this topic on the web; I will describe them briefly.

A good test case consists of the following items:

1. Title (description): a very brief description of what the test checks.
2. Preliminary state of the system: a description of the state the system should be in at the beginning of the test case.
3. Sequence of steps: sequentially described actions that check the goal stated in the title.
4. Expected result: the state of the system we expect after passing through the sequence of steps of the test case.
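These four items can be captured in a small data structure, which also makes the later conversion of test cases into automation scripts more mechanical. A sketch with field names of my own choosing:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    title: str                                 # 1. what the test checks
    preconditions: str                         # 2. preliminary state of the system
    steps: list = field(default_factory=list)  # 3. sequence of steps
    expected_result: str = ""                  # 4. state expected after the steps

case = TestCase(
    title="Addition of two positive numbers",
    preconditions="Calculator is open, display shows 0",
    steps=["Press 2", "Press +", "Press 3", "Press ="],
    expected_result="Display shows 5",
)
assert len(case.steps) == 4
```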

For convenient storage of test cases there are many solutions; of those I have used, TestLink performed very well, and the best was sitechco.ru, a convenient free system for creating, storing, and tracking test cases.

Summary:
For further test automation, it is necessary to write test cases for the tasks defined in item 4. They will serve both as the beginning of normal regression testing and as a basis for future automation scripts.
As a recommendation to the tester who plans to write test cases: read about the pairwise technique, equivalence classes, and test design techniques in general. Having studied these topics at least superficially, it will be much easier to write good and useful test cases.

7. Selection of tests for automation.

So, by the current stage we have formed a test plan and described part of the functionality of the modules as test cases. The next task is to select the necessary tests from the available variety of test cases. Right now you only have test cases prepared for smoke testing, but after a few iterations of test-case development the project will have many more, and not all of them make sense to automate.

The following things are very difficult to automate:

1. Checking that a file opens in a third-party program (for example, checking the correctness of a document sent to print).
2. Checking image content (there are programs that allow you to partially solve this problem, but for a simple set of tasks it is better not to automate such tests and to leave them for manual testing).
3. Checks related to Ajax scripts (different applications have their own solutions, but in general Ajax is much more difficult to automate).

Getting rid of monotonous work.

As practice shows, checking even one function may require several test cases. For example, suppose we have an input field that accepts any two-digit number. It can be checked by 1-2 tests: "2 characters", "1 character". If you check carefully, add a test for the absence of a value, for zero, for the boundary values, and a negative test with character input. The advantage of automation scripts over manual testing here is that once we have one automation script that checks data input in the field, we can easily increase the number of checks by varying the input parameters.
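The two-digit-field example maps naturally onto a data-driven test: one script, a table of inputs. The validator below is a hypothetical implementation added only to make the sketch runnable (in pytest you would express the same table with `@pytest.mark.parametrize`):

```python
def validate_two_digit(value: str) -> bool:
    """Hypothetical validator: accepts whole numbers from 10 to 99."""
    return value.isdigit() and len(value) == 2 and not value.startswith("0")

# One automation script, many checks: grow coverage by adding rows.
CASES = [
    ("42", True),    # "2 characters"
    ("7",  False),   # "1 character"
    ("",   False),   # absence of a value
    ("00", False),   # zero
    ("99", True),    # boundary value
    ("ab", False),   # negative test: character input
]

def test_two_digit_field():
    for value, expected in CASES:
        assert validate_two_digit(value) is expected, value

test_two_digit_field()
```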

By and large, automation scripts should cover the most tedious and monotonous part of testing, leaving the testers room for exploratory testing. Accordingly, this should also be taken into account when choosing test cases for automation.

The simplicity of the tests.

The last important criterion for selecting test cases for automation is their relative simplicity. The more diverse the steps in a test, the worse the test case: the more difficult it will be to automate, and the harder it will be to find the defect when this automation script fails at a run.

Try to choose small test cases for automation, gradually gaining experience and automating more and more complex test cases, until you determine which test length is optimal for you.

8. Test design for automation.

Test cases selected for automation will most likely need to be refined and corrected, since test cases are usually written in plain human language, while test cases intended for automation should be supplemented with the necessary technical details for ease of translation into code (over time, an understanding will come of which tests can be described in a living language and which should be described in detail and precisely at the test-case creation stage).
Accordingly, the following recommendations can be formed for the content of test cases intended for automation:

1. The expected result in automated test cases should be described very clearly and concretely.

• Bad: Result: the Forms page opens.
• Good: Result: the Forms page opens; the page contains a search form <input type="text" placeholder="Search">, a css=div.presentations_thumbnail_box element, and a link=Notes element.

2. Take into account the synchronization between the browser and the application running the tests.

Let's say you click a link in a test and then perform the next action on a new page. The page may take a long time to load, and the application, without waiting for the desired element to appear, will fail with an error. Often this is easily solved by setting a wait for the element to load.

• Bad: click the "Forms" link in the top menu. Confirm the changes.
• Good: click the "Forms" link in the top menu. Wait for the form with the text "Do you want to save changes?" to appear. Click the "OK" button.
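The "good" variant relies on an explicit wait: poll for a condition with a timeout instead of acting immediately. In Selenium this is `WebDriverWait(...).until(...)`; the self-contained sketch below shows the underlying idea with a simulated slow-loading dialog.

```python
import time

def wait_for(condition, timeout=10.0, poll=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Simulated page: the confirmation dialog appears only after a short delay.
appears_at = time.monotonic() + 0.3
def dialog_text():
    return "Do you want to save changes?" if time.monotonic() >= appears_at else None

# Good test step: wait for the confirmation form, then act on it.
text = wait_for(dialog_text, timeout=2.0)
assert text == "Do you want to save changes?"
```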

3. Do not hard-code values in the test case unless it is necessary.

In most cases, the appropriate data is determined when the test environment is created, so it is best to select values when creating the automation scripts.

• Bad: open the slide "slide 1_11".
• Good: open the first slide of the presentation.

4. Automated test cases should be independent.

There are exceptions to any rule, but in most cases it should be assumed that we do not know which test cases will be executed before and after our test case.

• Bad: from the file created by the previous test…

5. Carefully study the documentation of the tools used.

This way you can avoid the situation where a test case becomes a false positive due to a wrong command, i.e., it passes successfully even when the application is not working correctly.

Summary:
A correctly written test case intended for automation will look much more like a miniature technical specification for the development of a small program than like a human-readable description of the correct behavior of the application under test.

8. Preparation of test data.
In this context, test data refers to the state of the application at the time the tests start. Given that the values and parameters used in automation scripts are almost always hard-coded and very rarely flexible, the logical conclusion is that the scripts are unlikely to run correctly in just any state of the application. For example, you are unlikely to run an automation script that checks the editing of shared articles in the production system, where customers see those articles, or in a completely clean system, where there are no articles to edit.

In the first case, the automation script can mess things up; in the second, it simply cannot run.
Accordingly, for proper use of automation scripts, the application must be brought into the state the tests expect. There are no special rules here; everything is intuitively clear if we proceed from the tests. The only remark: usually, automation scripts are run as isolated suites of independent tests, and tests within one suite are often run in random order. Accordingly, when writing automation scripts, try to write them so that after any one test completes, any other test in the suite can still run.

• Bad: the test refers to a pre-prepared file and deletes it in the process. Another test that runs next also tries to access the already deleted file, and an error occurs.
• Good: the test either creates the file at the beginning of its work or recreates it upon completion. Therefore, the file exists both before and after the test is completed.
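The "good" variant is the classic setup/teardown pattern: each test creates the data it needs and restores the state afterwards, so run order does not matter. In pytest this would be a fixture; the sketch below shows it inline with the standard library, and the file contents are invented.

```python
import os
import tempfile

def run_file_test(check):
    """Create the test file, run the check, and always clean up afterwards."""
    fd, path = tempfile.mkstemp(suffix=".txt")
    try:
        with os.fdopen(fd, "w") as f:
            f.write("test data")       # setup: known initial state
        check(path)                    # the actual test body
    finally:
        if os.path.exists(path):
            os.remove(path)            # teardown: safe for any next test

def check_file_has_data(path):
    with open(path) as f:
        assert f.read() == "test data"

run_file_test(check_file_has_data)     # every run starts from a known state
```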

After following all the above recommendations and configuring, writing, and running automation scripts, you have done the most important part of the work: you have begun to deploy test automation on the project.
There will be many tasks ahead (creating test cases, configuring tools, generating appropriate and informative reports, and much more) until auto tests become an integral part of the testing process on the project, but the first step has been taken.
The following describes the tasks the automation engineer faces in each new test cycle: maintenance, or sustaining, tasks. You will have to return to them quite often, until thinking about and analyzing automation scripts becomes a habit.

Development and maintenance of the testing
automation process.

1. Evaluation of the effectiveness of automation.

At some point, almost all professionals involved in test automation face the question of the effectiveness of automation in individual modules, functions, and the project as a whole. Unfortunately, it often happens that the costs of developing and supporting test automation do not pay off, and ordinary manual testing is more useful. To avoid discovering this at a critical moment, it is better to start measuring the effectiveness of automation scripts from the second or third cycle of their development.

Effectiveness is evaluated in two logical areas:

2. Evaluation of the effectiveness of automation in general.

The effectiveness of test automation compared with manual testing can be very roughly calculated by the following algorithm:
1. Estimate the time required for testers (or programmers, if they do the automation) to develop a set of automation scripts covering a specific module or project function; call it TAuto.

2. Estimate the time required for testers to develop the test cases and checklists that will be used in testing this functionality; call it TMan.
3. Calculate (or estimate, if the functions have not been developed yet) the time spent on one pass of testing the functions manually; call it TManRun.
4. Estimate the time spent on reworking the auto tests when the functions change; call it TAutoRun.
5. Calculate the time spent on analyzing the results of the automation scripts; call it TAutoMull.
6. Very tentatively estimate the planned number of iterations until the product is completed (if there is accurate data on the number of development cycles, of course use that data); call it N.
7. Approximately estimate the number of product builds requiring re-testing within one release; take the average as R.

Now we derive the following formulas:

TManTotal = N*TMan + N*R*TManRun
TAutoTotal = TAuto + N*TAutoRun + N*R*TAutoMull

Accordingly, if TManTotal >= TAutoTotal, automation makes sense.

This estimate can be made when planning work on a new module or a large new piece of functionality for which you do not yet have data on the effectiveness of automation, in order to determine whether the costs will pay off.
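The two formulas are easy to wrap in a helper so you can play with the numbers. The variable names follow the book; the sample figures in the call are made up for illustration.

```python
def automation_pays_off(t_auto, t_man, t_man_run, t_auto_run, t_auto_mull, n, r):
    """True if TManTotal >= TAutoTotal, i.e. automation makes sense.

    TManTotal  = N*TMan  + N*R*TManRun
    TAutoTotal = TAuto + N*TAutoRun + N*R*TAutoMull
    """
    t_man_total = n * t_man + n * r * t_man_run
    t_auto_total = t_auto + n * t_auto_run + n * r * t_auto_mull
    return t_man_total >= t_auto_total

# Made-up example: 40h to build the suite, 8h of test-case work per cycle,
# 6h per manual pass, 4h of script rework per cycle, 1h of result analysis
# per run, 10 iterations, 3 builds re-tested per release.
print(automation_pays_off(40, 8, 6, 4, 1, 10, 3))  # True: 260h manual vs 110h automated
```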

3. Evaluating the effectiveness of automation scripts.

Periodically (ideally, once per test cycle) it is necessary to evaluate the effectiveness of individual automation scripts.
There are several reasons to conduct such a review:

1. Dynamic changes in functionality.

It often happens that developers undertake to change one of the project modules already covered by your automation scripts. Naturally, when the logic of this module's functionality changes, the tests begin to return errors, and you start rewriting them to work in the new environment. Then the logic changes again. And so on.

At this point, you need to stop and assess (communicating with the developers) the probability of further changes to the module's logic in the near future: what will change, and what is not planned yet?

Accordingly, whatever will keep changing dynamically should not be reworked before all the work is completed; instead of automation, temporarily test this area manually.

If no further work is planned and the module will be stable for some time, it is correct to adapt the tests to the new conditions.

2. Duplication of work.

Sometimes new functionality is added to a long-standing module, and new test cases and automation scripts are written for it. And sometimes the new tests heavily overlap, or even duplicate, the existing ones. Keep this in mind and periodically check whether there are meaningless duplicates that only add time to each build.

3. Optimizing execution time.

At first, while there are still few automation scripts, they execute quickly enough on each build, but as their number grows, the time per test build increases. When errors are found, or broken tests appear on new functionality, you have to restart the test assembly again and again, each time waiting for it to finish.

Periodically it is worth stopping to review whether all the executed automation scripts are really necessary.

When developing tests, a good solution is to add severity markers, for example: critical, minor, trivial. Then configure the tools to run specific groups of tests for specific tasks. For example, for full regression testing run tests with all the marks; after finding an error and fixing it, run only the automation scripts of a specific set, so as not to wait too long.
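With pytest the same grouping is done with markers and `pytest -m critical`; the plain-Python sketch below shows the selection logic itself. Test names are invented.

```python
# Each test carries a severity marker; a run selects by a set of markers.
SUITE = [
    ("test_login",        "critical"),
    ("test_checkout",     "critical"),
    ("test_column_names", "minor"),
    ("test_tooltip_text", "trivial"),
]

def select(suite, severities):
    """Pick the tests whose marker is in the requested severity set."""
    return [name for name, severity in suite if severity in severities]

full_regression = select(SUITE, {"critical", "minor", "trivial"})  # all marks
quick_recheck = select(SUITE, {"critical"})                        # after a fix

assert len(full_regression) == 4
assert quick_recheck == ["test_login", "test_checkout"]
```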

4. Test run logic.

To improve the efficiency of automation, it is correct to think through the mechanisms for launching automation scripts carefully and in detail.

Most popular models:

The priority of the tests:

• Critical
• Major
• Minor
• Trivial

The model described above is used to manage test sets.

Module membership of tests:

• Module 1
• Module 2
• Module 3
• ...

If you have written new automation scripts, or rewritten old ones, for a particular module, it makes no sense to run all the scripts on every run.

Run or do not run:

• Run
• Do not run

Sometimes we know for certain that an automation script will not work properly (the function has been changed but the script has not been rewritten, or the script itself is broken), or we know about the error the script catches but do not plan to fix it soon. In this case, the script failing on every run may be inconvenient for us. To handle this, you can embed a label in the script; if you specify that label in CI, the script will not run. After fixing the problem, you can enable the script again.
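This skip-label mechanism is built into most frameworks (for example @Disabled in JUnit 5 or @Ignore in JUnit 4). The plain-Java sketch below only illustrates the idea; the label name is invented for the example.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch: scripts carrying a label listed in CI's exclusion set
// are filtered out before the run.
public class SkipLabels {

    record Script(String name, Set<String> labels) {}

    static List<Script> runnable(List<Script> all, Set<String> excludedLabels) {
        return all.stream()
                  // keep only scripts that share no label with the exclusion set
                  .filter(s -> s.labels().stream().noneMatch(excludedLabels::contains))
                  .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Script> scripts = List.of(
            new Script("checkout flow", Set.of()),
            new Script("old popup check", Set.of("known-bug-1234")));

        // CI is configured to exclude scripts labeled with the known bug.
        System.out.println(runnable(scripts, Set.of("known-bug-1234")).size()); // 1
    }
}
```

Once the bug is fixed, removing the label (or clearing the CI exclusion list) brings the script back into the run without touching anything else.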
Start time:

• According to a schedule — for example, build the current test/stable build at 5 am every day. By the time you come to work, the test run report will be ready.
• On application change — restart automation scripts when a new change set appears in the branch of the repository currently under test.
• On test change — similarly, when automation scripts are updated in the repository.
• On demand — a standard manual run.

Summary:

If you are keen to implement test automation on a project, after a few development cycles it is useful to spend time calculating the effectiveness of the ongoing automation, and to recheck the results periodically. Make a habit of regularly assessing the effectiveness of your implementation.

9. Estimation of task execution time.

Before each test cycle (release) begins, managers estimate the time planned for manual testing and for automation. The time planned for automation in a test cycle becomes all the more predictable if the project has large automation script coverage. From the point of view of estimating time costs, it is common to divide the tasks planned for automation into two types:

10. Research.
Research tasks are those whose execution time is very difficult for us to estimate. This can happen for various reasons: introducing and investigating a new automation tool, a type of test not previously used in the project, the very beginning of automation on the project, or an estimate made by a person inexperienced in automation.

If the task has the characteristics of research, ask the following questions to assess it:

• Is it possible to automate this task at all? Perhaps the answer is no, and it will simply need to be returned to manual testing.

• What is the best tool to use for automation? You may need a new tool and have to devote time to learning it, or an existing one may be old enough that you will have to look for workarounds to use it — which also takes time.

By and large, an accurate estimate of such tasks is impossible; it will always be approximate. You can use the following techniques:

• Approach estimation from the point of view of "How much time are we willing to spend on this task?". Set a time frame that must not be crossed. If the problem clearly cannot be solved within the allotted frame, perhaps it should not be solved at all.
• Define the criteria by which the task will be considered complete and work on it should stop.

11. Replicated.
If we can collect statistics on the execution of tasks similar to the one before us, then the task is replicated. Typically, these tasks include creating automation scripts without introducing new script types, extending coverage, and regular maintenance of scripts and infrastructure.

Such tasks are quite simple to estimate, because similar ones have already been performed and we know their approximate execution time. The following help us:

• Preliminary statistics on the execution time of similar automation tasks.
• Statistics on the risks we faced when performing such tasks.

Summary:
The wider the automation script coverage of the project's functions, the more accurate your estimates of the time planned for automation will be.

Features of test cases for automation.
Often (and on some projects, as a rule) automation is applied to test cases originally written in plain human language (and, in principle, suitable for manual execution) — i.e., ordinary classic test cases.
Still, there are several important points to take into account when developing (or refining) test cases intended for subsequent automation. The main problem is that a computer is not a human: a test case cannot operate with "intuitive descriptions", and automation specialists quite rightly do not want to spend time supplementing such test cases with the technical details needed for automation — they have enough tasks of their own. Below is a list of recommendations for preparing test cases for automation:

• The expected result in automated test cases should be described very clearly, indicating the specific signs of its correctness.
Bad:
The standard search page is loaded.

Good:
The search page is loaded: title = "Search page", there is a form with an input of type="text" and an input of type="submit" with value="Go!", the logo "logo.jpg" is present, and there are no other graphic elements ("image").
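The difference can be sketched in Java, with a plain map standing in for the loaded page. This is a simplification for illustration only: in real UI automation these would be WebDriver lookups, and the property names here are invented.

```java
import java.util.Map;

// Sketch: a vague check vs. a check of the specific signs of correctness.
public class ExplicitExpectations {

    // Bad: "the standard search page is loaded" — almost anything passes.
    static boolean vagueCheck(Map<String, String> page) {
        return page.containsKey("title");
    }

    // Good: every concrete sign from the test case is verified.
    static boolean explicitCheck(Map<String, String> page) {
        return "Search page".equals(page.get("title"))
            && "text".equals(page.get("inputType"))
            && "Go!".equals(page.get("submitValue"))
            && "logo.jpg".equals(page.get("logo"))
            && "0".equals(page.get("otherImages"));
    }

    public static void main(String[] args) {
        // A broken page: there is a title, but it is an error page.
        Map<String, String> broken = Map.of("title", "Error 500");
        System.out.println(vagueCheck(broken));    // true  — the bug slips through
        System.out.println(explicitCheck(broken)); // false — the bug is caught
    }
}
```

The vague check passes on a broken page; the explicit one fails, which is exactly what we want from an automated expected result.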

• Since a test case may be automated with various tools, describe it while avoiding solutions specific to a particular tool.

Bad:
1. Click on the "Search" link.
2. Use clickAndWait to synchronize timing.

Good:
1. Click on the "Search" link.
2. Wait for the page to load.

• Continuing the previous point: the test case may be automated for execution on different hardware and software platforms, so do not initially prescribe anything specific to a single platform.

Bad:
1. Send a WM_CLICK message to the application in any of the visible windows.

Good:
1. Pass the input focus to any of the non-minimized windows of the application (if there are none, restore any of the windows).
2. Emulate a "left mouse button click" event for the active window.

• One of the persistent unexpected problems is time synchronization between the automation tool and the application under test: in situations that are clear to a human, the test automation tool may react incorrectly, "not waiting" for a certain state of the application. This leads to test cases failing on a correctly running application.

Bad:
1. Click on the link "Expand data".
2. Select "Unknown" from the list that appears.

Good:
1. Click on the link "Expand data".
2. Wait until the data is loaded into the "Extended data" list (select id="extended_data"): the list will enter the enabled state.
3. Select "Unknown" in the "Extended data" list.

• Do not tempt an automation specialist to hard-code constant values into the test case. If you can clearly describe the meaning and/or value of a variable, do so.

Bad:
Open http://application/.

Good:
Open the main page of the application.
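In Java test code, one way to express this is to resolve such values by meaning rather than hard-coding them. The property name and default below are purely illustrative.

```java
// Sketch: the test case says "open the main page", and only one place in the
// code knows what URL that currently means.
public class BaseUrls {

    // Overridable per environment, e.g. -Dapp.baseUrl=https://staging.example.test/
    static String baseUrl() {
        return System.getProperty("app.baseUrl", "http://application/");
    }

    static String mainPage() {
        return baseUrl();
    }

    static String searchPage() {
        return baseUrl() + "search";
    }

    public static void main(String[] args) {
        System.out.println(mainPage());
        System.out.println(searchPage());
    }
}
```

When the deployment URL changes, only the property (or the single default) changes, and none of the test cases need to be touched.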

• If possible, use the most universal ways of interacting with the application under test. This will significantly reduce test case maintenance time if the set of technologies used to implement the application changes.

Bad:
Send the "Search" field a sequence of WM_KEY_DOWN, WM_KEY_UP events, as a result of which the search query must appear in the field.

Good:
Emulate keyboard input of the value into the "Search" field (pasting a value from the clipboard or assigning the value directly is not appropriate).

• Automated test cases should be independent. There are exceptions to any rule, but in most cases it should be assumed that we do not know which test cases will be executed before and after ours.

Bad:
Read from the file created by the previous test case.

Good:
1. Set the "Use stream buffer file" checkbox to the checked state.
2. Activate the data transfer process (click on the "Start" button).
3. Read from the buffer file.
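In code, independence usually means that each test builds its own fixture instead of consuming another test's leftovers. A plain-Java sketch follows (test frameworks express the same idea with setup hooks such as JUnit's @BeforeEach; the file contents here are invented):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: the test creates the file it needs itself, so it no longer depends
// on some "previous test" having run first.
public class IndependentFixture {

    // Each test prepares its own input data...
    static Path createBufferFile(String content) throws IOException {
        Path file = Files.createTempFile("stream-buffer", ".dat");
        Files.writeString(file, content);
        return file;
    }

    // ...and the check reads only what this test created.
    static String readBuffer(Path file) throws IOException {
        return Files.readString(file);
    }

    public static void main(String[] args) throws IOException {
        Path buffer = createBufferFile("payload");
        System.out.println(readBuffer(buffer)); // payload
        Files.deleteIfExists(buffer);           // clean up after ourselves
    }
}
```

Such a test can run in any order, alone or in parallel, because it owns its fixture from creation to cleanup.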

• It is worth remembering that an automated test case is a program, and good programming practices should be followed at least to the level of avoiding "magic values", hard-coding and the like.

Bad:
if (dateValue == '2015.06.18')

Good:
if (dateValue == date('Y.m.d'))
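The snippet above looks PHP-like; the same contrast in Java (the book's own language) might be sketched as follows. The method names are illustrative.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Sketch: comparing against today's date computed at run time, instead of a
// magic value that silently goes stale after 2015.06.18.
public class NoMagicDates {

    static final DateTimeFormatter FORMAT = DateTimeFormatter.ofPattern("yyyy.MM.dd");

    // Bad: passes only on one calendar day, then fails forever.
    static boolean badCheck(String dateValue) {
        return dateValue.equals("2015.06.18");
    }

    // Good: "the field shows today's date", whatever today is.
    static boolean goodCheck(String dateValue) {
        return dateValue.equals(LocalDate.now().format(FORMAT));
    }

    public static void main(String[] args) {
        String today = LocalDate.now().format(FORMAT);
        System.out.println(goodCheck(today)); // true on any day
        System.out.println(badCheck(today));  // false, unless today is 2015.06.18
    }
}
```

The magic-value version is a time bomb: the test starts failing (or, worse, passing for the wrong reason) the moment the hard-coded date no longer matches reality.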

• You should carefully study the documentation of the automation tool you use, to avoid a situation where, because of an incorrectly chosen command, a test case becomes false-positive, i.e., passes when the application is not working properly.

So-called false-positive test cases are perhaps the worst thing that happens in test automation: they give the project team false confidence that the application is working correctly; in effect, they hide defects instead of detecting them.

Since for many novice testers the first educational tool for test automation is Selenium IDE, I will give an example of its use. Suppose that at some step of the test case it was necessary to check that the checkbox with id=cb is selected (checked). For some reason the tester chose the wrong command, and now this step checks that the checkbox allows its state to be changed (enabled, editable), not that it is selected.

Bad (incorrect command):
verifyEditable id=cb

Good (the correct command):
verifyChecked id=cb
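The false positive is easy to see if we model the checkbox in Java. WebDriver exposes the same distinction as isEnabled() versus isSelected(); the class below is an illustration, not a real API.

```java
// Sketch of why verifyEditable passes where verifyChecked would fail:
// "editable" and "checked" are independent properties of a checkbox.
public class CheckboxStates {

    record Checkbox(boolean enabled, boolean checked) {}

    // Wrong check: "can its state be changed?" — says nothing about selection.
    static boolean verifyEditable(Checkbox cb) {
        return cb.enabled();
    }

    // Right check: "is it actually selected?"
    static boolean verifyChecked(Checkbox cb) {
        return cb.checked();
    }

    public static void main(String[] args) {
        // The app is broken: the checkbox should be checked but is not.
        Checkbox cb = new Checkbox(true, false);
        System.out.println(verifyEditable(cb)); // true  — false positive, bug hidden
        System.out.println(verifyChecked(cb));  // false — bug detected
    }
}
```

With the wrong command the test stays green on a broken application, which is exactly the false-positive situation described above.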

And finally, consider a mistake that, for some mystical reason, a good half of novice automation engineers make: replacing a verification with an action, and vice versa. For example, instead of checking the value of a field, they change it; or, instead of changing the state of a checkbox, they check it. There will be no "good/bad" examples here, because there is no good option — this simply must not happen; it is a gross mistake.

Chapter 3. Organization of the automation process on
the project.

How many automation engineers does a project need?

In fact, the question is quite complex. It is often asked in interviews.

Let's try to understand it and answer.

If the automation process is created at the very beginning


of the project.
So, imagine this picture: we have five developers on the project — two backend developers and three front-end developers. All of them are constantly writing code for the project, and we decide to introduce test automation in this team. There is also some number N of testers who do manual testing and write test scripts for automation.
If we stick to the correct testing pyramid, we will need only two automation engineers, one on the backend and one on the frontend, and ideally they will share knowledge with each other.
Five developers is a large number, and they will create new functionality quickly. Automation engineers should watch how they do it. If the automation engineers are involved in the product, they will be inside the project, and working directly with the development team works out very well.

For example, consider a situation: imagine a popup that appears on a web page. For the front-end automation tester it will be very useful to learn how this popup is created by the developer.

At the same time, he may learn from the developer that as many as three different popups were created. It will then be clear to the automation engineer that he should not bind his scripts to a single popup, since another popup may carry a different business function and have a different location, different elements, and different locators. Similarly for the backend automation engineer: if he starts working with the development team, he learns how developers build integrations, what methods they have, what they throw, and what the pitfalls are on the backend. Thus, by dividing automation engineers in this way, we can ensure that we create the right testing pyramid and maintain it.

The process looks like this: a business analyst communicates with the tester, while a disgruntled customer and a Product Owner stand by, watching and not understanding what is happening. The development manager is in charge of everything, and the backend developer and the front-end developer argue about how and what to do and cannot find a compromise. In general, a complete mess.
Instead, we put two developers and one automation engineer together at the beginning of the project. Developers and automation engineers can then start writing automated tests together: developers write unit tests, and automation engineers write integration and user interface tests.

We need to understand one thing: the higher we go in the testing pyramid, the greater the complexity of creating tests, and the longer their execution time and the higher their cost.

Provided that automation starts at the beginning of the project,
developers and automation engineers can divide the creation of
automated tests among themselves, agreeing on the division of
responsibility.

If the automation process is created in the middle of the
project.

Automation starts at a certain point in the timeline, when the project already has working functionality and some number of tests. These tests are called regression tests.
An important question then arises: you need to decide what should be automated first, regression or new functionality. The question is not easy, as the project already has implemented functionality and development continues.

1) Regression.

The QA team throws all its effort into testing the new functionality, while the automation engineer writes automation scripts for the established functionality.

The difficulties of this approach are that:

• The automation engineer does not know the product well.
• Regression tests accumulate; in other words, the regression suite will always grow.
For example, we have test scripts created by manual testers. The automation engineer creates automation scripts based on them, and it turns out that regression automation lags behind: new functionality appears quickly, and the team quickly creates new tests to validate it.

At some iteration it turns out that, of everything included in the regression suite, automation scripts cover only a part, and a comparison makes it clear that this part is smaller than the new functionality, because new functional tests are written faster than automation scripts. It becomes clear that the automation engineer will not have time to cover all the regression tests. You should try to create automation scripts faster than you create tests for new functionality that will later go into automation.

2) New functionality.

Let's say we have a testing team that writes test scripts and immediately passes them to automation, and the automation engineer begins writing automation scripts. Only one question remains: who will do the regression testing? If regression is not covered by automation, it will keep growing. Here a rule helps: if we create fewer tests for new functionality than we automate, the automation engineer can keep up with writing automation scripts for both new and old functionality, and quite possibly they will even reach the point where automation scripts are created in parallel with design.

To understand whether to start test automation with new functionality or with regression, you need to answer one question: where do you find more defects — in new functionality or in regression?

The main task of automation, as of the whole QA department, is to deliver a quality product.
Accordingly, if you find more defects in new functionality, that is exactly where automation should start, and vice versa.
A collective approach to test automation is very important, and programmers should be interested in it. Do not concentrate everything on one person on the project: one person cannot manage the whole process. This is a very important axiom.

Conclusion.
I hope that, having read this book to the end, you now understand the basics of test automation. In this chapter I offer books and websites you can use to continue your education.
I recommend reading the following Java books, which I have personally read and refer to very often:
"Effective Java" by Joshua Bloch
"Implementation Patterns" by Kent Beck
"Growing Object-Oriented Software, Guided by Tests" by Steve Freeman and Nat Pryce
"Core Java, Volume I: Fundamentals" by Cay S. Horstmann and Gary Cornell
"Covert Java: Techniques for Decompiling, Patching, and Reverse Engineering" by Alex Kalinovsky
"Java Concurrency in Practice" by Brian Goetz
"Mastering Regular Expressions" by Jeffrey Friedl
"Design Patterns: Elements of Reusable Object-Oriented Software" by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides


Other books that cover similar topics, and that I highly recommend reading:

"Clean Code: A Handbook of Agile Software Craftsmanship" by Robert C. Martin

"Code Complete: A Practical Handbook of Software Construction" by Steve McConnell

Recommended websites

For general Java news and modern conference videos, I recommend the following web page:

https://www.infoq.com/java/

Let me remind you that I have a website, http://test-engineer.site, where over time I plan to add more information and links to other resources. I will also add more exercises and examples to the site.

