
A ROADMAP

FOR TEST
AUTOMATION
Know the right shortcuts to take on
the journey to quality software

Aurimas Adomavicius, President
Ignas Bobinas, Senior Software Engineer
Vytautas Paulauskas, Director of Engineering
Nikolaj Tolkaciov, Testing Practice Lead
DATA-INFORMED STRATEGY. DEDICATED TEAMS. PRODUCTS THAT DELIVER RESULTS.
01 Reduce friction running automated tests
02 Boost confidence in the test suite
03 Why end the develop-first, test-late approach
04 Require a test-first approach
05 Benefit from two proven shortcuts
06 Maximize test automation efforts


REDUCE FRICTION RUNNING
AUTOMATED TESTS
Various principles and practices contribute to software quality. For larger teams and complex
products, test automation becomes a central focus. Automated testing holds the promise of
expediting software validation and increasing testing coverage. The effort is not without its
challenges: teams struggle with complexity, ill-defined requirements, a constantly changing code
repository, and the leap from manual to automated tests.

Test automation includes suites of verification, functional, regression, and performance tests that run
with little or no manual input from the development staff. Automated testing can be done in each
development context (i.e., unit, integration, system, delivery). As software becomes expansive and
complex, test automation becomes one of the most effective practices to increase the functional and
structural quality of the application.

As with many aspects of programming and configuration, it can be exasperating to implement
and maintain a suite of automated tests properly. Many teams experience difficulties justifying
the additional effort for test automation. Frustrations arise as energy is diverted away from
programming and manual testing toward automation, which requires extra effort and produces limited
results. Motivation wanes, with teams feeling as though they’re wasting time on work that only seems
to hurt, not help, the velocity of the product/service pipeline.

Some software managers suffer from the misconception that
automated testing always equals better testing, and they mandate
that all testing must be automated. Such mandates have harmed
many projects.

– Bret Pettichord, Lessons Learned in Software Testing

If you’ve had some of these negative experiences, it may be difficult to believe that there is hope.
Some teams have actually done it well. The backbone of the initial successes involves careful
thought, a defined plan, and a progressive approach to implementation. Such teams take additional
steps to build upon the early wins. Today, they enjoy building better quality software in less time
than before.

What’s their secret? Everything hinges on the choices made in the approach to test automation.
While not easy, it is quite feasible for most teams to automate software tests successfully. The answer
lies in a shift in mindset and approach, adopting the guidelines followed by teams who have had
significant success in automating their software tests.
Rather than dispensing test automation tips blindly, this paper presents guidelines
for successful automation. We’ll provide an overview of common setbacks, paired with a test-first
approach to software development and a series of intermediate steps toward implementing it.
The methods covered provide a roadmap for teams to transition progressively to a test-first
approach and move forward with confidence.
BOOST CONFIDENCE
IN THE TEST SUITE
A useful suite of tests must reduce the overall effort in verifying that a product exhibits high
structural and functional quality. A primary driver for any development team is that the test suite
verifies high quality for each successive build. The test suites must be readily maintainable to
prevent future regression failures.

Consider a common approach to testing. During feature development, a team runs a particular test
case only a few times. After integrating the feature into a build, the test case is run after each code
change to verify functional integrity and compliance aligned to a business rule or use case. Whether
maintained by the build team or handed off externally in the future, the feature set will expand
gradually with the regression suite expanding accordingly. Maintaining the regression test suite
benefits the team, boosting the level of confidence in the build’s short-term and long-term quality.

A test suite defines the behavior expectations of the system when used in various test cases.

• System-level test cases define user stories.
• Unit tests define details on business rules.
• Integration tests define contracts and integration flows that indicate all major dependencies.

While conventional requirements specification often becomes dated quickly, test cases are more
dynamic and correspond more closely with all the aspects of the software design. As the team
maintains the tests, they remain current with each build. For this reason alone, a good test suite
should readily provide accurate specifications for a software system.
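
To make the specification idea concrete, consider a minimal sketch in Python’s standard unittest library, using a hypothetical loyalty_discount business rule invented purely for illustration. Each test name states a single business rule, so the suite reads as a living specification of the system:

import unittest

# Hypothetical business rule, invented for illustration only.
def loyalty_discount(years_as_customer: int) -> float:
    """Return the discount rate a customer is entitled to."""
    if years_as_customer >= 5:
        return 0.10
    if years_as_customer >= 2:
        return 0.05
    return 0.0

class LoyaltyDiscountSpec(unittest.TestCase):
    # Each test name states one business rule in plain language.

    def test_customers_of_five_or_more_years_get_ten_percent(self):
        self.assertEqual(loyalty_discount(5), 0.10)

    def test_customers_of_two_to_four_years_get_five_percent(self):
        self.assertEqual(loyalty_discount(3), 0.05)

    def test_new_customers_get_no_discount(self):
        self.assertEqual(loyalty_discount(0), 0.0)

if __name__ == "__main__":
    unittest.main()

Because the tests must pass on every build, this kind of specification cannot silently drift out of date the way a conventional requirements document can.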

BUILDING FASTER BY TESTING EARLY


System changes may include functionality that supports new use cases, additional steps in the
workflow, and integrations with third-party software. The effort to manually verify software changes
depends on scope and complexity, with the time spent varying from a few minutes to several hours.
It’s best to find bugs or problems soon after completing development changes, since a fix requires
less effort earlier in the build.

Commonly, testing is deferred. Instead of the developer, a tester eventually identifies,
reproduces, and logs each defect. As a result, the manual effort to identify, isolate, and fix
a bug often at least doubles, shared between testers and developers.

However, what if it were feasible to get feedback while still working on the code? If a developer
could get immediate feedback, the adjustment could be part of the initial coding change,
with no defect ever arising. The preliminary testing would alleviate much of the context-switching
from development to QA. The feedback would apply to functional, structural, and business
use-case testing.

In addition to the QA effort, the development effort to refactor will increase significantly if
quality feedback is deferred to a future code review. Accordingly, the test suite needs to
provide immediate feedback, leading to more efficient use of resources.

TESTING FOR EFFICIENCY AND RESILIENCY


Most tests are themselves code or pseudo-code, which can carry a high ongoing maintenance cost.
Among the many aspects of test maintenance, the coupling of tests to system internals tends to be
the most problematic. Tightly coupling a test suite to the internals of the system causes much
fragility: each change to the code base necessitates a change to one or more corresponding tests,
even with no externally detectable behavior changes. Enforcing a high degree of test coverage
becomes a big ask. Not surprisingly, most teams tend to avoid a high number of tedious test changes.

As complexity increases in a tightly-coupled test environment, many teams reach a point where they
consider deleting extraneous test cases. Often the root cause is that module boundaries were
identified incorrectly, or that the modules are leaky abstractions that expose all of their
internals through their APIs.

The thinking is this: it takes more time to fix the tests than to implement the code changes. It is,
of course, quite impossible to completely decouple all tests from the software system; doing so
would leave only a suite of end-to-end tests, which would not give enough confidence.

Ideally, it is best to pursue a loosely-coupled test suite that exhibits the minimum amount of
coupling in critical areas.
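
To illustrate the difference, here is a minimal Python unittest sketch (the ShoppingCart module is hypothetical, invented for illustration) contrasting a test coupled to internals with one written against the module boundary:

import unittest

class ShoppingCart:
    # Hypothetical module, invented for illustration only.
    def __init__(self):
        self._items = []  # internal representation, free to change

    def add(self, name: str, price: float) -> None:
        self._items.append((name, price))

    def total(self) -> float:  # behavior at the module boundary
        return sum(price for _, price in self._items)

class TightlyCoupledTest(unittest.TestCase):
    def test_items_stored_as_list_of_tuples(self):
        # Fragile: fails if _items becomes a dict, even though no
        # externally detectable behavior changes.
        cart = ShoppingCart()
        cart.add("book", 10.0)
        self.assertEqual(cart._items, [("book", 10.0)])

class BehaviorLevelTest(unittest.TestCase):
    def test_total_reflects_all_added_items(self):
        # Resilient: exercises only the public boundary and survives
        # internal refactoring.
        cart = ShoppingCart()
        cart.add("book", 10.0)
        cart.add("pen", 2.5)
        self.assertEqual(cart.total(), 12.5)

if __name__ == "__main__":
    unittest.main()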

The characteristics of an ideal test suite include:

• all functionality works according to expectations
• existing functionality works (even when features are extended or new features are added)
• a sustainable system architecture
• an accurate specification of the system
• a quick feedback loop
• minimal maintenance effort


WHY END THE DEVELOP-FIRST,
TEST-LATE APPROACH
How can test automation be worthwhile and effective? To help answer the question, let’s first
examine the commonly used develop-first, test-late approach, with tests written after everything
else is done.

The process is as follows:

1. Implement a change, either as a new feature or a fix to an existing feature.
2. Manually test the change and identify bugs.
3. Debug and fix.
4. Repeat as necessary for all features in a build.
5. Build or extend an automated test suite.

The conventional test-late approach seems (to the uninitiated) to be a natural extension of a
conventional development process. Though seemingly easier, the build-then-test approach in
software development has significant disadvantages.

The first impact is on new tests, which are run manually at first. As the team looks to automate,
updates are made to existing test cases, and little time is spent on additional edge cases. Tests
written at the end of the development phase cover the exact functional footprint of the new code; in
reality, it’d be far better to reassess what the code should or shouldn’t do. The results give the
team a false sense of confidence that the code base changes work well while inadvertently
overlooking the edge cases.

It’s important to realize something more subtle. Immediately after the new tests are run manually
and pass, confidence increases because the changes work according to expectations. The inclination
here is to focus only on functional coverage and to minimize verification against business
requirements.

On closer inspection, several risks that jeopardize test automation success become apparent at this
point:

• The implementation is likely to have an undue influence on test construction and maintenance.
• No other tests are likely to be written beyond what is necessary to reach the coverage goal.
• There will be a tendency to selectively write tests only for trivial or well-designed parts of
the system.
IMPLEMENTATION-DRIVEN TEST
INADEQUACIES OCCUR
When deriving tests directly from implementation, there is a tendency to focus on each code class
and function instead of on the module boundaries. Such tests tend to rely on implementation
details rather than behavior expectations at the module level. The reason is that the test writer wants
to verify the correct functionality. However, there is often a lack of concern for the corresponding
business requirement(s), with tests driven mainly by the code instead of business requirements. The
approach results in a test suite tightly coupled to the system.

Problems arise as it becomes challenging to understand test design and the tests don’t represent an
accurate system specification. Moreover, tightly-coupled tests may fail in the future when the code
changes—even if user or integration behavior doesn’t change. False positives increase and multiply.
Taken together, all of these issues lead inevitably to an increase in maintenance effort and a
bloated, sluggish test suite. Likewise, postponing test writing decreases confidence in the test
suite’s efficacy since there is no assurance that testing covers all of the critical cases.

The result is an incomplete regression test suite and an incomplete test-based specification, which
can only offer an incomplete assessment of the system design. In addition, there is a ubiquitous
tendency to do what’s easiest, especially for teams with a heavy workload. Frequently, the easy
option is taken to test only the code that requires the least effort. However, such code is quite often
relatively simple or of sound design. Consequently, taking the easy route leaves the most complex
and poorly designed modules without viable regression tests. The test-based system specification
will be deficient in these areas, and there will be no identification of opportunities for
design improvements.

ISSUES ARISE ADDING TESTS AFTER FEATURE DEVELOPMENT
Yes, there is a strong tendency to write tests only after a complete effort to develop the feature.
Then, if any of the tests indicate design flaws, it is likely that some of the issues won’t be fixed.
Any non-trivial redesign would require non-trivial effort, which would require adding more tests,
rewriting existing tests, and more testing.

Consequently, there is a tendency to rely on tools that enable the testing of flawed designs. There
is already a supply and demand for tools that will test private methods and mock private fields,
for example. Tests written at the end of the development cycle tend to result in no significant
improvements to system design.

Consider the implications for new feature development, in which test automation begins only once a
team has achieved sufficient confidence in system quality primarily via manual testing. Automating
the test suite at that point has no material impact on reducing manual testing effort; it roughly
doubles the testing effort and only prepares the test suite as an investment for the future (which
may never materialize).

With automated testing, the team may receive quality feedback more quickly for changes to existing
functionality. However, as already noted, the team likely won’t have much confidence in the test
suite. When weighing the extra effort against no significant increase in confidence, many teams
begin to think of automation as a poor investment.

When a team begins to automate testing, it is likely to find itself working inefficiently if tests
can only be written after the developers write and test the code. Both QA testers and developers
tend to delay until the last practicable moment, after all other development work, including manual
testing, is done. Typically, work on the unit and integration tests begins only once development is
entirely complete.

Realizing that significant effort will be necessary to automate tests written after development is
finished, the team perceives a high risk that it won’t achieve significant benefits. Commonly, the
hope is that a test suite that is built (or extended) only after the development phase may provide
all the benefits of automated testing. The reality is that many teams only experience a few benefits
while expending an excessive amount of effort.
REQUIRE A TEST-FIRST
APPROACH
There are several names by which a test-first approach is known: test-first development, test-
driven development, and acceptance test-driven development, among others. No matter the
nomenclature, test-driven development is the method in which a developer writes the tests before
writing the code.

APPLYING RED, GREEN, REFACTOR METHODOLOGY
The Red-Green-Refactor cycle is a long-standing practice in software development. The general
practice includes three steps.

1. Create a unit test that fails.
2. Write production code that will pass that test.
3. Clean up the mess you just made.

First, a test is written that checks for behavior that doesn’t yet exist, so it fails. Next, only a
minimal amount of code is created, just enough to have the test pass. The developer then revises the
test to fail against the next piece of behavior required by the business, and writes code to pass
that test. If the test does not pass, the code or the test is updated until it does.
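
A minimal sketch of one Red-Green-Refactor cycle, assuming a hypothetical slugify function and Python’s standard unittest:

import unittest

# Red: a failing test, written before the behavior exists.
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Green: just enough production code to make the test pass.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# Refactor: clean up the implementation (naming, duplication, edge
# handling) while the test above keeps the behavior locked in place.

if __name__ == "__main__":
    unittest.main()

The next cycle would add a failing test for the next required behavior (say, collapsing repeated whitespace) and repeat.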
The test-first approach offers many advantages that improve the development process, such as:

• Developers can use the method to build high confidence in the test suite with up-front,
immediate feedback on the functional and structural quality of the system.

• Writing tests in small, incremental iterations provides frequent, instantaneous feedback that
minimizes both the time and effort spent fixing design or implementation flaws. (See Martin Fowler
on the developer feedback loop.)

• Confidence in quality quickly rises and remains high during the entire effort.

• Because regressions are caught immediately, the increased confidence further motivates
developers to refactor or write new code.

• Developers have plenty of freedom and incentive to write clean code. Consequently, there
remains little need for extensive manual testing.

• There is no significant increase in the net overall effort, yet there is continuous incentive to
pursue incremental improvements.

• System design, the test-based specification of the system, and the descriptive accuracy of
tests all improve.

Writing the test first drives developers to focus on behavior expectations. The method ensures that
both the tests and the code are driven primarily by business requirements. If all requirements are
met, and all tests pass in tandem with the new code changes, then there is high confidence that all
known test cases are covered.

TEST-FIRST VS. TEST-LATE
Yes, it’s possible to eventually write good test suites with either approach. However, the test-first
approach offers many clear benefits, while the test-late approach commonly produces a test suite
that can only provide a fraction of those benefits.

TEST-FIRST: Test-driven development that focuses on the most important elements of the
application. Software is built for users with development managers incentivized to ensure
compliance with business requirements.

TEST-LATE: Testers have to discern how best to focus the testing and how much effort to expend
on the testing. The major risk is that the testing will be insufficient (i.e., tests likely aren’t adequately
descriptive, with the team wasting effort automating test cases for the incorrect module boundaries).
BENEFIT FROM TWO
PROVEN SHORTCUTS
While the path to automation is never a short one, there is a path that is less steep. While there
are many viable options, we’ll share two shortcuts that have been put to use in our varied
experience at Devbridge. These shortcuts aim to provide a gradual, incremental pathway that induces
less frustration. Though they defer some benefits to make the test-first methodology easier to
adopt, they are still more beneficial than remaining with the test-late method.

Consider the simple case in which a team has no automation and takes a test-late approach.

The two shortcuts are:

1. Write automated tests first.
2. Write the test case descriptions first.

It’s important to view these shortcuts not as definitive testing approaches but rather as
intermediate steps on the road to even more effective methodologies, such as test-driven
development.

SHORTCUT #1: BUILD AUTOMATED TESTS FIRST


A major inefficiency in the test-late approach is that many test cases are effectively written
twice: once manually and again when they are automated. To prevent this wasted effort, one shortcut
is to write the code and then immediately write the automated tests.

The first shortcut entails the following:

1. Write the code.
2. Write the automated test.
3. Supplement with manual testing.

One key to this shortcut’s overall success is to strictly avoid any sort of manual verification of
the build before the automated tests are complete. Indeed, before completing the automated tests
for a single iteration, fix only the issues that arise during compilation or static analysis (since
neither of these requires additional work to get feedback). Another important consideration is to
keep the focus on user requirements, not coverage metrics. Don’t stop writing tests until you gain
full confidence in the correct system behavior.
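
As a sketch of the shortcut in practice, assume a hypothetical password rule invented for illustration: the code is written first, and the automated tests follow immediately in the same sitting, with no manual verification in between. The tests target the user requirement, not a coverage number:

import unittest

# Code written first (hypothetical password policy for illustration) ...
def is_valid_password(password: str) -> bool:
    """A password must be at least 8 characters and contain a digit."""
    return len(password) >= 8 and any(ch.isdigit() for ch in password)

# ... followed immediately by automated tests, before any manual check.
class PasswordPolicyTest(unittest.TestCase):
    def test_accepts_password_meeting_both_rules(self):
        self.assertTrue(is_valid_password("secret123"))

    def test_rejects_password_shorter_than_eight_characters(self):
        self.assertFalse(is_valid_password("abc1"))

    def test_rejects_password_without_a_digit(self):
        self.assertFalse(is_valid_password("secretword"))

if __name__ == "__main__":
    unittest.main()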

We strongly recommend working incrementally. Make some code changes and test those. Make
some more changes. Then repeat the tests and adjust as necessary. An iterative approach is much
more productive than writing code for a few days and testing large segments of functionality. A
primary benefit is the repetitive, instantaneous feedback by which it is possible to make many
successive improvements quickly.

What happens when you hold off?

If you postpone refactoring, more effort will be necessary to test, refactor, and retest. If testing
feedback is delayed too much, it becomes tempting to ignore any feedback on the design.

Moreover, it’s important to apply test automation at all levels of testing; it’s relatively
unproductive to focus only on unit tests. When first taking this shortcut, it might be best to
manually verify a few automated test cases to ensure that the automated testing itself is correct.
Some manual verification after automating confirms that the tests you have just written work.
You’ll also come to understand better which tests provide value and which don’t.

Taking these steps enables a gradual improvement of test automation skills to the point at which
manual verification is no longer necessary. Though at first the automated test suite might not give
a high level of confidence, it will improve as the team gains experience. Ultimately, it’s vital to
decrease the amount of manual testing gradually.

With all things considered, this shortcut can help combat the frustration of writing automated tests.
A team can be effective more quickly than with conventional approaches to software testing. For
most teams, it is likely to help produce high-coverage test suites and enable
frequent refactoring.

SHORTCUT #2: WRITE THE TEST CASES BEFORE CODING
A big hurdle in adopting a test-first approach is the difficulty in writing tests for code that has not
yet been written. Another shortcut is to begin by writing test labels. This method is a different way to
put immediate focus on the tests while postponing the complexity of writing the complete tests.
The second shortcut entails the following:

1. Describe test cases as test labels.
2. Write code that satisfies the test cases.
3. Write tests that align to each test label.

First, explicitly yet succinctly list out the test cases for all of the functionality. Write these out as
simple, empty test methods that contain only test case descriptions and failing assertions. When
attempting to define tests before writing any code, think of test cases that would demonstrate
that the software satisfies the user requirements. Include enough detail in each description to
define the scope of the implementation and the acceptance criteria of the test case.

Each description should cover only a single business case without any ambiguity. A test case with a
general description such as “should handle login” isn’t descriptive enough. Even a simple feature
like login often involves complex authorization requirements and likely hides edge cases. A
description that is too general will not clarify which edge cases the implementation should cover.

Avoid ambiguous descriptions, since they can lead to some functionality being neglected in
implementation and testing. Wasted effort may also result from unnecessarily over-building the code
and writing extra cases. When writing each description, focus on defining a single case. Some
features may not be user-facing, so it’s necessary to identify the right level at which to provide
a sensible description.

Examples of test case descriptions for a login feature:

• should authenticate the user with correct credentials.
• should not authenticate a locked-out user with correct credentials.
• should trust a valid existing user session without prompting for authentication.
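
Expressed as code, these labels might start as empty, deliberately failing test methods; a minimal Python unittest sketch:

import unittest

class LoginTests(unittest.TestCase):
    # Test labels only: each method names one business case and fails
    # until the real test (and the code it exercises) is written.

    def test_authenticates_user_with_correct_credentials(self):
        self.fail("not yet implemented")

    def test_does_not_authenticate_locked_out_user_with_correct_credentials(self):
        self.fail("not yet implemented")

    def test_trusts_valid_session_without_prompting_for_authentication(self):
        self.fail("not yet implemented")

if __name__ == "__main__":
    unittest.main()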

Next, write code that satisfies each of the test cases. At this point, avoid verifying any changes
manually, since doing so would reintroduce the disadvantages of the test-late approach. The task
here is to get the code compiling and begin implementing the test cases. While writing the test
cases, various functional and structural issues will surface in the code; fix each of these
immediately and incrementally build a suite of passing test cases.
After implementing all the test cases to the point of passing, the developer attains high confidence
in the code’s quality. If there is some distrust remaining, it’s entirely appropriate to manually verify
some of the test cases. Eventually, it will be possible to identify and automate all of the essential test
cases.

ADVANCE INCREMENTALLY AND ITERATE
A logical next step is to elaborate the test cases incrementally and iteratively, along with small
code changes. Working in short iterations builds complete test cases in small pieces instead of
working linearly to build the entire test suite all at once.

With the first shortcut, code is written first, while the focus stays on writing tests iteratively
alongside incremental coding. Applying an iterative approach to a single test case is similar to
test-driven development: the scope of each change is considerably smaller, and it’s sensible to
adjust the code immediately after writing it. The feedback is nearly as quick as with test-driven
development.

A proven best practice is to start with however many tests per iteration feels comfortable, then
gradually shorten each iteration until it is possible to work within the scope of a single test
case. The first few attempts may require additional time because of the effort to learn how to be
efficient with a continuous stream of small refactorings.

With the second shortcut, the test suite design is driven purely by thinking about requirements and
edge cases. Since the test case definitions occur before writing any code, the test cases should
cover the expectation of what the code should do, rather than covering the implementation. Because
test coverage can’t be measured until the code is complete, the second shortcut sidesteps the
distorting effect of coverage requirements and mitigates the risk that too few tests will be
written or that important cases will be omitted. The code is still likely to have some influence on
the tests, but it will be easier to identify module boundaries and refactor code when necessary.

Over time, perhaps the most significant benefit is that the tests become an integral part of the
development effort. No longer will testing be the last step in a process delaying delivery. In addition,
a keen focus on requirements ensures that tests specify system behavior, which is preferable to a set
of isolated details that bring very little system understanding and low levels of confidence.

A test-first automated test suite specifies all high-priority cases, instills high confidence in
quality, and keeps maintenance effort to a minimum.

One question that might nag a team under pressure is this: “How much of an investment can be given
to structural changes?” An answer that may be surprising is that, at first, a test-first method
might not have any significant impact on the design. However, after a team gains experience working
in short iterations, it may find that structural improvements develop indirectly as a result of
immediate feedback and subsequent refactoring.
MAXIMIZE TEST
AUTOMATION EFFORTS
While most teams strive toward automation of repetitive tasks, there remains for many of these
teams a continual struggle to achieve effective test automation. With many tools and techniques
available, the abundant options too often lead to confusion. It is possible to minimize frustration and
increase motivation for test automation by seeking and applying effective approaches.

This paper presents two shortcut methods for approaching test automation.

Shortcut #1:
1. Write the code.
2. Write the automated tests.
3. Supplement with additional testing, as necessary.

Shortcut #2:
1. Describe test cases as test labels.
2. Write code that satisfies the test cases.
3. Write tests that align to each test label.

The first method involves building an automated test suite before running any of the test cases
manually. Though it does not provide many of the benefits of the test-first approach, it significantly
reduces the effort necessary to build and maintain a test suite. We conclude that this method is a
good starting point for engineers who have been using a test-late method.
The second method involves defining test cases only with descriptions before writing any code. We
acknowledge that it does not improve system design as much as the test-first approach. However, it
substantially increases development efficiency by providing much quicker feedback and increasing
the quality of the test suite itself. The second method is also an excellent preliminary step toward
the test-first method.

A word of caution: these shortcuts should be considered intermediate steps on the way to complete
test automation.
MOVING FORWARD WITH A NEW
UNDERSTANDING
As you consider how best to pursue test automation with your team, keep the following key points
in mind.

1. A serious commitment to test automation can have a positive impact on testing effectiveness
and on the benefits of the resulting test suite.
2. Deferring the creation of automated tests to any point after manual testing of code changes is
highly ineffective and relatively unproductive. Typically, such an approach results in an automated
test suite that has only a slight positive effect on the quality of the software.
3. Writing automated tests before manual testing can significantly reduce the overall effort of
ensuring high quality in the implemented changes.
4. Defining the test cases before writing any code further increases the benefits that result from
an automated test suite.
Identify, design, and build. Leverage momentum to transform.

Devbridge builds mission-critical products to advance leading companies in aviation, agribusiness,
distribution & logistics, financial services, healthcare, and manufacturing. Using our proven
methodology and tooling, we empower enterprises to define strategy, leverage data, build custom
software, and achieve organizational change. We’ve helped hospitals maximize staff utilization,
logistics companies increase lane throughput, and enabled banks with automated loan decisioning
tools.

550+ full-time employees. 9 offices: Chicago, London, Toronto, Kaunas, Vilnius, Atlanta, Denver,
Warsaw.

After a critical review of the process, Devbridge observed that:

• Many significant overhead steps and delays had a direct correspondence with various manual steps
in the pipeline.
• Typically, code changes were held up for at least a week before reaching customers.
• Manual regression testing took an entire day to complete, with the team wasting valuable time
waiting for results.
• In each sprint, two team members would work during off-hours to deploy the new version of an
application to production.
• Due to time constraints, hotfixes didn’t get the same level of attention as standard releases.
Risks were taken, and quality was lower.

Recognizing that there were opportunities to optimize the pipeline for higher productivity, we
began our journey toward continuous deployment.

Services:

• Product design and development: dedicated cross-functional teams, full product lifecycle support
• Service design: identifying and pursuing opportunities across your service landscape
• Software engineering maturity: organizational change through mature product development best
practices
• Legacy modernization: predictable replatforming of legacy systems and processes to meet the needs
of your business today
• Data strategy and intelligence: allowing a single source of truth to be a timely guide for your
business decisions
• Automating workflows: machine learning, automated decisioning, and workflow management to work
smarter

FASTER THAN YOU’RE USED TO.

[email protected]
312.242.1642