2021-04-26 Automated Testing Devbridge
FOR TEST AUTOMATION
Know the right shortcuts to take on the journey to quality software
Test automation includes suites of verification, functional, regression, and performance tests that run
with little or no manual input from the development staff. Automated testing can be done in each
development context (i.e., unit, integration, system, delivery). As software becomes expansive and
complex, test automation becomes one of the most effective practices to increase the functional and
structural quality of the application.
If you’ve had some of these negative experiences, it may be difficult to believe that there is hope.
Some teams have actually done it well. The backbone of the initial successes involves careful
thought, a defined plan, and a progressive approach to implementation. Such teams take additional
steps to build upon the early wins. Today, they enjoy building better quality software in less time
than before.
What’s their secret? Everything hinges on the choices made in the approach to test automation.
While not easy, it is quite feasible for most teams to automate software tests successfully. The answer
lies in a shift in mindset and approach, adopting the guidelines followed by teams who have had
significant success in automating their software tests.
Rather than attempting to dispense test automation tips blindly, this paper presents guidelines
for successful automation. We’ll provide an overview of common setbacks paired with a test-first
approach to software development and a series of intermediate steps to try a test implementation.
The methods covered provide a roadmap for teams to progressively transition to a test-first
approach and move forward with confidence.
BOOST CONFIDENCE
IN THE TEST SUITE
A useful suite of tests must reduce the overall effort in verifying that a product exhibits high
structural and functional quality. A primary driver for any development team is that the test suite
verifies high quality for each successive build. The test suites must be readily maintainable to
prevent future regression failures.
Consider a common approach to testing. During feature development, a team runs a particular test
case only a few times. After integrating the feature into a build, the test case is run after each code
change to verify functional integrity and compliance aligned to a business rule or use case. Whether
maintained by the build team or handed off externally in the future, the feature set will expand
gradually with the regression suite expanding accordingly. Maintaining the regression test suite
benefits the team, boosting the level of confidence in the build’s short-term and long-term quality.
A test suite defines the behavior expectations of the system when used in various test cases.
While conventional requirements specification often becomes dated quickly, test cases are more
dynamic and correspond more closely with all the aspects of the software design. As the team
maintains the tests, they remain current with each build. For this reason alone, a good test suite
should readily provide accurate specifications for a software system.
In addition to the QA effort, the development effort to refactor will significantly increase if quality
feedback is deferred to a future code review. Accordingly, the test suite needs to provide immediate
feedback, leading to more efficient use of resources.
As complexity increases in a tightly-coupled test environment, many teams reach a point where they
consider deleting extraneous test cases. Often the underlying problem is an incorrect identification
of module boundaries, or modules that are merely leaky abstractions exposing all of their internals
through their APIs.
The thinking is this: it takes more time to fix the tests than to implement the code changes. It is, of
course, quite impossible to completely decouple all tests from the software system. However, deleting
all but the end-to-end tests would leave a suite that does not give enough confidence.
Ideally, it is best to pursue a loosely-coupled test suite that exhibits the minimum amount of
coupling in critical areas.
Such a suite gives confidence that existing functionality works (even when features are extended or
new features are added).
The conventional test-late approach seems (to the uninitiated) to be a natural extension of a
conventional development process. Though seemingly easier, the build-then-test approach in
software development has significant disadvantages.
The first impact is on new tests, which are run manually at first. As the team looks to automate,
updates are made to existing test cases, and little time is spent on additional edge cases. Tests
written at the end of the development phase cover only the exact functional footprint of the new code.
In reality, it'd be far better to reassess what the code should or shouldn't do. The results give the
team a false sense of confidence that the code base changes work well while inadvertently overlooking
the edge cases.
It’s important to realize something more subtle. Immediately after the new tests are run manually
and pass, confidence increases because the changes work according to expectations. The inclination
here is to focus only on functional coverage and to minimize verification against business
requirements. Pausing to think carefully, the risks that jeopardize test automation success become
apparent at this point:
No other tests are likely to be written beyond what is necessary to reach the coverage goal.
There will be a tendency to selectively write tests only for trivial or well-designed parts of
the system.
IMPLEMENTATION-DRIVEN TEST
INADEQUACIES OCCUR
When deriving tests directly from implementation, there is a tendency to focus on each code class
and function instead of on the module boundaries. Such tests tend to rely on implementation
details rather than behavior expectations at the module level. The reason is that the test writer wants
to verify the correct functionality. However, there is often a lack of concern for the corresponding
business requirement(s), with tests driven mainly by the code instead of business requirements. The
approach results in a test suite tightly coupled to the system.
Problems arise as it becomes challenging to understand test design and the tests don’t represent an
accurate system specification. Moreover, tightly-coupled tests may fail in the future when the code
changes—even if user or integration behavior doesn’t change. False positives increase and multiply.
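To make the contrast concrete, here is a small hypothetical sketch (the Cart class and test names are invented for illustration) of a test coupled to implementation details versus one written against the module boundary:

```python
# Hypothetical Cart module (all names invented for this sketch).
class Cart:
    def __init__(self):
        self._items = []  # internal detail: a list of (name, price) pairs

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

# Tightly coupled: asserts on a private field, so replacing the list
# with a dict later breaks this test even though behavior is unchanged.
def test_add_implementation_coupled():
    cart = Cart()
    cart.add("book", 10)
    assert cart._items == [("book", 10)]

# Behavior-level: asserts only on the public API at the module boundary,
# so it survives internal refactoring.
def test_add_behavior():
    cart = Cart()
    cart.add("book", 10)
    cart.add("pen", 2)
    assert cart.total() == 12
```

The second test encodes the business expectation (a cart sums its prices) and will keep passing through any internal redesign; the first will fail the moment the private representation changes.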
Taken together, all of these issues lead inevitably to an increase in maintenance effort and a bloated,
sluggish test suite. Likewise, waiting until later to write tests decreases confidence in the test
suite’s efficacy, since there is no assurance that testing covers all of the critical cases.
The result is an incomplete regression test suite and an incomplete test-based specification, which
can only offer an incomplete assessment of the system design. In addition, there is a ubiquitous
tendency to do what’s easiest, especially for teams with a heavy workload. Frequently, the easy
option is taken to test only the code that requires the least effort. However, such code is quite often
relatively simple or of sound design. Consequently, taking the easy route leaves the most complex
and poorly designed modules without viable regression tests. The test-based system specification
will be deficient in these areas, and there will be no identification of opportunities for
design improvements.
Consequently, there is a tendency to rely on tools that enable the testing of flawed designs. There
is already a supply and demand for tools that will test private methods and mock private fields,
for example. Tests written at the end of the development cycle tend to result in no significant
improvements to system design.
Consider the implications for new feature development, in which test automation begins only when
a team achieves sufficient confidence in system quality primarily via manual testing. Automating the
test suite at that stage has no material impact on reducing manual testing effort; it results in a
doubling of the testing effort that only prepares the test suite as an investment for the future
(which may never materialize).
With automated testing, the team may receive quality feedback more quickly for changes to existing
functionality. However, as already noted, the team likely won’t have much confidence in the test
suite. When considering the extra effort with no significant increase in confidence, many teams
begin to think of automation as a poor investment of effort.
When a team begins to automate testing, it is likely to see that it is working inefficiently if tests
can be written only after the developers write and test the code. Both QA testers and developers
tend to delay until the last practicable moment, after doing all other development work, including
manual testing. Typically, it is only after development work is entirely complete that work begins
on the unit and integration tests.
Realizing that significant effort will be necessary to automate tests written after development is
finished, the team perceives a high risk that it won’t achieve significant benefits. Commonly, the
hope is that a test suite that is built (or extended) only after the development phase may provide
all the benefits of automated testing. The reality is that many teams only experience a few benefits
while expending an excessive amount of effort.
REQUIRE A TEST-FIRST
APPROACH
There are several names by which a test-first approach is known: test-first development, test-
driven development, and acceptance test-driven development, among others. No matter the
nomenclature, test-driven development is the method in which a developer writes the tests before
writing the code.
First, a test is written that fails because the behavior it checks does not yet exist. Next, only a
minimal amount of code is created: enough to have the test pass. The developer then writes the next
failing test for behavior required by the business requirements and writes code to pass it. If the
test does not pass, the code or the test is updated until it does.
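A minimal sketch of one such red-green cycle (the `slugify` function and its tests are hypothetical, chosen only to illustrate the loop):

```python
# Step 1 (red): this test was written first and failed, because slugify
# did not exist yet.
def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): just enough code was added to make the test pass.
# A later failing test (below) demanded surrounding whitespace be
# trimmed, so .strip() was added and the cycle repeated.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

# Step 3: the next failing test, driving the next minimal change.
def test_slugify_strips_whitespace():
    assert slugify("  Hello World  ") == "hello-world"
```

Each new requirement enters the code base as a failing test first, so the suite grows in lockstep with the implementation.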
The test-first approach offers many advantages that improve the development process, such as:
Developers can use the method to build high confidence in the test suite with up-front,
immediate feedback on the functional and structural quality of the system.
Confidence in quality quickly rises and remains high during the entire effort.
Since regressions are caught immediately, the increased confidence further motivates
developers to refactor or write new code.
Developers have plenty of freedom and incentive to write clean code. Subsequently, there
remains no need for extensive manual testing.
There is no significant increase in the net overall effort, yet there is continuous incentive to
pursue incremental improvements.
Writing the test first drives developers to focus on behavior expectations. The method ensures that
both the tests and code are driven primarily by business requirements. If all requirements are met,
and all tests pass in tandem with the new code changes, then there is maximum confidence that all
known test cases have been written.
TEST-FIRST: Test-driven development that focuses on the most important elements of the
application. Software is built for users with development managers incentivized to ensure
compliance with business requirements.
TEST-LATE: Testers have to discern how best to focus the testing and how much effort to expend
on the testing. The major risk is that the testing will be insufficient (i.e., tests likely aren’t adequately
descriptive, with the team wasting effort automating test cases for the incorrect module boundaries).
BENEFIT FROM TWO
PROVEN SHORTCUTS
While the path to automation is never a short one, there are shortcuts that make it less steep.
While there are many viable options, we’ll share two shortcuts that have been put to use in our
varied experiences at Devbridge. These shortcuts aim to provide a gradual, incremental pathway that
should induce less frustration. Though they defer some benefits to make it easier to adopt the
test-first methodology, they are still more beneficial than remaining with the test-late method.
Consider the simple case in which a team has no automation and takes a test-late approach.
It’s important to view these shortcuts not as definitive testing approaches but rather as
intermediate steps on the road to even more effective methodologies, such as test-driven
development.
1. Write the code.
2. Write the automated test.
3. Supplement with manual testing.
One key to this shortcut’s overall success is to strictly avoid any sort of manual verification of
the build until the automated tests are complete. Indeed, before completing the automated tests for a
single iteration, fix only the issues that arise during compilation or static analysis (since neither
of these requires additional work to get feedback). Another important consideration is to keep the
focus on user requirements, not coverage metrics. Don’t stop writing tests until you gain full
confidence in the correct system behavior.
We strongly recommend working incrementally. Make some code changes and test those. Make
some more changes. Then repeat the tests and adjust as necessary. An iterative approach is much
more productive than writing code for a few days and testing large segments of functionality. A
primary benefit is the repetitive, instantaneous feedback by which it is possible to make many
successive improvements quickly.
If you postpone refactoring, more effort will be necessary to test, refactor, and retest. If testing
feedback is delayed too much, it becomes tempting to ignore any feedback on the design.
Moreover, it’s important to consider test automation at all levels of testing; it’s relatively
unproductive to focus only on unit tests. When first taking this shortcut, it might be best to
manually verify a few automated test cases to ensure that the automated testing is correct. After the
tests have been automated, some manual verification helps confirm that the tests you have just
written work. You’ll also come to understand better which tests provide value and which don’t.
Taking these steps will enable a gradual improvement of test automation skills to the point at which
manual verification is no longer necessary. Though at first the automated test suite might not give a
high level of confidence, it will improve as the team gains more experience. Ultimately, it’s vital to
decrease the amount of manual testing gradually.
With all things considered, this shortcut can help combat the frustration of writing automated tests.
A team can be effective more quickly than with conventional approaches to software testing. For
most teams, it is likely to help produce high-coverage test suites and enable
frequent refactoring.
1. Describe test cases as test labels.
2. Write code that satisfies the test cases.
3. Write tests that align to each test label.
First, explicitly yet succinctly list out the test cases for all of the functionality. Write these out as
simple, empty test methods that contain only test case descriptions and failing assertions. When
attempting to define tests before writing any code, think of test cases that would demonstrate
that the software satisfies the user requirements. Include enough detail in each description to
define the scope of the implementation and the acceptance criteria of the test case.
Each description should cover only a single business case without any ambiguity. A test case with a
general description such as “should handle login” isn’t descriptive enough. Even a simple feature
like login often consists of complex authorization requirements and likely hides edge cases. A
description that is too simple will not clarify which edge cases the implementation should cover.
Avoid ambiguous descriptions, since ambiguity may lead to some of the functionality being neglected
in implementation and testing. Wasted effort may also result from unnecessarily over-building the
code and writing extra cases. When writing each description, focus on defining a single case. Some
features may not always be user-facing, so it’s necessary to identify the right level at which to
provide a sensible description. For example: “should trust a valid existing user session without
prompting for authentication.”
Next, write code that satisfies each of the test cases. At this point, avoid an attempt to verify any
changes manually since this would introduce the disadvantages stemming from the test-late
approach. The task here is to ensure that the code will compile and begin to implement the test
cases. While writing the test cases, various functional and structural issues will arise in the code.
Be sure to fix each of these immediately and incrementally build a suite of passing test cases.
After implementing all the test cases to the point of passing, the developer attains high confidence
in the code’s quality. If there is some distrust remaining, it’s entirely appropriate to manually verify
some of the test cases. Eventually, it will be possible to identify and automate all of the essential test
cases.
With the first shortcut, code is written earlier while maintaining a focus on writing tests and code
iteratively, in small increments. Applying an iterative approach to a single test case is similar to
test-driven development. The scope of each change is considerably smaller, and it’s sensible to make
adjustments to the code immediately after writing it. The feedback is nearly as quick as with
test-driven development.
A proven best practice is to start with as many (or as few) tests per iteration as the team is
comfortable with. Gradually shorten each iteration until the point at which it is possible to work
within iterations of a single test case. The first few attempts may require additional time because
of the effort to learn how to be efficient with a continuous stream of small refactorings.
With the second shortcut, the test suite design is driven purely by thinking about requirements and
edge cases. Since the test case definitions occur before writing any code, the test cases should
cover the expectation of what the code should do, rather than covering the implementation. It’s
impossible to predict test coverage until the code is complete, so the second shortcut helps avoid
the distorting effects of coverage requirements and mitigates the risk that too few tests will be
written or that important cases will be omitted. The code is likely to have some effect on writing
the tests, but it will still be easier to identify module boundaries and refactor code when necessary.
Over time, perhaps the most significant benefit is that the tests become an integral part of the
development effort. No longer will testing be the last step in a process delaying delivery. In addition,
a keen focus on requirements ensures that tests specify system behavior, which is preferable to a set
of isolated details that bring very little system understanding and low levels of confidence.
A test-first automated test suite specifies all high-priority cases, instills high confidence in
quality, and keeps maintenance effort to a minimum.
One question that might nag a team under pressure is this: “How much of an investment can be given
to structural changes?” An answer that may be surprising is that at first a test-first method might
not have any significant impact on the design. However, after a team gains experience working in
short iterations, it may realize that structural improvements develop indirectly as a result of
immediate feedback and subsequent refactoring.
MAXIMIZE TEST
AUTOMATION EFFORTS
While most teams strive toward automation of repetitive tasks, there remains for many of these
teams a continual struggle to achieve effective test automation. With many tools and techniques
available, the abundant options too often lead to confusion. It is possible to minimize frustration and
increase motivation for test automation by seeking and applying effective approaches.
This paper presents two shortcut methods for approaching test automation.
Shortcut #1: Write the automated tests. Supplement with additional testing, as necessary.
Shortcut #2: Write code that satisfies the test cases. Write tests that align to each test label.
The first method involves building an automated test suite before running any of the test cases
manually. Though it does not provide many of the benefits of the test-first approach, it significantly
reduces the effort necessary to build and maintain a test suite. We conclude that this method is a
good starting point for engineers who have been using a test-late method.
The second method involves defining test cases only with descriptions before writing any code. We
acknowledge that it does not improve system design as much as the test-first approach. However, it
substantially increases development efficiency by providing much quicker feedback and increasing
the quality of the test suite itself. The second method is also an excellent preliminary step toward
the test-first method.
A word of caution:
1. A serious commitment to test automation can have a positive impact on testing effectiveness
and on the benefits of the resulting test suite.
2. Deferring the creation of automated tests to any point after manual testing of code changes is
highly ineffective and relatively unproductive. Typically, such an approach results in an automated
test suite that has only a slight positive effect on the quality of the software.
3. Writing automated tests in a software development lifecycle before manual testing can
significantly reduce the overall effort for ensuring high quality in the implemented changes.
4. Defining the test cases before writing any code will increase the number of benefits that result
from an automated test suite.
Identify, design, and build. Leverage momentum to transform.
550+ full-time employees
Chicago
[email protected]
312.242.1642