Automation Test

Johan Andersson
and
Katrin Andersson

LITH-IDA-EX--07/046--SE
2007-08-17

Linköping University
Department of Computer and Information Science
Final Thesis
http://www.ep.liu.se/
Title
Automated Software Testing in an Embedded Real-Time System
Author
Johan Andersson
Katrin Andersson
Abstract
Today, automated software testing has been implemented successfully in many systems. However, there still exist relatively unexplored areas, such as how automated testing can be implemented in a real-time embedded system. This problem has been the foundation for the work in this master thesis: to investigate the possibility of implementing an automated software testing process for the testing of an embedded real-time system at IVU Traffic Technologies AG in Aachen, Germany. The system that has been the test object is the on-board system i.box.
This report contains the result of a literature study carried out to present the foundation behind the solution to the problem of the thesis. Questions answered in the study are: when to automate, how to automate, and which traps one should avoid when implementing an automated software testing process in an embedded system.
The process of automating the manual process has contained steps such as constructing test cases for automated testing and analysing whether an existing tool should be used or a unique test system needs to be developed. The analysis, based on the requirements on the test system, the literature study and an investigation of available test tools, led to the development of a new test tool. Due to limited development time and the characteristics of the i.box, the new tool was built based on post-execution evaluation. The tool was therefore divided into two parts: a part that executes the test and a part that evaluates the result. By implementing an automated test tool it has been proved that it is possible to automate the test process at system test level in the i.box.
Keywords
Automated software testing, embedded systems, software test procedure, software testing, on-board integrated system.
ACKNOWLEDGEMENTS
Many people have helped us make this report what it has become. It has been a real pleasure to have been given the splendid opportunity to carry out our master thesis at IVU Traffic Technologies AG in Aachen, Germany, and at the same time to have been able to throw ourselves into the adventure of coming like aliens to another country. We particularly want to direct our thanks to Dik Lokhorst for having the confidence in us to carry out this interesting project, to Guido Reinartz and Dieter Becker for guiding us through the maze of implementing a new testing process, to Wolfgang Carius not only for always helping us when we bothered him with our troublesome requests, but also for always doing so with a smile, to Peter Börger for always being able to spare some time and for his willingness to use his impressive programming skills to help us solve problems, to Andrea Heistermann for her interest in our work, to Oliver Lamm for introducing us to the world of public transport software, and to Andreas Küpper for friendly conversations that made us feel at home. We would also like to thank our colleague Matthieu Lux for the pleasant collaboration and for many exciting discussions during the lunches, our examiner Mariam Kamkar for giving us support and guidance, and our opponents Martin Pedersen and Johan Millving.
Please accept our apologies if we have not included anyone in this acknowledgement whom we should have.
TABLE OF CONTENTS
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Problem description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Purpose. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Goal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.5 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.6 Delimitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
5 Automated testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.2 Benefits of automated testing . . . . . . . . . . . . . . . . . . . . 30
5.3 Drawbacks of automated testing . . . . . . . . . . . . . . . . . . 30
6 Automating a manual test procedure . . . . . . . . . . . . . . 33
6.1 Deciding when to automate . . . . . . . . . . . . . . . . . . . . . . 33
6.2 Creating test cases for automated testing. . . . . . . . . . . . 35
6.3 Test performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.3.1 Scripting techniques . . . . . . . . . . . . . . . . . . . . . 35
6.4 Test evaluation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.4.1 Simple and complex comparison . . . . . . . . . . . 38
6.4.2 Sensitivity of test . . . . . . . . . . . . . . . . . . . . . . . 38
6.4.3 Dynamic comparison . . . . . . . . . . . . . . . . . . . . 38
6.4.4 Post-execution comparison . . . . . . . . . . . . . . . . 39
6.5 Test result. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
7 Automated testing in embedded systems. . . . . . . . . . . 41
7.1 Definition of an embedded system . . . . . . . . . . . . . . . . 41
7.2 Embedded software vs. regular software. . . . . . . . . . . . 42
7.3 Defining the interfaces. . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.4 Signal simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.4.1 Full simulation . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.4.2 Switched simulation . . . . . . . . . . . . . . . . . . . . . 44
10 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
10.1 Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
11 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
11.1 Measure the automated testing . . . . . . . . . . . . . . . . . . 97
11.2 Dynamic comparison . . . . . . . . . . . . . . . . . . . . . . . . . . 98
11.3 Subscripted tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
11.4 Test evaluation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
11.5 Test of another real-time embedded system . . . . . . . . 99
11.6 Extending test focus. . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Appendix A Design diagrams . . . . . . . . . . . . . . . . . . . 101
A.1 The test performance . . . . . . . . . . . . . . . . . . . . . . . . . 101
A.1.1 Internal module collaboration . . . . . . . . . . . . 101
A.1.2 Detailed class diagrams . . . . . . . . . . . . . . . . . 108
A.1.3 Activity diagrams . . . . . . . . . . . . . . . . . . . . . 141
A.2 The test evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
A.2.4 Detailed class diagrams . . . . . . . . . . . . . . . . . 184
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
CHAPTER 1
INTRODUCTION
This first chapter presents the background of the master thesis and important parts such as the problem description, the purpose and the goal of the work. It also contains a description of the method that has been used and the delimitations of the work.
1.1 Background
At the beginning of the era of manufacturing clothing, it was truly seen as a craft. Over the years it has developed from the tailor sewing by hand to large industries where machines play a major part in making the process more efficient by automating wherever possible. This evolution is also reflected in industries like the software industry, where the development process is made as efficient as possible by using the power of automated software testing where suitable.
This master thesis presents how the power of automated software testing can be used. When finding out how automation can be used in a quality assurance process, an interesting issue that may arise is how to get there: how should one go from manual testing to automated testing? By reading this master thesis, one will get information about all of the steps that need to be passed in the process of going from manual testing to automated testing. A practical example of how it is possible to switch from manual testing to automated testing is described. The system that is the test object is an embedded real-time system.
1.3 Purpose
The reason for wanting to automate a part of the testing process is a desire to improve efficiency and to secure a high quality of the product. The improved efficiency can be achieved by running tests at night on hardware that is not otherwise used, and by the test responsible spending less time on managing automated testing than on running tests manually. A higher quality is hoped to be reached through more structured and thorough testing.
1.4 Goal
This master thesis has resulted in a study of how to automate a test
procedure and an implementation of an automated testing process.
1.5 Method
The main idea behind the method has been to build a knowledge base by analysing previous work within the area of automated testing, in order to have a stable foundation when implementing an automated test procedure. To be able to achieve the goal of this master thesis, the work has been structured in the following steps:
• literature studies within the areas of software testing and automated software testing;
• research into how manual system testing is done;
• formulating test cases that should be automated;
• research into existing test tools that could be useful;
• designing and implementing an automated test process.
1.6 Delimitations
The process that is automated in this master thesis is a regression testing process that is performed to verify correct functionality in parts that have not been edited and to test different combinations of software and hardware. The automated testing process in the study is performed at system testing level, and it only includes testing the positioning functionalities in the i.box product.
PART I
INTRODUCTION TO SOFTWARE
TESTING
[Figure: The software testing life cycle as a V-model. Specifying requirements, designing and coding on the left-hand side are each paired with a test activity on the right-hand side: requirements with specifying/designing and executing system/acceptance tests (requirements review, acceptance test plan review/audit), design with specifying/designing and executing integration tests, and code with code reviews and specifying/designing and executing unit tests (unit test plan review/audit).]
When the units that build up the software have been tested separately during the unit testing, it is time to put the units together and test their collaboration with each other. This testing is called integration testing and is a testing procedure on a lower level than the testing this thesis deals with, namely higher order testing.
the new message to be displayed. But for some reason, every second attempt to create a new message causes the system to have a response time of over one minute. In that case, the user would most certainly search for another e-mail client. The purpose of performance testing is to find such defects in the system according to specified requirements. It is very important that those requirements are measurable and not only vaguely specified, as in: “The system should respond in a reasonable amount of time”. In such a case the test would have an outcome that very much depends on the person running the test case.
cur” situation, it is also likely that the same deficiency will show in more realistic, less stressful situations [Myers 2004].
ATM. So the user never gets her money and still has a registered
withdrawal in her account.
Some systems rely on different input sources to calculate their
output. They can be designed to calculate a result of varied precision
depending on how many input sources are functioning. When testing
such a system's ability to recover, one can simulate loss of input on
some sources and check if the system detects the failing input, but
still produces a correct output.
TECHNIQUES FOR
CREATING TEST CASES
One important matter in testing is how to find which input values to test with in the test cases. There exist a number of techniques to use for black-box testing and for white-box testing. This thesis concentrates on function testing at system level and therefore presents theory for how to create test cases for black-box testing.
written more like the former example so in that case, the number of
test cases that has to be performed is reduced to only two.
if (x < 13)
y = 3;
if (x > 12)
y = 5;
...
Figure 4.2 Example of when equivalence class
partitioning is applicable.
if (x == -39)
y = 3;
if (x == -38)
y = 3;
...
if (x == 12)
y = 3;
if (x == 13)
y = 5;
if (x == 14)
y = 5;
if (x == 15)
y = 5;
...
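For the code in Figure 4.2, the input domain falls into two equivalence classes, x <= 12 and x >= 13, so one test value per class is enough. A minimal sketch of ours, with the representatives chosen at the class boundary:

#include <assert.h>

int f(int x) {
    int y = 0;
    if (x < 13)
        y = 3;
    if (x > 12)
        y = 5;
    return y;
}

int main(void) {
    assert(f(12) == 3); /* representative of the class x <= 12 */
    assert(f(13) == 5); /* representative of the class x >= 13 */
    return 0;
}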
To be able to use the decision table when testing a system, one has to fill it with values according to the business rules. The first step is to put the system inputs in the condition cells, one for each input. From there, one continues by putting all outputs in the “Action 1” to “Action m” cells. Once one has all the conditions and actions representing the business rules in this general form, it is time to fill the table with the actual values for the rules. For each condition, one wants to combine each of its possible values with every possible value for each other condition.
To clarify the theory, below is an example of a filled decision table for a system that is used in an auto insurance company that gives a discount to people that are married and good students. It also only gives insurance policies to people that are either married or a good student.
The system in this example has two conditions, “married” and “good student”, which both take the inputs “yes” or “no”. The actions for this system are “discount”, which can have numerical values, and “insurance”, which can only have the values “yes” or “no”.
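Reconstructed, the filled decision table could look as follows (the discount percentage is a placeholder of ours; the thesis does not state the actual figure):

Condition      Rule 1   Rule 2   Rule 3   Rule 4
Married        yes      yes      no       no
Good student   yes      no       yes      no
Action
Discount       X %      0        0        -
Insurance      yes      yes      yes      no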
In the example above, all inputs only had binary values. This makes things a lot easier when creating the decision table, since it can be filled with every possible combination of input values. But what about when the inputs can take any integer as an input? Then it is certainly impossible to take every combination of inputs and put them into the decision table. In this case, one has to analyse the system’s business rules before creating the decision table.
For finding test cases that test all combinations of pairs, something called orthogonal arrays can be used, which originate from Euler as Latin Squares [Copeland 2004]. An orthogonal array is a table where the columns represent the input variables, the rows represent test cases and the values in the cells represent the value of each variable in the test case, respectively. If all test cases are executed, it is assured that all pairwise combinations of possible values for the input parameters are tested. To find test cases using orthogonal arrays, do the following steps:
1. Identify the variables.
2. Determine the number of choices for each variable.
3. Find an orthogonal array that fits the preconditions.
4. Map the problem on the orthogonal array.
5. Construct test cases.
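As a minimal illustration of steps 3 to 5 (our own example, smaller than the one from [Copeland 2004] that follows): for three variables with two choices each, the orthogonal array L4(2^3) covers every pairwise combination in four test cases instead of the 2^3 = 8 needed for all combinations:

Test case   Variable A   Variable B   Variable C
1           1            1            1
2           1            2            2
3           2            1            2
4           2            2            1

Any two columns, read together, contain each of the value pairs (1,1), (1,2), (2,1) and (2,2) exactly once.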
Consider the following example from [Copeland 2004]:
A company is developing a web based application that should be
able to run in different browsers, with different plug-ins, different
AUTOMATING A MANUAL
TEST PROCEDURE
If the tests are seldom run, it will probably take a long time until the automated test procedure results in time savings. For instance, a test could be run manually once every month, with every test execution lasting 15 minutes. Automating that process might require ten hours, or 600 minutes. That means that, from a time perspective, one will benefit from the automated test only after 600 / 15 = 40 monthly runs, that is, after three years and four months.
When the test object constantly goes through major changes, it is most likely that the test automation also needs to go through major changes. When the changes are so time demanding that it takes as long as, or longer, to maintain the test automation than to test manually, a common problem is that the test automation will be abandoned.
In situations where the execution of tests is easily performed manually but hard to automate, one is discouraged from automating the procedure. This situation occurs, for instance, when a part of the test is to judge aesthetic appeal, which is easy for a human to decide but hard to automate. In cases when the test object is an embedded system or a real-time system, one is advised to reconsider whether one should automate, because the system can require specialised tools or features that can be hard to implement.
Situations where physical interaction is involved are situations when one is recommended not to automate the test procedure, since physical interaction can be hard to automate. It can for instance be hard to automate turning power on or off, unlocking a lock with a key or loading a CD into a CD player.
One important factor that should be considered before deciding whether one should automate testing is how good the manual test procedure is. It is advised not to automate if the manual testing process has problems, for instance that it is badly defined. This is expressed by Mark Fewster and Dorothy Graham with: “Automating chaos just gives faster chaos” [Fewster 1999, p.11].
Mark Fewster and Dorothy Graham also describe characteristics that are positive when introducing automated testing. Characteristics such as the test being important, easy to automate, quick to pay back and run often are good indicators for automating [Fewster 1999]. As mentioned earlier, maintenance can be a problem for automated testing. Regression testing can therefore be especially suitable for automated testing, since the maintenance is low due to the tests being run repeatedly.
• shared scripts;
• data-driven scripts;
• keyword-driven scripts.
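As an illustration of the last of these techniques, a keyword-driven test case keeps the test steps and their data in a readable script, while a support script implements each keyword. The format below is a hypothetical sketch of ours (the action-based test cases described in chapter 9 follow a similar idea):

# Hypothetical keyword-driven test case
SetTime      2007-08-17 10:00
OpenDoor     front
Drive        500 m
PushButtons  "confirm trip"
CheckOutput  next net point = 12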
The test log is, as the name implies, used to log what has occurred during the execution of the test. It should contain: a test log identifier, a description of the execution, and the activity and event entries that have occurred during the test.
The test incident report is used to be able to store information
about unexpected events during test execution. It should contain: a
test incident report id, summary of the execution, incident descrip-
tion and impact of the incident.
The test summary report is used to sum up the test execution and
presents any incidents that may have occurred during execution and
the result of the test.
CHAPTER 7
AUTOMATED TESTING IN
EMBEDDED SYSTEMS
The electronic devices used in everyday life, such as washing machines, cell phones, PDAs and car stereos, are used more and more throughout society. The computer system in such a device is called an embedded system.
plant. To interact with the system over these interfaces, special software may have to be developed that simulates the devices that send and receive signals to and from the embedded system software.
If all hardware that sends input to and reads output from the embedded system is simulated, the term full simulation is used [Kandler 2000].
AUTOMATED TESTING
5.1 Introduction
According to Fewster and Graham [Fewster 1999], there are four attributes which distinguish how good a test case is. The first is the defect detection effectiveness of the test case, i.e. whether it is at all able to detect the error it was designed to detect. The second quality is that one would want the test case to be as exemplary as possible. An exemplary test case will test many test conditions in a single test run. The last two are cost factors that affect the quality of the test case: how economic the test case is to perform, analyse and debug, and how evolvable it is, i.e. how much effort is needed to adapt the test case to changes in the software under test. It is often hard to design test cases that are good according to all these measures. For example, if a test case can test many conditions, it is likely to be less economic to analyse and debug, and it may require big adaptations to software changes.
When a test case is automated, its value on the economic scale is most likely to rise if it is to be performed several times. However, if the test case tends to be run only occasionally, its value tends to fall, since the development cost is higher for an automated test than for the same manual test. An automated test also tends to be less evolvable than its manual correspondence [Fewster 1999].
A tool does not possess any imagination. If the test case has errors, a human can find and correct those errors immediately during the test run, which means that the test can be carried out successfully anyway.
PART III
IMPLEMENTING AUTOMATED
TESTING
8.1 IBIS
An IBIS (Integrated on-Board Information System) is a general term for an on-board computer used in public transport vehicles for controlling all computer-aided service functions on the vehicle. The IBIS controls, for example, passenger systems like electronic displays, automatic announcements of stops and the printing of tickets. On a bus, it also controls traffic lights, changing them to green when the bus approaches the light. Through audio and text messages, the command centre has the ability to communicate with the driver via the IBIS. For more information (in German) about IBIS, see [VöV 1984], [VöV 1987] and [VöV 1991].
8.2.1 Positioning
8.2.1.1 GPS-receiver
The vehicle has a GPS receiver that every second sends a GPS message to the GPS module in the i.box. The GPS messages that are sent are of two different types, namely GPGGA and GPRMC messages. Both contain the current GPS position along with some other information. In the GPGGA message, this information is for example the time, the number of satellites used in the calculation of the position, the altitude, and data for the geoid and ellipsoid used in the calculations. For the GPRMC message, this information is among other things the speed, date and time. The date and time are used by the i.box to adjust its clock periodically.
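For illustration, the two message types are NMEA 0183 sentences and typically look as follows (the values are the well-known NMEA example sentences, not output recorded from the i.box):

$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47
$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A

In both, 123519 is the time 12:35:19 UTC and 4807.038,N together with 01131.000,E give the position in the DM format; the GPGGA sentence additionally carries the number of satellites (08) and the altitude (545.4 m), and the GPRMC sentence the speed (22.4 knots) and the date (230394).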
reason for not having only one system is that one wants the effect that is reachable by using all of them. If one looks into the four systems, starting with the GPS, one sees that the accuracy of using only GPS is not sufficient in all situations. GPS can therefore not be used alone. Looking into the calculations based on the distance pulse, they cannot be used alone either, since the calculations cannot be exact, as the distance varies when driving the same route different times. Because stops can be of different size, i.e. they can be designed to allow one or many buses to stop at the same time, there are defined areas for the distance pulse meter and the GPS within which the i.box should synchronise, i.e. set the position to a stop. There exist three different synchronisation areas: the GPS area, the distance pulse area before the stop, and the distance pulse area after the stop, see ”Figure 8.1: The three different areas that are important for synchronisation.” on page 51.
The GPS area is used all the time, the distance pulse area before the stop is used when the vehicle is approaching the stop, and the distance pulse area after the stop is used when the vehicle is leaving the stop. The distance meter synchronisation areas can be set differently for different stops, and the GPS area is set up for each type of i.box in a configuration file.
[Figure 8.1: The three different areas that are important for synchronisation — the GPS area around a stop and the distance pulse areas before and after the stop, illustrated for a route passing several stops.]
The automated test procedure was divided into four different steps: constructing test cases, executing the test, executing an evaluation of the result, and generating the result.
The first part, constructing test cases for automated testing, was performed by hand, based on the test cases that existed for manual testing and on the methods described in “4 Techniques for creating test cases” on page 17. Since the test procedure deals with regression testing, where new test cases are not created often, there is no need for a tool to generate test cases.
The last three parts of the process were automated by specifying requirements on the different parts, doing a survey of some of the tools that were available, and, after evaluating the result of the investigation, deciding to develop tools that fulfilled the specified requirements.
A set of keywords to use when writing the test cases has been defined; these are in this thesis referred to as the actions in the test case. A control script, along with a support script for each action, has been implemented, see “9.4.3.4 Test case execution” on page 74.
[Figure: Example of a manual test case.

SYNCHRONISATION AT THE START NET POINT
That the start-net point of a trip is reached is determined by the following criteria:
1) The received GPS coordinates answer to (with a maximal divergence of a setup parameter) the GPS coordinates of the start-net point.
2) The door is opened within the stop area of the start-net point.
3) The driver confirms the start net point manually.
4) A new trip is chosen by the driver.

Test case 1.1.3/1: Choosing a valid trip. The tolerance area (20 m) defined in setup parameter 5211 GPS_Standardfangbereich is fulfilled.
Expected output: The start net point is automatically set in the driver’s terminal.
Actual output: ( ) Function correct ( ) Function has errors ( ) Function not useable]
9.2.2 Preconditions
9.2.2.2 Time
The time precondition is one of the settings that are needed to set all i.boxes in the same state during test execution. Depending on the time settings, different timetables will be shown in the GUI of the i.box. Therefore, the time needs to be set so that the actions that are executed coincide with how they were meant to be executed by the test case creator.
9.2.3 Actions
The actions describe the course of events that should be executed before the output is read. The biggest difference between these actions and the actions in the test cases for manual testing is that both the actions and the parameters that belong to them are more precisely specified. In order to automatically perform the events that are described in the test cases for manual testing, the following fourteen actions have been created.
different from the state another i.box would be in after the same sequence of button pushes. Since one of the requirements on the system is that different systems should be able to be tested with the same test case, one must also have the possibility to specify what the button pushes should accomplish. Therefore, it is also possible to have a string as a parameter to this action. This string describes what should be performed, and it corresponds to different button pushes in different i.boxes. For instance, “confirm trip” corresponds to button sequence 1 2 3 in one i.box and button sequence 7 2 8 in another.
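A minimal sketch of such a mapping (the sequences follow the example above; the layout is our assumption, not the thesis's implementation):

#include <map>
#include <string>

// Per-i.box translation from a symbolic command to the concrete
// button sequence that realises it on that i.box type.
typedef std::map<std::string, std::string> ButtonMap;

ButtonMap makeButtonMap(const std::string & iboxType) {
    ButtonMap m;
    if (iboxType == "A")
        m["confirm trip"] = "1 2 3";   // hypothetical i.box type A
    else
        m["confirm trip"] = "7 2 8";   // hypothetical i.box type B
    return m;
}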
9.2.3.3 Wait
The wait action makes the test system pause in the execution for the number of seconds given as an argument. It can be used to give the i.box more time to execute its commands, and it is also useful if a test person wants to manually control something in the i.box during test execution.
er hand, if the argument is zero, the internal GPS status will be set to not active, and if GPS coordinates are currently being sent, the sending will cease.
the GPS coordinates will be fetched from the database with information about the trip; in the other case, the GPS coordinates that are given as arguments will be used.
The different expected values that existed in the test cases for manual testing were translated into specified expected values. At least one expected value must be specified in the test case for it to be complete. The name of the expected value describes what output is of interest. The expected values are:
1. Vehicle is at specified net point
2. Vehicle is between net points
3. The status of the trip
4. I.box has been synchronized
5. Next net point is at specified net point
6. Vehicle is off-route
An expected value can cover more than one thing that one is interested in seeing the result for. For instance, checking that the vehicle is between two net points means testing which stop point was the last one passed, which is the next one, and how far the vehicle has driven since the last stop point. To be able to test different things for one expected value, at least one parameter was added to each expected value. Which parameter values needed to be added was controlled by what could be read as output from the i.box. The expected values and their parameters are:
1. Vehicle is at specified net point
1.1. Last net point
1.2. Distance since last net point
2. Vehicle is between net points
2.1. Last net point
2.2. Next net point
2.3. Distance since last net point
3. The status of the trip
3.1. Trip status
4. I.box has been synchronized
4.1. Distance since last synchronisation
5. Next net point is at specified net point
5.1. Next net point
6. Vehicle is off-route
6.1. Off-route status
Three parameter values have special conditions. The parameter value “Distance since last net point” of the expected value “Vehicle is at specified net point” should always be zero. That means that the test case user must not specify the parameter value when creating a test case with the expected value “Vehicle is at specified net point”; it is set automatically when storing the test case. Another parameter value with a special condition is the parameter “Distance since last net point” of the expected value “Vehicle is between net points”. This value should always be larger than zero, which means that it is not possible to set a single correct value, since all values larger than zero mean that the vehicle is between the two stop points. The last parameter value with a special condition is the pa-
A requirement that was common for all three parts (execute test, evaluate the result and generate the test report) is:
• costs should be low.
The tool for executing the tests should be:
• able to test embedded systems;
• compatible with the i.box, so it can read outputs and send inputs;
• able to simulate GPS coordinates in DM format and send them to the i.box;
• able to simulate distance pulses and send them to the i.box, alternatively be able to use the existing test tool to send distance pulses, see “9.4.2 B4 test system” on page 70;
This is in fact not really a test tool, but a tool for automating any task that is performed with a keyboard and a mouse. The reason for looking into this tool was that it might be used together with existing software, see “9.4.2 B4 test system” on page 70 and “9.4.1 Auster” on page 68, for sending input to the i.box.
The first reason for not choosing this tool was that creating test cases would be a long, error-prone task, because all test cases would have to be recorded by performing them manually. If one made a mistake when recording a test case, the whole test case would have to be recorded again from the beginning. Since a test case execution is quite long, it would take much time to record a test case, and because of the length, it would be quite likely that errors would occur during recording.
The other reason, which is the main reason for not choosing this tool in the automation process, is that it would be hard, if not impossible, to synchronise the use of all the tools that would have to be used for simulating all kinds of input.
Some other tools for automated testing have also been investigated.
These tools include:
• Mercury WinRunner
• Mercury TestDirector
• Compuware TestPartner
• National Instruments’ Labview
• Android
What they all have in common is that they either could not be used with embedded systems, were not compatible with the i.box, or could only perform GUI testing. Therefore, they were all rejected for use in our implementation of automated testing.
Even though several tools on the market offer products with powerful testing functionality, none of the analysed tools fulfilled all the requirements from “9.3.1 Requirement on tool” on page 65. That led to the decision to develop an application specifically for testing the location functionalities at system test level in the i.box.
9.4.1 Auster
As written earlier, the test execution tool was designed using module based concepts. The following main functionalities, which could be divided into different modules, were identified: fetching test cases and test input, controlling the execution of test cases and their actions, sending input signals to the i.box, reading output from the i.box, and storing the output for later evaluation.
[Figure: Module overview of the test execution tool — Test Manager, Test Case Execution, External Tools, External Data, Data and Environment Communication — connected to Auster and the B4 test system.]
part. After the set-up procedure, the test manager sends a command to the external data manager to fetch the test cases that should be executed. As a last part, the test execution module is called to iterate over the test suite and execute each test case. The flow of events that occurs in this module can be seen in ”Figure 9.5: Flow of test manager.” on page 73.
[Figure 9.5: Flow of test manager. 1) Set up environment; 2) initialisation process, in which functionalities in the external data manager are invoked; 3) execution of unprocessed test cases; 4) execution of unprocessed actions in each test case, which ends when all actions are executed.]
The logging part is the part that is most connected to the other parts of the application. In general, it is used for storing outputs from the i.box and for writing errors to a log file. That means that it handles: storage of actual data, logging of different outputs from the i.box that are used during program execution, and error handling when it is not possible to store the error in the database.
The conversion part handles all the necessary conversions in the program. The most advanced conversions are those between GPS coordinates in formats such as DMS, DM, DD and CC.
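As an example of one of these conversions (a sketch of ours, not the interface of the thesis's Convert class): the DM format of the NMEA sentences packs degrees and decimal minutes into one number, so converting to decimal degrees (DD) is degrees + minutes / 60.

#include <cmath>

// Convert a coordinate in DM format ("ddmm.mmmm") to decimal
// degrees: DD = dd + mm.mmmm / 60.
double dmToDd(double dm) {
    double degrees = std::floor(dm / 100.0); // the "dd" part
    double minutes = dm - degrees * 100.0;   // the "mm.mmmm" part
    return degrees + minutes / 60.0;
}
// Example: 4807.038 (48 degrees 7.038 minutes) -> 48.1173 degrees.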
The timer contains functionalities that can be found in a stop watch: it provides possibilities to start, stop and restart, to set a maximal amount of time that may elapse, and to check whether there is time left.
Among the creational design patterns, Singleton has been used whenever needed. It is needed to assure that there exists only one object of each of the following: the internal storage, the output log, the GPS sender, the connection to Auster and the connections to the databases.
Behavioural patterns such as Iterator have been used to iterate over, and get information about, test cases, actions, expected values and net points.
For descriptions of the design patterns, see [Gamma 1995].
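A minimal sketch of the Singleton variant used, based on the getInstance/clearInstance methods visible in the class diagrams of Appendix A (the class name and bodies here are our illustration):

class OutputLog {
public:
    // Return the single instance, creating it on first use.
    static OutputLog * getInstance() {
        if (instance == 0)
            instance = new OutputLog();
        return instance;
    }
    // Delete the instance, e.g. at shutdown.
    static void clearInstance() {
        delete instance;
        instance = 0;
    }
private:
    OutputLog() {}                          // no public construction
    OutputLog(const OutputLog &);           // no copying
    OutputLog & operator=(const OutputLog &);
    static OutputLog * instance;
};
OutputLog * OutputLog::instance = 0;

Note that this simple form is not thread-safe; shared state in the real classes is guarded with mutexes where several threads use it.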
9.4.5 Concepts
The concepts in this section are concepts that are especially interesting to look more into. Some of them are issues that are applicable only to embedded systems, some of them regard only testing in real-time systems, some of them are interesting only for testing the i.box, and some are of interest when using automated testing in general, although most of them are applicable to testing in several of the mentioned systems.
often in embedded systems. The core of the issue is that one wants to be sure that the value that is read as the actual value really is the actual value and not an old value for a response in the system. One needs to be sure that all the actions that have been executed in the test application also have been executed in the object that is tested. The best solution would be to read the output exactly from the time point when the last action in the test case has been executed in the test object. Unfortunately, this is hard, and perhaps even impossible, to achieve in many test executions. The underlying reasons that make it impossible differ depending on the test object. When testing the i.box, the reasons depend on i.box-specific characteristics: that it is a real-time system and that it is an embedded system.
When testing a real-time system, the time to fetch the output could be decided based on a defined response time, which would assure that actions that have been executed in the test application also have been executed in the test object. In the case of testing the i.box, there does not exist a well specified response time for executing each of the actions, which means that it is not possible to be totally certain when the test object has finished the execution.
There will also be a delay from the time point when the execution in the i.box is finished until the output has reached the test application. The reason is that the output can only be read through the Auster and not directly from the i.box. To be able to minimise the delay, one would have to read the outputs directly from the software on the test object, but since it is an embedded system, the hardware is not meant for having a test application running; it is designed for the software of the main application. The next question that perhaps comes to one's mind is: why not move the software out of its environment and test it in an environment that is suitable? But that is unfortunately not that easy to do, since then it would not be a system test. One wants to test the behaviour of the application in the environment that it will be run on when the application is in use.
Another issue that one needs to consider when designing an automated test application is that the output value could change from a correct value to an incorrect value just after the value is read. It can therefore be of interest not only to read a single value, but also to investigate the values over a period of time. All the scenarios that are viewed in ”Figure 9.7: Example of different output situations.” on page 79 would be possible to get as actual values. If the output were read at different time points, different actual values could be fetched, which could lead to false results. The execution of the test case could be seen as a success, even if it contained errors.
In this implementation, actual values are read during a period of time, to be able to fetch any output values that may differ. The time point when to start reading the outputs as actual values is when the last action has been executed in the test application, plus a response time that is specified for each different i.box system. As mentioned earlier, there does not exist a well defined response time for all the different actions that can be executed; the response time used here is the same regardless of what events are performed in the i.box. The response time that is defined must therefore be the longest time that any action requires. That means that the system will wait too long after some executions, and the situations described in ”Figure 9.7: Example of different output situations.” on page 79 could unfortunately be missed in cases when the execution requires less time than the specified response time. The time point for when to stop reading actual values is calculated so that the duration of the fetching period is always ten seconds, a value that has been selected based on testing.
[Figure 9.7: Example of different output situations. A is the time when action A is sent, which should result in output 1; R is the time point when the system responds to the input. Six output traces over time are shown: a trace passes only when the output holds steadily at the expected value 1 after the response, and fails when it fluctuates, responds late or never reaches 1.]
have preceded and what i.box system is tested. That means that something needs to be done to set the i.box in the same start state before the new test case can be executed. One solution would be to have an ending procedure, executed after the actual value is read, which would set the i.box in the correct start state. But since the i.box is in different states depending on which actions are in the test case that is run and which i.box system is tested, and since the same test cases are used for all the different systems, this is not possible in this study. Also, if for some reason the test case execution fails, either because of a malfunction of the i.box or because some input signals get lost on their way to the i.box, the i.box will be in a completely unknown state, one of a large number of different states it can end a test case in. This is also the reason for not being able to have dependent test cases, that is, test cases where the next test case depends on the outcome of the last one and continues to run directly where the last one ended, since the state of the i.box is very different from test case to test case.
Another solution is to reboot the i.box system after every test case execution. This results in the i.box always being in the same start state before running a new test case, independent of the result of the previous execution and of what actions have been executed in the i.box. It affects the execution of the tests in that every execution requires more time; the reboot takes approximately two minutes. This last solution is the way it is implemented in this study.
values, or in an error log file. The first choice of where to store the error should always be the database, which makes it possible to include the error in the test report. If an error occurs when it is not possible to write to the database, the error should be written to the error log file.
The implementation was done using C++ in Visual Studio. Class diagrams that describe the implementation can be found in “Appendix A Design diagrams” on page 101. Four threads are used during program execution: one thread continuously logs outputs on a TCP port, one triggers the i.box to dump specified output information, one sends GPS coordinates, and the main thread executes the program.
only on the southern hemisphere, and not everywhere as for the east-
ern coordinate.
9.4.7.3 CC
The CC format is an extension of the UTM, which is a mapping to a cartesian coordinate system that is applicable around the globe. This system uses a global coordinate system so that calculating distances, finding coordinates along a line, and other coordinate calculations are simplified. The format is not a universal standard, but a specialisation for this implementation made by the authors of this thesis. It is used because it is easier to calculate a line between two points that in the UTM belong to different zones than it is with the UTM format.
[Figure: Overview of the evaluation application. A test starter invokes the automated test evaluation, which reads the test case together with the actual values and produces a test report, writing to an error log file when necessary.]
9.5.1 Evaluation
[Figure: Evaluation hierarchy. Each parameter value is evaluated to passed or failed; an expected value passes only if all of its parameter values pass; and a test case passes only if all of its expected values pass, failing as soon as one expected value fails.]
are larger than zero mean that the i.box has not been synchronized, and should result in a pass in the evaluation procedure.
Since the actual values are saved during a period of time, it is possible to base the evaluation on several values. The reason for wanting to do so is discussed in “9.4.5.1 When to read the output?” on page 76. As viewed in ”Figure 9.7: Example of different output situations.” on page 79, the only situation that should result in a passed result is when the actual value does not fluctuate during the time period. All results are saved in the test report when there exists a fluctuation; otherwise, only one result is written in the test report.
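A sketch of that rule (our formulation of it): a parameter value passes only if every actual value sampled during the read period equals the expected value.

#include <cstddef>
#include <string>
#include <vector>

// Pass only if the sampled actual values held steadily at the
// expected value for the whole read period; any fluctuation fails.
bool parameterValuePassed(const std::vector<std::string> & samples,
                          const std::string & expected) {
    if (samples.empty())
        return false;                // nothing was read: fail
    for (std::size_t i = 0; i < samples.size(); ++i)
        if (samples[i] != expected)
            return false;            // wrong value or fluctuation
    return true;
}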
9.6.1 Format
The test report was made according to the IEEE Std 829-1998 standard [IEEE 1998], with some modifications. The two documents test log and test incident report were merged into the test report. Information about who has run the test case is omitted, even though it exists in the IEEE standard. The main parts of the report consist of information about the test execution, the test case, its preconditions, actions, expected result and actual result.
The XSL file that was built presents a summary of the test case execution with a picture describing the result. In the summary part, any errors that may have occurred during the execution of a test case are also shown. Further down in the file, all information corresponding to each test case is shown. An example of a test report visualised by using XSL is shown in ”Figure 9.13: Example of a test report as it can be presented in a web browser formatted with an XSL-style sheet.” on page 91.
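For illustration, such a report is stored as XML that references the style sheet through a standard xml-stylesheet processing instruction; the element names below are our assumption, not the exact schema used:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="testreport.xsl"?>
<testreport>
  <summary result="passed" errors="0"/>
  <testcase id="1.1.3/1" result="passed">
    <expectedvalue name="Vehicle is at specified net point"
                   result="passed"/>
  </testcase>
</testreport>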
This ending part of the master thesis presents an analysis of the work that has been done in transforming the manual test procedure into an automated test procedure.
The work of this thesis is not comprehensive within the large area of how a manual testing procedure in a real-time embedded system can be automated; there remain many interesting fields to explore. Some of the fields that deserve extra attention are described in this part.
CHAPTER 10
ANALYSIS
10.1 Result
The objective of this master thesis was to automate a manual testing procedure in an embedded real-time system, which has been attained. The master thesis has shown that it is possible to automate the testing process, by implementing a solution that is ready to be used in an automated testing procedure. However, many things have to be taken into account when automating testing in an embedded system. For example, it can require special test hardware and software, which was the case for this implementation. If this hardware and software had not already been implemented, the work would have been much harder and would have cost a lot more. Because these tools were already used for the manual testing, the possibility of implementing automated testing increased considerably.
What can be seen at this point is that the implementation of the tool was successful, but what about the implementation of the automated test process? As this analysis is written, the new automated test tool has only been used for a couple of weeks, and the testers are still exploring its facilities and supposed advantages. To be able to evaluate whether the implementation of the new test process was successful or not, the tool has to be used for some months in order for the testing to start benefiting from it. This evaluation could be a part of the future work and is shortly described in “11.1 Measure the automated testing” on page 97.
CHAPTER 11
FUTURE WORK
takes to have test cases with partly duplicated contents. This issue has been left out of the work with this master thesis.
DESIGN DIAGRAMS
[Figure: Internal module collaboration. TestManager uses TestCaseExecutor, which drives the action classes (DriveAction, GPSStatusAction, WaitAction, DoorAction, InitPositionAction) and the data classes Route, RelativeAttr, TestCase, TestCaseList and ConfigFile; DataManager and InternalData manage the data, and communication with the environment goes through TelnetClient, AusterCommunicator, DLLCommunicator, AusterStarter and GPSCommunicator.]
TestException
-string errorMsg
+TestException(string error)
+TestException()
+~TestException()
+what():string

[Class diagram: TestException is specialised by ConvertException, LoggerException, DataManagerException, TestManagerException, ActionExecutionException, TestCaseExecutionException, ExternalDataManagerException and EnvironmentCommunicationException, each with its own errorMsg.]
[Class diagram: external data access. TestCaseDB and TripDB specialise Database and are used by ExternalDataManager, together with ConfigFileReader.]

[Class diagram: utility classes. Convert, Timer and Logger (with LoggerBlock, LoggerBlockParam and LoggerBlockList), plus the threading classes IRunnable, Thread and ThreadException.]
TestManager
-ExternalDataManager * externalDataManager
-DataManager * dataManager
-TestCaseExecutor testCaseExecutor
-AusterStarter * austerStarter
-Thread * austerThread
-Logger * logger
-Thread * loggerThread
+TestManager()
+~TestManager()
+executeTests(int noArgs,const char * startInfo[])
[Class diagram: ActionExecutor in context, used by TestCaseExecutor and collaborating with TestCaseList, GPSCommunicator, Thread, DLLCommunicator, TripDB and AusterStarter.]
ActionExecutor
-tcId : int
+ActionExecutor()
+~ActionExecutor()
+executeActions(int testCaseId):void
ButtonAction
+ButtonAction()
+~ButtonAction()
+executeAction(int testCaseId,int actionId):void
+sendButtons(string buttons):void
-extractFirstButton(string & sequence):string
-removeBlanks(string & sequence):void
-isButtonList(string buttonList):bool
DoorAction
+DoorAction()
+~DoorAction()
+executeAction(int testCaseId,int actionId):void
DriveAction
+DriveAction()
+~DriveAction()
+executeDriveDistance(int testCaseId,int actionId):void
+executeDriveToPosition(int testCaseId,int actionId):void
+executeDriveDistanceToPosition(int testCaseId,int actionId):void
+executeDriveRelGPSArea(int testCaseId,int actionId):void
+executeDriveRelBSAreaBefore(int testCaseId,int actionId):void
+executeDriveRelBSAreaAfter(int testCaseId,int actionId):void
+executeDriveRelGPSAreaBSAreaBefore(int testCaseId,int actionId):void
+executeDriveRelGPSAreaBSAreaAfter(int testCaseId,int actionId):void
-getRoute(RelativeAttr relativeAttr):Route
-getRoute(int testCaseId,int actionId,bool fetchDistance):Route
-getRelativeAttr(int testCaseId,int actionId,bool bs,bool bsAfter,bool GPS):RelativeAttr
-simulateJourney(Route driveInfo):int
-calculateGPSCoordinates(Ibox startGPS,Ibox endGPS,double wiDistance):list<NMEA>
-calculateSpeed(double distance,int numberGPS,double sendingTimeGPS):double
GPSStatusAction
+GPSStatusAction()
+~GPSStatusAction()
+executeAction(int testCaseId,int actionId):void
InitPositionAction
+InitPositionAction()
+~InitPositionAction()
+executeAction(int testCaseId,int actionId):void
WaitAction
+WaitAction()
+~WaitAction()
WIStatusAction
+WIStatusAction()
+~WIStatusAction()
+executeAction(int testCaseId,int actionId):void
DataManager
-DataManager * dataManagerInstance
+clearInstance():void
+getTrip():Trip *
+getConfigFile():ConfigFile *
+getTestCaseList():TestCaseList *
+getInternalData():InternalData *
+getInstance():DataManager *
+~DataManager()
#DataManager()
#DataManager(const DataManager & arg1)
Trip
-map<int, map<string,string,std::less<string> > , std::less<int> > netPointList
-map<int, int, std::less<int> > orderNetPoint
-map<int, int, std::less<int> > idNetPoint
-map<string,string,std::less<string> > tempNetPointInfo
+getNetPointOrder(int netPointId):int
+emptyTripData():bool
+getNetPointId(int netPointOrder):int
+getNetPointInfo(int npId,string name):string
+setNetPointInfo(int npId,string name,string value):bool
+~Trip()
+printValues():void
#Trip()
#Trip(const Trip & arg1)
#operator=(const Trip & arg1):Trip &
InternalData
-map<string,string> internalData
TestCaseList
-vector<TestCase> testCaseList
-map<int, int, std::less<int> > testCaseIndex
-activeTestCaseIndex : int
-activeActionIndex : int
#TestCaseList(const TestCaseList & arg1)
#operator=(const TestCaseList & arg1):TestCaseList &
#TestCaseList()
+~TestCaseList()
+addActionToTestCase(int tcId,int aId,string type):bool
+addParameterToAction(int tcId,int aId,string name,string value):bool
+addParameterToTestCase(int tcId,string name,string value):bool
+addExpectedValueToTestCase(int tcId,string type):bool
+getActionParam(int tcId,int aId,string name):string
+getTestCaseParam(int tcId,string name):string
+getNextExpectedValue(int tcId,int eId):string
-createTestCase(int tcId):int
-getTestCaseIndex(int tcId):int
+printValues():void
+getFirstTestCaseId():int
+getNextTestCaseId():int
+getFirstActionId(int tcId):int
+getNextActionId(int tcId):int
TestCase
+map<string, string, std::less<string> > testCaseInfo
+map<int, map<string, string, std::less<string> >, std::less<int> > actionList
+map<int,int, std::less<int> > actionOrder
+map<string, string, std::less<string> > actionParameterList
+vector<string> expectedValueList
#TestCase()
+TestCase(const TestCase & original)
+~TestCase()
ConfigFile
-std::map<string, string, std::less<string> > configInfo
+printValues():void
+getConfiguration(string name):string
+setConfiguration(string name,string value):bool
+~ConfigFile()
#ConfigFile()
#ConfigFile(const ConfigFile & arg1)
#operator=(const ConfigFile & arg1):ConfigFile &
TelnetClient
-SOCKET s
+TelnetClient()
+~TelnetClient()
+connectToHost(int PortNo,const char * IPAddress):void
+closeConnectionToHost():void
+sendMessage(string messageToSend):void
+sendMessage(const char messageToSend):void
+receiveMessage():void
AusterStarter
#_continue : bool
-austerActive : bool
-string iboxComType
-string austerPath
-string iboxIP
-string iboxType
-AusterStarter * austerStarterInstance
-austerStartStatus : bool
+startAuster():void
+clearInstance():void
+setAusterActive(bool status):void
+setIboxComType(string type):void
+setIboxType():void
+setIboxIP():void
+setAusterPath(string path):void
+austerStarted():bool
+closeAusterPopUp():void
+closeAuster():void
+run():unsigned long
+stop():void
+setAusterStartStatus(bool status):void
+~AusterStarter()
+getInstance():AusterStarter *
#AusterStarter()
#AusterStarter(const AusterStarter & arg1)
#operator=(const AusterStarter & arg1):AusterStarter &
AusterCommunicator
-CMutex * socketMutex
-string incommingMsg
-connected : bool
-SOCKET s
-AusterCommunicator * austerCommunicatorInstance
#AusterCommunicator(const AusterCommunicator & arg1)
#operator=(const AusterCommunicator & arg1):AusterCommunicator &
+getInstance():AusterCommunicator *
#AusterCommunicator()
+~AusterCommunicator()
-connectToAuster(int PortNo,const char * IPAddress):void
+closeConnectionToAuster():void
+sendCommand(string austerCommand):void
-sendMessage(const char messageToSend):void
+getLine():string
+clearInstance():void
DLLCommunicator
-DLLCommunicator * dllCommunicatorInstance
-HMODULE dllHandle
-LONG error
+clearInstance():void
+~DLLCommunicator()
+getInstance():DLLCommunicator *
+configWegimpuls(double calibrationFactor,double speed,double distance):int
+configDLL():int
+setDoor(string doorStatus):int
+resetWegimpuls():int
+startWegimpuls():int
+simWegGet():int
#DLLCommunicator()
#DLLCommunicator(const DLLCommunicator & arg1)
#operator=(const DLLCommunicator & arg1):DLLCommunicator &
-simLoadConfiguration(LPSTR filePath):int
-simDigIOHandler(DWORD reg,DWORD operation,LPVOID param):int
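DLLCommunicator holds an HMODULE, so the simulation functions (simWegGet, simLoadConfiguration and the others) are presumably resolved from a dynamically loaded library. A sketch of that mechanism; the DLL file name and the function signature are placeholders, not taken from the thesis:

#include <windows.h>
#include <stdexcept>

typedef int (*SimWegGetFn)(void);   // assumed signature, for illustration

int callSimWegGet() {
    // "simulation.dll" is a hypothetical name
    HMODULE dllHandle = LoadLibraryA("simulation.dll");
    if (dllHandle == NULL)
        throw std::runtime_error("LoadLibrary failed");
    SimWegGetFn simWegGet =
        (SimWegGetFn)GetProcAddress(dllHandle, "simWegGet");
    if (simWegGet == NULL) {
        FreeLibrary(dllHandle);
        throw std::runtime_error("GetProcAddress failed");
    }
    int result = simWegGet();
    FreeLibrary(dllHandle);
    return result;
}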
GPSCommunicator
#_continue : bool
-CMutex * gpsListMutex
-startWI : bool
-Logger * logger
-sendDone : bool
-sendingActive : bool
-list<NMEA> gpsList
-HANDLE handle
-noSent : int
-Timer timer
-noCoordinates : int
-ofstream * gpsPathFile
-ofstream * gpsPointsFile
-GPSCommunicator * gpsCommunicatorInstance
#GPSCommunicator(const GPSCommunicator & arg1)
#operator=(const GPSCommunicator & arg1):GPSCommunicator &
+getInstance():GPSCommunicator *
#GPSCommunicator()
+~GPSCommunicator()
+run():unsigned long
+stop():void
+setSendStatus(bool status):int
+configSend(NMEA nmea):int
+configSend(list<NMEA> gpsList):int
+sendingDone():bool
-send(string message):int
-constructGPGGA(NMEA nmea):string
-constructGPRMC(NMEA nmea):string
-calculateCheckSum(string message):string
-calculateTime():string
-calculateDate():string
-openCOM():bool
-closeCOM():int
+startWegImpuls():void
+clearInstance():void
+openKMLFile(int tcId):void
+closeKMLFile():void
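The checksum of an NMEA 0183 sentence is the bitwise XOR of all characters between the leading '$' and the '*', written as two hexadecimal digits, which is what calculateCheckSum must compute for the constructed GPGGA and GPRMC sentences. A minimal sketch, assuming message is the sentence body without '$' and '*':

#include <cstdio>
#include <string>

std::string calculateCheckSum(std::string message) {
    unsigned char checksum = 0;
    for (std::string::size_type i = 0; i < message.size(); ++i)
        checksum ^= (unsigned char)message[i];   // running XOR over the body
    char buf[3];
    std::sprintf(buf, "%02X", checksum);         // two uppercase hex digits
    return std::string(buf);
}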
ConfigFileReader
-string filePath
+setFilePath(string newFilePath):void
+read():void
#ConfigFileReader()
#~ConfigFileReader()
#ConfigFileReader(const ConfigFileReader & arg1)
#operator=(const ConfigFileReader & arg1):ConfigFileReader &
ExternalDataManager
-ExternalDataManager * externalDataManagerInstance
+getInstance():ExternalDataManager *
#ExternalDataManager()
+~ExternalDataManager()
+getConfigFileReader():ConfigFileReader *
+getTestCaseDB():TestCaseDB *
+getTripDB():TripDB *
TestCaseDB
-ODatabase connection
-OSession session
-TestCaseDB()
-~TestCaseDB()
+connect():bool
+disconnect():bool
+retrieveTestCases(list<string> tcIds):bool
-retrieveActions(int tc_i_id):bool
-retrieveParameters(int tc_i_id):bool
-retrieveExpectedData(int tc_i_id):bool
-retrieveActionsParameters(int tc_i_id,int a_i_id):bool
-retrieveDataParameters(int d_i_id):bool
-retrieveLongName(string shortName):string
+storeOutput(time_t begin,time_t end,int tc_i_id):bool
+storeError(int tc_i_id,string error):bool
-retrieveLastInsertedId(string table,string idField):int
TripDB
-ODatabase connection
-OSession session
-TripDB()
-~TripDB()
+connect():bool
+disconnect():bool
+retrieveInfoForTestCase(int line,int journey,string domain,string version):bool
-retrieveBusStopInfo(long idStopPoint,int type,TripDB::BusStop * busStop):bool
-getDistanceToNext(int domain,string version,TripDB::BusStop * busStopBegin,TripDB::BusStop * …)
-retrieveRouteId(int journeyId,int line,string version):int
Database
#string name
#string login
#string password
+connect():bool
+Database()
+Database(string newName,string newLogin,string newPassword)
+~Database()
+disconnect():bool
+setName(string newName):void
+setLogin(string newLogin):void
+setPassword(string newPassword):void
Timer
-running : bool
-clock_t start_cl
-time_t start_tim
-acc_time : double
-spaceOfTime : …
+setSpaceOfTime(…)
+start():void
+restart():void
+stop():void
+isTimeLeft():bool
+Timer()
+~Timer()
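Read together with its use in GPSCommunicator, Timer appears to answer whether a configured interval has elapsed since start(). A sketch consistent with the listed members; the truncated spaceOfTime type and setSpaceOfTime signature are completed by assumption:

#include <ctime>

class Timer {
    bool running;
    std::time_t start_tim;     // wall-clock start
    double spaceOfTime;        // allowed interval in seconds (assumed)
public:
    Timer() : running(false), start_tim(0), spaceOfTime(0.0) {}
    void setSpaceOfTime(double seconds) { spaceOfTime = seconds; }
    void start()   { running = true; start_tim = std::time(0); }
    void restart() { start(); }
    void stop()    { running = false; }
    bool isTimeLeft() {
        return running && std::difftime(std::time(0), start_tim) < spaceOfTime;
    }
};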
Thread
#_started : bool
#HANDLE _threadHandle
+Thread(IRunnable * ptr)
+~Thread()
+start(IRunnable * ptr):void
+stop():void
+suspend():void
+resume():void
+join(unsigned long timeOut):void
+isAlive():bool
#run():unsigned long
#checkThreadHandle():void
#checkAlive():void
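Thread::start(IRunnable *) implies the usual Win32 pattern: CreateThread requires a free function, so a static trampoline forwards the call to the runnable's run(). A sketch of that mechanism, simplified and with the diagram's error-checking helpers omitted; the IRunnable interface listed further below is reproduced here so the sketch is self-contained:

#include <windows.h>

class IRunnable {
public:
    virtual unsigned long run() = 0;
    virtual void stop() = 0;
    virtual ~IRunnable() {}
};

// Trampoline with the signature CreateThread expects.
static DWORD WINAPI threadEntry(LPVOID ptr) {
    return static_cast<IRunnable *>(ptr)->run();
}

HANDLE startThread(IRunnable *ptr) {
    DWORD threadId = 0;
    return CreateThread(0, 0, threadEntry, ptr, 0, &threadId);
}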
Logger
#_continue : bool
-logFileExists : bool
-map<string, vector<string>, std::less<string> > expectedValues
-CMutex * loggerBlockListMutex
-list<LoggerExpectedValue > regExpToLog
-ofstream * errorLog
-RegularExpression * re
-Logger * loggerInstance
+getInstance():Logger *
#Logger()
+~Logger()
+run():unsigned long
+stop():void
+read():void
+getLoggerBlockList():LoggerBlockList *
+addLogRegExp(string regExp,string expectedData,vector<string> paramNames):void
+matchesSetup(string line):void
+isEmptyLine(string line):bool
+empty():void
+storeMatch(RegularExpression * re,Logger::LoggerExpectedValue expectedValue):bool
+createTimestamp():time_t
+createTimestamp(string hours,string minutes,string seconds):time_t
+isLoggedOnce(string loggedType):bool
+write(string message):void
+closeErrorLogFile():void
+removeLogRegExp(string name):int
+getExpectedValues():map<string, vector<string>, std::less<string> >
+clearInstance():void
+createErrorLogFile(string exePath):void
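addLogRegExp and storeMatch suggest that the logger thread matches every log line against registered regular expressions and stores the captured parameters under the corresponding value type. A sketch of the idea, with std::regex standing in for the RegularExpression class used in the thesis:

#include <map>
#include <regex>
#include <string>
#include <vector>

std::map<std::string, std::vector<std::string> > expectedValues;

bool storeMatch(const std::string &line, const std::string &pattern,
                const std::string &valueType,
                const std::vector<std::string> &paramNames) {
    std::smatch m;
    if (!std::regex_search(line, m, std::regex(pattern)))
        return false;
    // capture group i+1 is assumed to correspond to paramNames[i]
    for (std::size_t i = 0; i < paramNames.size() && i + 1 < m.size(); ++i)
        expectedValues[valueType].push_back(m[i + 1].str());
    return true;
}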
LoggerBlockList
-list<LoggerBlock *> * listBlocks
+LoggerBlockList()
+~LoggerBlockList()
+addNewBlock(string valueType,time_t timestamp):void
+addParameterToBlock(string name,string value):void
+deleteLines():void
+getListBlocks():list<LoggerBlock *> *
Convert
+Convert()
+~Convert()
+sdeToIbox(SDE sde):Ibox
+iboxToCC(Ibox ibox):CC
+ccToIbox(CC cc):Ibox
+iboxToNMEA(Ibox ibox):NMEA
+nmeaToIbox(NMEA nmea):Ibox
+stringToInt(string str):int
+intToString(int i):string
+stringToDouble(string str):double
+doubleToString(double d):string
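The string/number helpers at the end of Convert are conventionally written with standard string streams; a minimal sketch of two of them (the bodies are illustrative, not the thesis source):

#include <sstream>
#include <string>

int stringToInt(std::string str) {
    std::istringstream in(str);
    int value = 0;
    in >> value;          // stays 0 if the string does not parse
    return value;
}

std::string doubleToString(double d) {
    std::ostringstream out;
    out << d;
    return out.str();
}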
IRunnable
+run():unsigned long
+stop():void
NMEA
+string x
+string y
SDE
+string x
+string y
CC
+x : double
+y : double
Ibox
+string x
+string y
Figure A.46: Activity diagram for the logger thread.
EvaluateException
-string errorMsg
+EvaluateException(string error)
+EvaluateException()
+~EvaluateException()
+what():string
Evaluator
-createdResultFile : bool
-createdErrorFile : bool
-ofstream * xmlFile
-ODatabase connection
-map<string,string,std::less<string> > names
-map<string,Parameter,std::less<string> > expectedValues
-map<string,Parameter,std::less<string> > actualValues
-ofstream * errorLogFile
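Consistent with the post-execution evaluation described earlier in the thesis, the expectedValues and actualValues maps indicate that Evaluator compares the two parameter sets and reports every deviation to the result and error files. A reduced sketch of that comparison, with the Parameter type simplified to a plain string:

#include <map>
#include <string>
#include <vector>

typedef std::map<std::string, std::string> ParamMap;

std::vector<std::string> compareValues(const ParamMap &expected,
                                       const ParamMap &actual) {
    std::vector<std::string> errors;
    for (ParamMap::const_iterator it = expected.begin();
         it != expected.end(); ++it) {
        ParamMap::const_iterator a = actual.find(it->first);
        if (a == actual.end())
            errors.push_back("missing value for " + it->first);
        else if (a->second != it->second)
            errors.push_back(it->first + ": expected " + it->second +
                             ", got " + a->second);
    }
    return errors;   // an empty list means the test case passed
}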
Exploratory testing To sit down and try out the software under test in order to figure out its functionality and what to test.
OS Operating System.
In English
The publishers will keep this document online on the Internet - or its possible
replacement - for a considerable time from the date of publication barring excep-
tional circumstances.
The online availability of the document implies a permanent permission for any-
one to read, to download, to print out single copies for your own use and to use it
unchanged for any non-commercial research and educational purpose. Subse-
quent transfers of copyright cannot revoke this permission. All other uses of the
document are conditional on the consent of the copyright owner. The publisher
has taken technical and administrative measures to assure authenticity, security
and accessibility.
According to intellectual property law the author has the right to be mentioned
when his/her work is accessed as described above and to be protected against
infringement.
For additional information about the Linköping University Electronic Press and
its procedures for publication and for assurance of document integrity, please
refer to its WWW home page: http://www.ep.liu.se/
© Johan Andersson & Katrin Andersson