essentials

Hosse

Safety of the Intended Functionality Guide
Refining the safety of the target function on the way to autonomous driving
essentials provide up-to-date knowledge in a concentrated form: the essence of what matters as "state of the art" in the current professional discussion or in practice. essentials inform quickly, uncomplicatedly and understandably.
The books, in electronic and printed form, present the expert knowledge of Springer specialist authors in a compact format. They are particularly suitable for use as eBooks on tablet PCs, eBook readers and smartphones. essentials: knowledge modules from business, the social sciences and the humanities, from technology and the natural sciences, as well as from medicine, psychology and the health professions, written by renowned authors from all Springer publishing brands.
The German National Library lists this publication in the Deutsche Nationalbibliographie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.
Springer Vieweg
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2019
The work including all its parts is protected by copyright. Any use not expressly permitted by
copyright law requires the prior consent of the publisher. This applies in particular to
duplications, adaptations, translations, microfilming and storage and processing in electronic
systems.
The reproduction of common names, trade names, product designations, etc. in this work does
not justify the assumption that such names are to be considered free in the sense of trademark
and brand protection legislation and may therefore be used by anyone, even without special
identification.
The publisher, the authors and the editors assume that the data and information in this work are complete and correct at the time of publication. Neither the publisher nor the authors or the editors give any warranty, express or implied, with respect to the content of the work or for any errors or omissions. The publisher remains neutral with regard to geographic assignments and area designations in published maps and institutional addresses.
Contents (excerpt)

5.2.3 Measures for the return of responsibility for the driving task to the driver
5.2.4 Measures to control reasonably foreseeable misuse
5.3 Verification and validation of SOTIF requirements
5.3.1 Proof of concept - preliminary verification and validation of SOTIF requirements
5.3.2 Property validation - final verification and validation of SOTIF requirements
5.3.3 Product monitoring - confirmation of the verification and validation through field experience
6 Conclusion and Outlook
Literature
1 Everything can be explained more easily with an example
The aim of this essential is to provide a practical guide for the application of the Safety of the Intended Functionality (SOTIF) methods in the development of safety-relevant electronic control systems for motor vehicles. To achieve this goal, the theoretical and normative principles of the safety of the intended function are first presented in the following chapters and then illustrated in a practical manner using a real accident that occurred due to an uncertainty in the intended function. Because the National Transportation Safety Board (NTSB) in the United States of America has comprehensively investigated a fatal accident involving a Tesla Model S on May 7, 2016, this accident is used to illustrate the previously presented concepts (NTSB 2017).
The investigation report (NTSB 2017) makes it clear that many different factors contributed to the accident. The various contributing causes (including the design of the sensor system, the design of the human-machine interaction, and a foreseeable misuse or improper use of the vehicle automation system by the driver) are closely related to the design of a robust and safe target function. This is exactly the subject of the SOTIF approach.
2 SOTIF - What it is and what it is not

Fig. 2.1 Axes of safety for safety-related electronic control systems for motor vehicles
Functional safety is the part of safety that depends on the correct functioning of the safety-related system and other risk-reducing measures (i.e., the identified safety functions). The goal is to evaluate potential functional failures and to implement appropriate measures to avoid systematic faults and random faults. In this way, the residual risk is reduced to an acceptable level. Functional safety is comprehensively described in ISO 26262 and represents the industry-wide standard in the development of safety-related electronic control systems for motor vehicles. With the completion of the revision of ISO 26262 (the so-called 2nd edition), this objective of functional safety has been sharpened. This focus is illustrated in Table 2.1 by comparing the relevant text passages of the two editions of the standard: the definition of the initial version (issue date 2011) is shown on the left and the definition of the revised standard (published as a final draft in October 2018) on the right. The comparison of the two editions shows that hazard identification is now explicitly tied to the possible malfunctioning behavior of the item.
Tab. 2.1 Comparison of hazard and risk analysis requirements between ISO 26262:2011 and ISO 26262:2018

ISO 26262:2011, clause 3-7.4.2.2.1: The hazards shall be determined systematically by using adequate techniques.

ISO 26262:2018, clause 3-6.4.2.2: The hazards shall be determined systematically based on the possible malfunctioning behavior of the item.

(In the printed table, definition components omitted from the revised standard are shown in strikethrough and added components in bold.)
In addition to this sharpening of the scope, clause 5.4.2.3 of ISO 26262:2018 additionally refers to the sub-disciplines of cybersecurity (cf. Sect. 2.2) and safety of the intended functionality (cf. Sect. 2.3) as relevant for the development of safety-relevant electronic control systems for motor vehicles.
The principles for implementing functional safety in automotive applications are the subject of this essential only to the extent that they serve to delimit the specific activities aimed at the safety of the intended functionality (see below).
Definition With the design goal of functional safety, methods for controlling systematic faults and random faults are specifically applied in the system design. For this purpose, a risk analysis is first carried out in which various driving maneuvers are assessed according to their frequency of occurrence, the controllability in the event of a functional failure, and the severity of the possible damage; from this, the risk of harm is estimated. The results of the risk analysis are used to derive safety integrity requirements (Automotive Safety Integrity Level, ASIL) for individual safety functions, which describe the scope of measures to be taken against systematic and random faults.
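The following sketch (not taken from this essential) illustrates how such a classification can be condensed into an ASIL; the additive rule is a compact reading of the ASIL determination table of ISO 26262-3, and the example classes are assumptions.

```python
# Simplified sketch: deriving an ASIL from the severity (S), exposure (E) and
# controllability (C) classes of a hazardous event. The additive rule below is
# a compact reading of the ASIL determination table of ISO 26262-3; consult
# the standard for the normative table.

def determine_asil(severity: int, exposure: int, controllability: int) -> str:
    """severity in 1..3 (S1..S3), exposure in 1..4 (E1..E4),
    controllability in 1..3 (C1..C3)."""
    if not (1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3):
        raise ValueError("class out of range")
    score = severity + exposure + controllability
    # QM = quality management only, no ASIL-specific measures required
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(score, "QM")

# Example (assumed classification): S3 (fatal injuries possible),
# E4 (high probability of exposure), C2 (normally controllable) -> ASIL C
print(determine_asil(severity=3, exposure=4, controllability=2))
```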
Systematic errors always have the same critical effect. They have their causes in the system development. One example is an incorrect or incomplete implementation of safety requirements, which results in a fault that manifests itself in the same way whenever the corresponding operating conditions occur.
• Attacks from outside can specifically circumvent implemented safety functions and thus have a negative impact on functional safety.
• Attacks from outside can deliberately provoke previously unknown unsafe system behavior and thus have a negative impact on the safety of the intended functionality.
The safety of the intended functionality considers the safety of the target function to be provided by the system under consideration. In this context, the system under consideration must not pose any intolerable hazards to persons when it is used as intended or in reasonably foreseeable misuse. Hazards may arise from an incomplete specification or from the expected (mis)use of the functions. The safety of the intended functionality (SOTIF) is currently the subject of an international standardization project.
The completeness of the safety function is at the heart of this essential and will be considered in more detail in the following sections.
The objective of the safety of the intended functionality (SOTIF) is to improve knowledge about the possible system behavior even in previously unknown application scenarios. A hazard in the sense of SOTIF occurs when a system, although it complies with all specified requirements, can nevertheless be transferred into an unsafe state and, as a result, an accident can occur.
3 Why do we need SOTIF?
The goal of SOTIF is to define a structured design process for the avoidance of safety violations caused by a faulty target function. A target function is considered to be faulty if the system behavior is not sufficiently known and specified. SOTIF therefore specifically addresses area 3 of the event space, the unknown and unsafe events (see Fig. 3.1). Once methods have been applied that reduce the number of unknown and unsafe events, hazard control measures can be taken that reduce the number of now known unsafe events, making the system safer.

Fig. 3.1 Differentiation of the event space according to familiarity and safety
In addition to vehicle manufacturers, SOTIF will also play an increasingly
important role in the future for suppliers of complex safety-related electronic
control systems for motor vehicles. Due to the increase in complexity of today's
control units, especially in the area of driver assistance systems and high
automation, the structured consideration of an unknown uncertain system
behavior becomes more important. As long as the systems only reach automation Level 2 (partial automation) according to SAE J3016, the safety targets for the ECUs typically do not exceed the safety integrity level ASIL B. However, as soon as the driver is removed from the driving process, the safety targets for the automation functions increase up to safety integrity level ASIL D. In contrast to ISO 26262, the SOTIF PAS does not yet contain a metric that assigns a safety relevance to a function.
3.1 The risk of unknown and uncertain system states
Example 2
In the future, new sensor technologies will be used in vehicles. The target
function can be negatively influenced at various levels of sensor data
processing. For example, it is possible for physical effects to falsify the raw
data from sensors. The ISO SOTIF document gives the example of a limited
detection performance of an imaging sensor due to deposits on the road
surface. Even the next higher level of sensor data processing - the extraction of
object data - can lead to unsafe system states, for example, through the use of
faulty object hypotheses, which do not necessarily have to be known in
advance. For example, an object hypothesis valid for pedestrians cannot be
applied to skateboarders (speed too high). As a result, an object is not
recognized as such, so that the safety-oriented reaction of the vehicle
automation system does not take place in this case.
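The following sketch illustrates this failure mechanism with invented class names and thresholds: a pedestrian hypothesis that is bounded by a maximum plausible speed rejects a faster-moving skateboarder, so no safety-oriented reaction is planned for the track.

```python
# Illustrative sketch (invented thresholds): a pedestrian hypothesis that
# rejects tracks above a maximum plausible speed. A skateboarder moving at
# 5 m/s falls outside the hypothesis, is not classified, and therefore does
# not trigger the safety-oriented reaction described in the text.

MAX_PEDESTRIAN_SPEED_MPS = 3.0  # assumed plausibility bound of the hypothesis

def classify_track(speed_mps: float, shape_score_pedestrian: float) -> str:
    looks_like_pedestrian = shape_score_pedestrian > 0.7
    if looks_like_pedestrian and speed_mps <= MAX_PEDESTRIAN_SPEED_MPS:
        return "pedestrian"          # triggers the safety-oriented reaction
    return "unclassified"            # no safety-oriented reaction planned

print(classify_track(speed_mps=1.2, shape_score_pedestrian=0.9))  # pedestrian
print(classify_track(speed_mps=5.0, shape_score_pedestrian=0.9))  # unclassified
```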
Due to the large number of possible traffic situations, testing all of them in advance is only possible with extremely high effort. Therefore, one goal of SOTIF is to reveal possible unexpected system behavior at an early stage through targeted measures.
One of the main tasks of SOTIF is to ensure that the probability of a hazardous event in which the system cannot safely handle a particular use case, and in which the people involved are unable to mitigate the hazard, is sufficiently low. If such an unsafe use case is known, it can be addressed through targeted hazard control. This involves a structured treatment of recognized hazards through targeted system improvements, measures to restrict the target function, a targeted return of responsibility for the driving task to the driver, and measures to control reasonably foreseeable misuse by the user.
4 The SOTIF procedure model
This chapter presents the basic structure of the SOTIF process model. The
individual steps of the SOTIF process model are then explained. A presentation of
how SOTIF relates to other design disciplines (essentially functional safety
according to ISO 26262) concludes this chapter.
The SOTIF procedure is carried out in several phases that build on each other.
These are explained below.
SOTIF concept phase as the beginning of SOTIF activities:
• Creating the function and system specification: The creation of the function
and system specification is the starting point for the development of
automation functions. As a document accompanying development, it is always
adapted to new findings. The main contents are the description of the goals of
the intended function and the dependencies of the function on other vehicle
functions and systems, relevant environmental conditions and interactions in
terms of the design of the human-machine interface.
• Identification of SOTIF risks: This is followed by a systematic identification of the hazards caused by the malfunction of the function under
consideration. Analogous to the procedure of ISO 26262, an evaluation of the
recognized hazards can also be made here according to frequency (Exposure,
E), severity (Severity, S) and controllability (Controllability, C). The
difference here, however, is that for the classification of hazardous events, a
delayed or absent reaction of the driver to control the critical driving maneuver
is also taken into account.
• Identification and evaluation of hazardous use cases: The aim of the analysis is a targeted identification of system weaknesses. System weaknesses are
triggering events that can result in unintended system behavior. Triggering
events result, for example, from sensors, algorithms or actuators that are not
appropriate for the application. However, SOTIF also considers humans as a
possible triggering event in the sense of non-intended use of the developed
system (foreseeable misuse).
• Verification of SOTIF: The system and its components (sensors, algorithms, and actuators) must be verified to show that they behave as expected in known uncertain scenarios and are adequately covered by the tests performed.
SOTIF validation phase for final proof of the safety of the target function:
• Validation of the SOTIF: The system and its components are validated to show
that they do not cause any unreasonable risk in real test cases. For this
purpose, a suitable cumulative test length is calculated on the basis of
empirically determined accident figures of current vehicle systems. An
appropriate (i.e. realistic and representative) distribution of the cumulative test
length among different test scenarios (e.g. driving on the highway, driving in the dark, driving in the rain) is determined on the basis of the current distribution of annual mileage among specific driving scenarios; a simple sketch of this kind of calculation follows below.
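The sketch below illustrates the kind of calculation meant here, using assumed placeholder numbers: an accident-free test distance is estimated from an empirically observed reference accident rate via a simple Poisson argument and then distributed across scenarios according to their mileage share.

```python
import math

# Hedged sketch of the validation-length reasoning described above (numbers
# are placeholders, not from this essential): to claim with a given confidence
# that the automated system is not worse than today's empirically observed
# accident rate, a minimum accident-free test distance is estimated from a
# Poisson model and then split across scenarios by their mileage share.

def required_test_km(reference_rate_per_km: float, confidence: float) -> float:
    # P(0 accidents | rate * distance) <= 1 - confidence  =>  distance >= -ln(1-c)/rate
    return -math.log(1.0 - confidence) / reference_rate_per_km

reference_rate = 1.0 / 200_000_000   # assumed: one relevant accident per 2e8 km
total_km = required_test_km(reference_rate, confidence=0.95)

mileage_share = {"highway": 0.30, "rural road": 0.45, "urban, darkness/rain": 0.25}
for scenario, share in mileage_share.items():
    print(f"{scenario}: {share * total_km:,.0f} km")
print(f"total accident-free test distance: {total_km:,.0f} km")
```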
As can be seen in Fig. 4.1, SOTIF has reference points in particular to the concept
phase and the safety validation of ISO 26262. The results of the SOTIF concept
phase are incorporated into the definition of the object under consideration (so-
called item according to ISO 26262). The specification of the safety function
outlined in the preliminary result of the SOTIF process is then implemented
"functionally safe" by following ISO 26262.
An approach to structuring the neighboring disciplines of functional safety and cybersecurity is shown in Fig. 4.2. Each discipline performs an analysis of undesired behavior at the vehicle level, which is used to define vehicle-level requirements for the item under consideration; ISO 26262, for example, provides the safety goals of the item for this purpose. Functional requirements are then also defined at the vehicle level. These functional requirements are specified at the component level in the form of technical requirements, from which the requirements for the hardware and software of the item under consideration are then derived.
Fig. 4.2 Relationships to the neighboring disciplines of functional safety and cybersecurity
The "X" shown in Fig. 4.2 as a placeholder in the requirements hierarchy can
be replaced by the terms "Safety", "Security" and "SOTIF". Therefore, a system
will have a "Functional Safety Concept" as well as a "Functional Security
Concept" and a "Functional SOTIF Concept".
It is the responsibility of the manufacturer to design the development processes in such a way that the requirements from the three disciplines can be managed and a dedicated concept for safety, security and SOTIF can be maintained for the system.
5 Case study on the design of SOTIF
The design of SOTIF is illustrated below with an example. For the sake of
simplicity, reference is made to a published report of an independent accident
investigation of an accident involving a vehicle with automated driving functions
in the sense of a retrospective view; compare this with the example of the Tesla
Model S accident from 2016 briefly presented in chapter 1.
The goal of SOTIF is, of course, a prospective system analysis and subsequent
design of driver assistance and vehicle automation. However, the results of the
accident investigation and the concrete design recommendations listed therein are
well suited to provide an understanding of the concrete implementation of SOTIF.
In the concept phase, the hazards and risks are methodically derived from a
description of the vehicle automation function. This is the starting point for the
subsequent system implementation.
Example 3
In the Tesla Model S accident, the vehicle was equipped with a driver-supporting (Level 2) automation system (Gasser et al. 2012). The driver therefore still had the responsibility for safety:
• On the one hand, the driver takes in information about the vehicle environment directly through his or her own sensory perception; on the other hand, the driver also receives information indirectly via the display functions of the vehicle automation system.
This generic action diagram, shown in the following figure, already illustrates
which hazards can contribute to a failure of the target function (sensors,
algorithms, actuators, foreseeable driver error).
(Figure: generic action diagram of the driver and the vehicle with assistance function. The driver perceives the traffic environment directly and, via the displays of the vehicle automation, indirectly; operating actions act on the vehicle, which senses the traffic environment and immediately executes driving maneuvers.)
The function and system description is the starting point for the design of an
automation function. It is constantly adapted to new findings during development.
In addition to the desired behavior of the vehicle automation in control operation,
it describes in particular the existing limits of the functionality. Special
importance is given to the consideration of malfunctions, i.e. the system behavior in the event of detected limitations (so-called degradation) and corresponding warnings or takeover requests to the driver.
Example 4
The functional description according to SOTIF can be prepared in the usual way, as already provided for in the item definition according to ISO 26262. The use case of the system plays a particularly important role here, since it must be known exactly under which conditions the system is to be used.
(Figure: STPA model of the accident causation for the Autopilot. The driver enables, disables or overrules the Autopilot and sets the speed via the Autopilot HMI; missing feedback on mode, objects, signs and target speed degrades the driver's process awareness. Within the Autopilot, data fusion and assessment of camera, radar and vehicle dynamics data provides the assessed environment model, from which the trajectory follow-up controller derives trajectory and target speed.)
The SOTIF PAS (ISO/PAS 21448) lists a number of possible hazard analysis methods that are capable of identifying hazards in terms of the safety of the intended functionality. In order to reduce the risk of the unknown in the development of safety-relevant electronic control systems for motor vehicles, a method is required that captures the driver-vehicle-driving environment system as a socio-technical system (Hosse et al. 2012).
In principle, various methods can be used to identify and analyze unknown unsafe events at the system level. One of them is the HAZOP (Hazard and Operability Study), which tries to identify basic possible failure mechanisms by using guide words (e.g. too early, too late, too little, too much). In the following explanations, the STAMP/STPA method is used because it reveals cybernetic principles (control loops); often it is the violation of basic cybernetic structural principles that leads to the occurrence of accidents.
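A minimal sketch of the HAZOP guide-word idea mentioned above follows; the parameter and the resulting deviations are purely illustrative and would be assessed for hazard potential by the analysis team.

```python
# Minimal sketch of the HAZOP guide-word idea: systematically combine a
# function parameter with generic deviation guide words to obtain candidate
# failure mechanisms that are then assessed for hazard potential.
# The parameter and its interpretation are illustrative, not from the essential.

GUIDE_WORDS = ["no", "too much", "too little", "too early", "too late"]

def hazop_deviations(parameter: str) -> list:
    return [f"{parameter}: {word}" for word in GUIDE_WORDS]

for deviation in hazop_deviations("braking deceleration requested by the assistance function"):
    print(deviation)   # each entry is discussed by the analysis team
```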
The STAMP/STPA method (Systems-Theoretic Accident Model and Processes, STAMP; Systems-Theoretic Process Analysis, STPA) is a model-based hazard analysis method developed by the US safety researcher Nancy Leveson, which analyzes a safety-relevant system in a structured manner by means of a semi-formal model (the so-called safety control structure) (Leveson 2011). STAMP is a prospective approach to system analysis that aims at the conscious safety-oriented design of SOTIF; its goal here is to identify unsafe control actions (UCAs) in the control loops of the system and to derive requirements from them.
Example 5
The system-level hazards have already been defined (see Sect. 5.1). The safety control structure must then be defined. For the Model S system, it looks as follows:
(Figure: Safety Control Structure of the Tesla Model S. The driver acts on the vehicle via steering wheel, accelerator and brake pedal and on the Autopilot via HMI inputs (enable, disable, overrule, set speed) and in return receives mode, object, sign and target speed information as well as takeover requests. Within the Autopilot, data fusion and assessment feeds the trajectory follow-up controller, which provides target actions to the steering, brake and drivetrain controllers; camera, radar and INS observe the vehicle dynamics of the vehicle in its environment.)
Control action: Overrule/Deactivate
• Required but not provided: Driver inputs do not overrule the Autopilot
• Incorrect timing: Driver inputs deactivate the Autopilot too late

Control action: Enable
• Unsafe action provided: Autopilot is enabled unintentionally

Control action: Send mode status
• Required but not provided: Autopilot does not send mode status
• Unsafe action provided: Autopilot sends mode status when not enabled

Control action: Provide assessed environment model
• Required but not provided: Environment model not provided when required
• Unsafe action provided: Environment model provided when not required
• Incorrect timing: Environment model provided too late
• Stopped too soon/applied too long: (Same) model provided too long (not updated)
In order to identify the UCAs, each control action is reasoned through using guide words for its possible failure cases (here: required but not provided, unsafe action provided, incorrect timing, stopped too soon/applied too long). Each combination is then evaluated as to whether the described failure case leads to a potentially dangerous state; the combinations that do constitute the UCAs. Inverting the UCAs then yields the SOTIF requirements for maintaining the safety of the target function.
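The enumeration step described above can be sketched as follows; the control actions are taken from the table above, while the marked hazardous candidate and the derived requirement are illustrative assumptions.

```python
# Sketch of the STPA step described above: each control action is crossed with
# the four guide phrases; every combination is a *candidate* UCA that the team
# accepts or rejects, and accepted UCAs are inverted into SOTIF requirements.

GUIDE_PHRASES = [
    "required but not provided",
    "unsafe action provided",
    "provided with incorrect timing",
    "stopped too soon / applied too long",
]

CONTROL_ACTIONS = [
    "Overrule/Deactivate Autopilot",
    "Enable Autopilot",
    "Send mode status",
    "Provide assessed environment model",
]

def candidate_ucas(actions, phrases):
    for action in actions:
        for phrase in phrases:
            yield f"{action} - {phrase}"

# The analysis team marks which candidates are actually hazardous, e.g.
# (illustrative assumption, not a requirement from the essential):
hazardous = {
    "Provide assessed environment model - provided with incorrect timing":
        "SOTIF requirement: the environment model shall be updated within its specified cycle time.",
}
for candidate in candidate_ucas(CONTROL_ACTIONS, GUIDE_PHRASES):
    print(candidate, "->", hazardous.get(candidate, "to be assessed"))
```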
In this example, the safety control structure of the driver must also be analyzed in addition to the safety control structure of the Autopilot, since the driver also acts as a controller in the system due to the Level 2 automation and therefore assumes safety relevance:
(Figure: extended control structure above the vehicle level. Tesla Motors acts as a higher-level controller of the Tesla Model S: it sends updates and changelogs, enables functions, receives status logs and driving data, sets goals, and defines customer expectations and the product image.)
Control action: Steer
• Required but not provided: Driver does not steer the Model S when required
• Incorrect timing: Driver steers the Model S too late

Control action: Enable
• Unsafe action provided: Driver enables the Autopilot when not allowed
During the analysis of the Tesla Model S accident carried out here, several UCAs were identified that were not adequately controlled; they are listed in the tables above.
The accident report showed that the primary cause of the accident was the driver's lack of reaction, but it also showed that the Autopilot was not able to apply the correct hypothesis during object detection or to keep the driver sufficiently engaged in the driving process (e.g. by detecting steering wheel movement).
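A driver-engagement monitor of the kind hinted at here could, for example, escalate as sketched below; the time thresholds and the degradation strategy are invented for illustration.

```python
# Hedged sketch (thresholds invented) of a driver-engagement monitor as hinted
# at above: if no steering-wheel activity is detected, warnings escalate and
# the automation finally degrades to a safe state instead of continuing.

def engagement_action(seconds_without_hands_on: float) -> str:
    if seconds_without_hands_on < 15:
        return "no action"
    if seconds_without_hands_on < 30:
        return "visual hands-on request"
    if seconds_without_hands_on < 45:
        return "acoustic warning"
    return "degrade: reduce speed, activate hazard lights, request takeover"

for t in (10, 20, 40, 60):
    print(t, "s ->", engagement_action(t))
```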
After identifying potential missing system functions that may lead to unsafe
system behavior, the SOTIF implementation can be performed.
In the following, four different measures are presented that can be used to control
detected errors. Together, these measures ensure that the SOTIF risk can be
reduced to an acceptable level. As a rule, several measures must be combined to
achieve the desired safety-oriented effect.
Example 6
One of several possible starting points for system improvement in the control loop is the sensor system (the measuring element). The investigation report (NTSB 2017), for example, mentions the networking of vehicles in the sense of cooperative assistance as a starting point for system improvement.
V2V (vehicle-to-vehicle) systems transmit warnings and safety-related
information between vehicles. If all vehicles are equipped with these systems,
this results in an improved perception of the vehicles' surroundings. A new
data source with reliable and accurate data is opened up, so that collision risks
that are beyond the range of the vehicles' own sensors can be detected. If this
data is fused with existing vehicle data, this can lead to a significant
improvement in active safety functions.
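The following sketch illustrates this fusion idea with invented message fields: objects reported via V2V that lie beyond the range of the onboard sensors are added to the onboard object list so that they can be considered by the vehicle automation.

```python
# Illustrative sketch of the fusion idea described above: the onboard object
# list is extended by objects reported via V2V messages, so that risks beyond
# the range of the vehicle's own sensors become visible to the planner.
# The message fields are assumptions, not a real V2V message format.

from dataclasses import dataclass

@dataclass(frozen=True)
class TrackedObject:
    object_id: str
    distance_m: float
    source: str  # "onboard" or "v2v"

def fuse(onboard: list, v2v: list, onboard_range_m: float = 150.0) -> list:
    fused = list(onboard)
    known_ids = {o.object_id for o in onboard}
    for obj in v2v:
        # add V2V objects the onboard sensors cannot see (yet)
        if obj.object_id not in known_ids and obj.distance_m > onboard_range_m:
            fused.append(obj)
    return fused

onboard = [TrackedObject("car-12", 80.0, "onboard")]
v2v = [TrackedObject("truck-7", 320.0, "v2v")]   # crossing truck beyond sensor range
print(fuse(onboard, v2v))
```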
Hazards resulting from incorrect execution of the target function can be avoided
or their impact reduced by functionally restricting the target function. In
assistance systems, for example, a driving task is continuously automated. The
driver performs either the longitudinal or lateral guidance of the vehicle himself,
while the other driving task is automated within limits. It is important to
understand that even the automated driving task is only performed by the driver
assistance system to a limited extent ("within limits"). The system limits can be
clarified using examples in the accident report (NTSB 2017). Different
limitations are distinguished here:
Example 7
In the case of the vehicle involved in the accident, it was envisaged that, if the valid maximum permitted speed on the current section of road cannot be detected, a maximum ACC speed of 45 mph is set as the fallback level. In this case, the responsibility for accelerating above this threshold is returned to the driver. If the driver then takes his foot off the accelerator pedal, the system brakes the vehicle back to the threshold value of 45 mph.
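The fallback behaviour described in Example 7 can be sketched as simple selection logic; this is an illustrative simplification, not Tesla's implementation.

```python
# Sketch of the fallback behaviour from Example 7: if no valid speed limit is
# detected on the current road section, the ACC target speed is capped at
# 45 mph; accelerating beyond the cap remains the driver's responsibility via
# the accelerator pedal, and releasing the pedal lets the system brake back.

FALLBACK_CAP_MPH = 45.0

def acc_target_speed_mph(set_speed_mph: float, limit_detected: bool,
                         driver_pedal_speed_mph: float = 0.0) -> float:
    # When no valid speed limit is detected, the ACC set speed is capped.
    target = set_speed_mph if limit_detected else min(set_speed_mph, FALLBACK_CAP_MPH)
    # Driver override: responsibility above the cap returns to the driver.
    return max(target, driver_pedal_speed_mph)

print(acc_target_speed_mph(65.0, limit_detected=False))                              # 45.0
print(acc_target_speed_mph(65.0, limit_detected=False, driver_pedal_speed_mph=60.0)) # 60.0
```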
The intended use of a product results from its function. In contrast, the use of a
product in a manner not intended by the vehicle manufacturer, but which may
result from foreseeable human behavior, is referred to as foreseeable misuse. The
reasons for foreseeable misuse can be differentiated on the basis of the three basic
processes of human information processing:
• Perception: For example, the driver does not understand and cannot operate the system due to complicated user guidance. The driver may also fail to recognize the information relevant to him or her because of an "information overload".
• Evaluation: As a consequence of previously mentioned erroneous perceptions,
the driver may make a wrong decision.
• Action: The driver's action may fail because he or she may make a mistake due
to poor concentration. A driver may also deliberately ignore traffic or behavior
rules. In addition, the system may be so difficult to operate that it is impossible
for him to perform the action correctly.
Example 8
In the case of the vehicle involved in the accident, there was already an error on the part of the driver at the level of perception. The driver did not understand the limitations of the automation system; he did not know the systematics of the protective function. This led to an erroneous evaluation ("overestimation of the automation"): the driver relied on the automation or overestimated its effectiveness.
This in turn led to an erroneous action in that he did not pay attention to the
traffic environment. This human error - facilitated by the inadequate
recognition of the driver's attention being diverted from the driving task - led
to the accident.
The proof that the system to be developed fulfills its intended target function is
based - as in functional safety - on the fundamental concepts of verification and
validation. In the context of SOTIF, this is understood to mean the following:
• Verification: Proof that the specified target function has been implemented
correctly. For example, requirements-based tests are used to prove that the
system has been implemented in accordance with the specification. However,
this requires a complete and correct system specification. Verification aims to
show that the developed system can handle known uncertain events.
Example 9
In the example discussed here, the requirements could have been verified, for instance, with a test case that examines the driver's reaction time with the current system design in different driving maneuvers. Although the specific case that led to the accident would certainly not have been uncovered in this way, the time span between detection of the driver's distraction and the system's request to take over the steering could have been shortened.
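Such a requirements-based verification test could, for example, look like the following sketch; the WarningLogic stand-in and the 2-second budget are assumptions, and a real test would run against the actual system or a simulation of it.

```python
import unittest

# Sketch of a requirements-based verification test as suggested in Example 9.
# `WarningLogic` and the 2-second budget are assumptions for illustration.

class WarningLogic:
    """Toy stand-in for the system under test."""
    def __init__(self, latency_s: float):
        self.latency_s = latency_s

    def takeover_request_time(self, distraction_detected_at_s: float) -> float:
        # time at which the takeover request is issued
        return distraction_detected_at_s + self.latency_s

class TakeoverRequestTiming(unittest.TestCase):
    MAX_DELAY_S = 2.0   # assumed SOTIF requirement derived from the UCA

    def test_takeover_request_within_budget(self):
        system = WarningLogic(latency_s=1.2)
        issued_at = system.takeover_request_time(distraction_detected_at_s=10.0)
        self.assertLessEqual(issued_at - 10.0, self.MAX_DELAY_S)

if __name__ == "__main__":
    unittest.main()
```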
Example 10
A concrete test case for validating the system could be created, for example, by varying the operating conditions. In the concrete case, the system could have been validated outside the predefined traffic infrastructure environment. At present it is not ensured that automation systems cannot be activated on inappropriate infrastructures (a highway pilot should only be activatable on a highway, not on a country road, even though the speed profile is comparable). The validation according to SOTIF thus provides for the deliberate extension of the previously defined operational environment of the system and the use of the system in a context that has not yet been taken into account.
• Variation: Since the different parameters of driving scenarios can easily be varied in simulation, a higher test coverage is in principle easier to achieve (see the sketch after this list).
• Reproducibility: Since vehicle automation factors road users, weather conditions and dynamic obstacles into its maneuvers via its sensor technology, sufficient reproducibility of all possible test scenarios cannot be achieved in real driving tests. Since simulations take place under controlled conditions, repeated test execution leads to the same test results.
• Acceleration of test execution: The increasing complexity of vehicle automation has resulted in a sharp increase in the effort required to test assistance systems. The number of variations of different influences is so high that not all possible scenarios can be reproduced in a real test environment. A simulation-based approach allows the driving maneuver under consideration to be performed faster than in real time. Tests can also be carried out in parallel in several instances of the simulation models.
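The variation and parallelization ideas from the list above can be sketched as follows; the scenario parameters, their values and the trivial "simulation" are placeholders for a real closed-loop simulation.

```python
from itertools import product
from multiprocessing import Pool

# Sketch of the variation and parallelization ideas from the list above:
# a parameter grid spans scenario variants, and the (here trivial) simulation
# of each variant is distributed over several worker processes. Parameters,
# values and the "simulation" are illustrative placeholders.

WEATHER = ["clear", "rain", "fog"]
LIGHT = ["day", "dusk", "night"]
CROSSING_SPEED_KMH = [20, 40, 60]

def simulate(variant):
    weather, light, speed = variant
    # placeholder for a real closed-loop simulation run
    collision = (weather == "fog" and light == "night" and speed >= 60)
    return variant, "FAIL" if collision else "PASS"

if __name__ == "__main__":
    variants = list(product(WEATHER, LIGHT, CROSSING_SPEED_KMH))
    with Pool(processes=4) as pool:
        for variant, result in pool.map(simulate, variants):
            if result == "FAIL":
                print("unsafe scenario found:", variant)
```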
Before a system is released, it must be proven that it reliably fulfills the intended target function. During development, simulation-based test methods are applied to the various integration stages of the product before tests are carried out under real conditions. In simulation, the traffic environment can be reproduced in a controlled manner and no real vehicles are required. Furthermore, in this early test phase, any risk to test drivers and the system environment can be ruled out (Wagener and Katz 2018).
Tests under real conditions: Once the systems have been sufficiently tested in the laboratory, they are then tested under real conditions.
The life of a product does not end when it is placed on the market. Unfortunately, problems resulting from incorrectly or incompletely implemented functions often only become apparent in the field, as the real example used for this essential clearly shows. Although SOTIF is supposed to prevent exactly this, it will never be possible to rule this out completely due to the complexity and the large solution space. Structured product monitoring in the field therefore remains necessary.
Since the ruling of the German Federal Court of Justice in the "Honda case" at the
latest, automobile manufacturers have been subject to the obligation of product
monitoring. Although this aspect is not considered in the current ISO document
on SOTIF, it is generally discussed for the introduction of increasingly highly
automated vehicles in road traffic (Federal Ministry of Transport and Digital
Infrastructure 2017). Methodological approaches are therefore also required for
the structured collection and evaluation of field experience.

What you can take away from this essential
• Understanding the importance of fully specifying safety functions for the safe
design of increasingly highly automated road traffic.
• Knowledge of which methods of system analysis can contribute to a complete specification of the safety functions of a system.
• Knowledge of the contribution that system improvements in the sense of optimized system-technical chains (sensors, algorithms, actuators), as well as the functional restriction of the considered function, make to the robustness of the target function.
• Knowledge of the role played by the human factor in the design of a robust target function: appropriate design of the human-machine interaction and systematic consideration of reasonably foreseeable misuse.
• Knowledge of how robustness can be demonstrated in a structured manner through verification and validation measures along the life cycle of safety-relevant electronic control systems for motor vehicles.
Literature

Beglerovic, Halil, Steffen Metzner, and Martin Horn. 2017. Challenges for the validation and testing of automated driving functions. In Advanced microsystems for automotive applications 2017, ed. Zachäus et al. Wiesbaden: Springer.
Cacilo, Andrej, Sarah Schmidt, Philipp Wittlinger, Florian Herrmann, Wilhelm Bauer, Oliver Sawade, Hannes Doderer, Matthias Hartwig, and Volker Scholz. 2015. Highly automated driving on highways - industrial policy conclusions. Study commissioned by the Federal Ministry for Economic Affairs and Energy (BMWi).
Federal Ministry of Transport and Digital Infrastructure. 2017. Final report of the ethics committee on automated and connected driving.
Gasser, T., C. Arzt, M. Ayoubi, A. Bartels, L. Bürkle, J. Eier, F. Flemisch, D. Häcker, T. Hesse, W. Huber, C. Lotz, M. Maurer, S. Ruth-Schumacher, J. Schwarz, and W. Vogt. 2012. Legal consequences of increasing vehicle automation - Joint final report of the project group. In Reports of the Federal Highway Research Institute, Bergisch-Gladbach.
Hosse, René Sebastian, Daniel Beisel, and Eckehard Schnieder. 2012. Analyzing driver assistant systems with a socio-technical hazard analyzing methodology. 5th International Conference on ESAR ("Expert Symposium on Accident Research"), Hannover, September 7-8, 2012.
IEC 61882: Hazard and operability studies (HAZOP studies) - Application guide. 2016. International Electrotechnical Commission.
ISO 26262: Road vehicles - Functional safety. 2011. International Organization for Standardization.
Leveson, N.G. 2011. Engineering a safer world. Cambridge, MA: MIT Press.
National Transportation Safety Board (NTSB). 2017. Highway accident report - Collision between a car operating with automated vehicle control systems and a tractor-semitrailer truck near Williston, Florida, May 7, 2016 (NTSB/HAR-17/02). Washington: NTSB.
Preuk, Katharina, Lars Schnieder, Claus Kaschwich, Daniel Waigand, and Eva-Maria Elmenhorst. 2016. Stresses of employees in the driving service of public transport - validation and acceptance analysis of a psychomotor vigilance test in the test field AIM. 17th Symposium Automation Systems, Assistance Systems and Embedded Systems for Transportation (AAET), Braunschweig, February 10-11, 2016, 61-78.
Reif, Konrad. 2010. Driving stabilization systems and driver assistance systems. Wiesbaden: Vieweg + Teubner.
SAE J 3061: Cybersecurity guidebook for cyber-physical vehicle systems. 2016. Society of Automotive Engineers.
Schnieder, Lars, and René S. Hosse. 2018. Automotive cybersecurity engineering guide - securing connected vehicles on the road to autonomous driving. Berlin: Springer.
Schnieder, Eckehard, and Lars Schnieder. 2013. Road safety - measures and models, methods and measures. Berlin: Springer.
Wagener, Andreas, and Roman Katz. 2016. Automated scenario generation for testing advanced driver assistance systems based on post-processed reference laser scanner data. Proceedings Driver Assistance Systems 2016. Wiesbaden: Springer.
Winner, Hermann, Stephan Hakuli, Felix Lotz, and Christina Singer, eds. 2015. Handbuch Fahrerassistenzsysteme - Grundlagen, Komponenten und Systeme für aktive Sicherheit und Komfort. Berlin: Springer.