
Risk Science

An Introduction

Terje Aven and Shital Thekdi


Contents

About the authors
Preface
Acknowledgments
Key symbols

1 Illustrative examples
1.1 Personal case: management of time in an academically rigorous environment
1.2 Organizational case: Total Business Management Inc.: supply chain investments in response to major potential changes in business and technology
1.3 Public health crisis: addressing a global pandemic

PART I BASIC CONCEPTS

2 What is risk?
2.1 Definitions of the risk concept
2.2 Illustrating examples
2.3 Risk and related concepts
2.4 Problems

3 Measuring and describing risk
3.1 The probability concept
3.2 Basic ideas on how to measure or describe risk
3.3 Common risk metrics
3.4 Potential surprises and black swans
3.5 Problems
CHAPTER 2
What is risk?

CHAPTER SUMMARY

Risk is related to an activity, for example, an investment, the operation of a technical system or life on earth or in a specific country. Risk reflects the potential for undesirable consequences of the activity. When we drive a car, we face risk – an accident could occur, leading to injuries and/or deaths. However, the outcome could also be positive – a successful trip. Thus, risk is about both undesirable and desirable consequences of the activity and uncertainties about what these consequences will be. The consequences are with respect to some values (e.g., human lives, health, the environment, and monetary values). Vulnerability is basically risk given the occurrence of an event. For example, we can speak about the vulnerability of persons to various diseases. The chapter also discusses the meaning of related concepts such as resilience, safety, safe, security and secure.

LEARNING OBJECTIVES

After studying this chapter, you will be able to explain

• the basic ideas of the risk concept
• the meaning of the statement: uncertainty is a main component of risk
• the difference between risk and hazards/threats/opportunities
• that risk is about both negative (undesirable) and positive (desirable) outcomes
• what risk means in practical situations
• the relationship with related concepts such as vulnerability, resilience, reliability, safety, security and sustainability
• how risk relates to time

DOI: 10.4324/9781003156864-2

This chapter looks into the risk concept; the coming Chapter 3 will study how to describe the magnitude of the risk. A distinction is made between the risk concept on the one hand and measures and descriptions of the risk on the other. Section 2.1 discusses the definition of risk, Section 2.2 gives some illustrations of the definition and Section 2.3 looks into related concepts such as vulnerability, resilience, safety and reliability. The final Section 2.4 presents some problems. The chapter is partly based on Aven (2020a) and Logan et al. (2021).

2.1 DEFINITIONS OF THE RISK CONCEPT

We will present several definitions of risk, all based on the same ideas and illustrated in Figure 2.1. The setting is this: We consider a future activity, for example, driving a car from one place to another, the operation of a nuclear plant, an investment or the life on earth or in a specific country. This activity will lead to some consequences seen in relation to some values, such as human lives and health, the environment and monetary values. The consequences could, for example, be some injuries and loss of lives, the current state (no events occur), deviations from a planned profit level or failure to meet a defined goal. There is at least one consequence or outcome that is considered negative or undesirable. Looking forward in time, there are uncertainties about what the consequences will be.
Think about the example of driving the car. The driver and the passengers face risk, as there is a potential for an accident to occur, which could lead to injuries or fatalities. The accident, injuries and fatalities are undesirable consequences of the activity. This leads us to the first intuitive definition of risk: the potential for undesirable consequences of the activity. The term 'potential' relates to the consequences but points also to the uncertainties – an accident with some injuries/fatalities may be the result,

FIGURE 2.1 The basic features of the risk concept (based on Aven and Thekdi 2020): an activity leads to future consequences C (events, effects), seen in relation to some values; these may be positive or negative, and there is uncertainty U about what they will be

but we do not know before the activity is realized. We are led to a second definition of risk: the consequences of the activity and associated uncertainties. This definition is appealing, as it explicitly incorporates both consequences and uncertainties, which can be seen as the two key components of the risk concept. Returning to the car example, the driver and the passengers face risk, as there are consequences of the activity (of which some are undesirable) and uncertainties about those consequences. Using this second definition of risk, we are encouraged to consider both undesirable and desirable consequences, which is attractive, as risk is also something positive – refer to the discussion in the preface – and when addressing risk, we need to take into account both positive and negative effects of the activity. What is desirable or not may vary between different stakeholders and be dependent on time. Consider as an example risk related to political tensions between two countries. The tensions can lead to confrontation between the countries and even a war. The immediate consequences related to this activity will typically be negative, but the activity could also lead to positive outcomes in a longer-term horizon for some stakeholders, depending on the outcome of the conflict. Applying the second definition of risk, all types of relevant consequences are considered.
We denote by C the actual consequences of the activity and U the associated uncertainties. Simplified, we write risk as (C,U). For definition 1, where risk is understood as the potential for undesirable consequences, C is restricted to 'undesirable consequences'. Definition 2 extends C also to cover other types of consequences (i.e., non-negative consequences). The consequences C could relate to all types of values, but often restrictions are made, for example, by only considering loss of lives or monetary values.
To ease the understanding of this general definition of risk, it is common to relate the consequences to a reference value (r), for example, the current state, a planned level, or meeting an objective. This leads to a third definition of risk: the deviation D from a 'reference value' r, and associated uncertainties U. We write risk = (D,U). If C denotes, for example, the number of fatalities of an activity, r is typically zero and D = C. If r relates to a planned production level, D = C − r, where C is the actual production level. Risk captures the potential for a deviation from the planned level, often with a focus on values below this level.
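The relation D = C − r can be made concrete with a small sketch. This is an illustration of ours, not code from the book; the function name and the example figures are this sketch's own choices:

```python
# Illustrative sketch of the third risk definition (D, U): the deviation
# D = C - r of the actual consequence C from a reference value r.
# The function name and the example numbers are hypothetical.

def deviation(c: float, r: float) -> float:
    """Return the deviation D = C - r."""
    return c - r

# Fatalities: the reference value r is typically zero, so D = C.
assert deviation(c=2, r=0) == 2

# Planned production level: r = 1000 units, actual level C = 940,
# so D = -60, a shortfall below the plan (the focus of the risk).
assert deviation(c=940, r=1000) == -60
```

The uncertainty U is not captured by the arithmetic itself: before the activity is realized, the value of C, and hence of D, is unknown.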
Instead of writing risk as (C,U), we often use the notation (A,C,U), where A refers to an event (or a set of events) and C then refers to the consequences or effects given that A has occurred. Think again about the car example. Here A could be the occurrence of an accident and C the consequences given the accident. The event A could also represent 'no accident'.

Definition 1:
Risk: The potential for undesirable consequences

Definition 2:
Risk: The consequences C of the activity and associated uncertainties U
Risk: (C,U)
Risk: (A,C,U), where A is an event (or a set of events) and C the consequences given the occurrence of A

Definition 3:
Risk: The deviation D from a ‘reference value’ r, and associated uncertainties U
Risk: (D,U)

The event A is referred to as a hazard, threat or opportunity, as well as a risk source, which are terms we will come back to in Section 2.3 and Chapter 3.
When discussing risk, the activity is to varying degrees explicitly defined. In some cases, the activity is tacitly understood, but in others, it is essential to clarify in detail what the activity encompasses, for example, with respect to time. For instance, when considering the risks related to the operation of a nuclear power plant, we need to specify for what time interval, for example, the next ten years. In addition, we need to specify for how long a time we would consider the consequences of events occurring in this interval. Say that an accident occurs after two years of operation. The effects of this accident could, however, last many years, far beyond the time interval for which the activity is considered. Should we also include these effects when considering risk in the operation interval of ten years?
When considering an activity for a specific period of time, we refer to T as the length of this interval. The maximum time from the occurrence of an event for which the consequences are considered is denoted τ. In the previous example, T is equal to 10 years, and if we define τ = 100, we consider effects of nuclear accidents for a period of a maximum of 100 years if they should occur in the time interval [0,10].
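The interplay between the activity interval length T and the consequence window τ can be sketched as follows. This is a small illustration of ours; the function name and the time values are assumptions, not from the text:

```python
# Sketch: an event at time t_event within the activity interval [0, T]
# has its consequences counted up to tau time units after it occurs.

def consequence_counted(t_event: float, t_consequence: float,
                        T: float, tau: float) -> bool:
    """True if the consequence falls inside the window considered."""
    occurs_in_interval = 0 <= t_event <= T
    within_tau_window = 0 <= t_consequence - t_event <= tau
    return occurs_in_interval and within_tau_window

# Nuclear plant example: T = 10 years of operation, tau = 100 years.
# An accident after 2 years with an effect 50 years later is counted;
assert consequence_counted(t_event=2, t_consequence=52, T=10, tau=100)
# an effect 150 years after that accident falls outside the tau window;
assert not consequence_counted(t_event=2, t_consequence=152, T=10, tau=100)
# and an event after 12 years falls outside the activity interval.
assert not consequence_counted(t_event=12, t_consequence=20, T=10, tau=100)
```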

REFLECTION
The previous discussion referred to several definitions of risk. Would it not be
better to introduce and use one definition only?
All the definitions capture the same basic idea, but there could be different ways
of expressing this idea depending on the situation studied. If you are to refer
to one scientific definition of risk only, it is (C,U). If you are to express risk in a
more commonly used way, you could say that risk is the potential for undesirable
consequences.

REFLECTION
It is common to refer to risk as probability times consequences (expected loss)
and as the combination of consequences (scenarios/events with effects) and
probabilities. Are these definitions wrong or inadequate?

These probability-based expressions are ways of measuring or describing risk, not risk as such. It is theoretically and practically sound to separate the concept and its measurement (description). Risk as a concept exists without introducing probabilities. We face risk when driving a car – no probabilities are needed to make such a conclusion. How we measure or describe risk will depend on the situation considered. As will be thoroughly discussed in Chapter 3, the probability-based metrics have strong limitations in capturing all relevant aspects of risk. The distinction between the concept of risk and its measurement (description) will stimulate a discussion about to what degree the measurements and descriptions adequately reflect risk in various situations, which can lead to improvements in risk characterizations.

2.2 ILLUSTRATING EXAMPLES

The following examples illustrate the ideas of the risk concept with its main components, A, C, D and U. We will discuss the magnitude of the risks in Chapter 3.

2.2.1 A gamble
John offers you a game, only one game. He has two dice, which you are to throw. If
both dice show 6, you have to pay him $36,000; for all other outcomes, you will receive
$3,000 from John.
If you play this game, you face the risk of losing $36,000, but you can also win
$3,000. There is a potential for negative consequences: you lose this amount of money.
The activity considered is the gamble, and the consequence C is the outcome of the activity, which is either a win of $3,000 or a loss of $36,000. Before throwing the dice, the outcome is unknown; therefore, C is subject to uncertainty U. Hence you face risk.
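Purely as an illustration (a sketch of ours, not part of the book's text; how to measure the risk is the topic of Chapter 3), the pair (C, U) in this gamble can be made tangible by simulating the throw of the two dice:

```python
import random

def play_once(rng: random.Random) -> int:
    """One play of John's gamble: -$36,000 on double six, +$3,000 otherwise."""
    d1, d2 = rng.randint(1, 6), rng.randint(1, 6)
    return -36_000 if d1 == 6 and d2 == 6 else 3_000

rng = random.Random(2024)  # fixed seed, chosen arbitrarily for this sketch
outcomes = [play_once(rng) for _ in range(100_000)]

# Before the dice are thrown, C is unknown - that is the uncertainty U.
# Repeated play shows both possible consequences occurring, with double
# six coming up on roughly 1/36 of the plays.
print(sorted(set(outcomes)))
print(sum(o == -36_000 for o in outcomes) / len(outcomes))
```

Note that the simulation already edges toward describing the risk (relative frequencies); the risk itself exists before any such numbers are produced.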

2.2.2 Isabella’s exam


Isabella is to take an exam. For her, any grade worse than B is a failure. There is a potential for an undesirable outcome, although she is well prepared. Here the consequence C is the grade of the exam, or simplified, meeting the goal of grade B or not. Before the exam, this grade is unknown, subject to uncertainty U. Isabella faces risk in relation to the activity, taking this exam. The grade B can be seen as a reference value r. The term D then expresses the deviation from grade B.
It is also possible to extend the consequences C beyond this particular exam. Failure of this exam may result in failure to enter a specific program at the university, which requires a grade of B or better. When considering the consequences C of this exam, we need to clarify the values that we are relating the consequences to. We may choose to focus on the grades themselves or the access to the desired study program. In the latter case, the exam represents a potential for a failure of access to this study program. The failure to get a grade of B or better at the exam can be viewed as an A event in the (A,C,U) representation of risk. Then C captures the consequences in relation to the study program, where the main focus is on failure to get access to the university program. The uncertainties relate to the occurrence of A and the consequences given that event A has occurred. There could be uncertainties about getting access to this program both in the case that Isabella gets the desired exam result and in the case that she does not.

2.2.3 The consequences of epidemics caused by viruses


Let us go back to summer 2019. We consider the risk related to virus disease outbreaks in the coming five years. Let A denote the occurrence of a virus disease outbreak in that period and C the related consequences. These consequences relate to human health, but also to economic, social and security types of impacts, for a short- and long-term horizon. Looking into the future, the occurrences of such outbreaks are not known, nor which outbreaks will occur, what types of viruses and when. And, given an outbreak, the spread and effects are unknown, subject to uncertainties. The consequences will depend on the efficiencies of the mitigation measures taken. In this example, T = 5 years, but the consequences are considered for a much longer period, although not yet specified.

2.2.4 Dose-response
We consider the health of a population exposed to some toxin. The people are exposed for a time interval of length T days. We consider the consequences of the exposure for a period of 10 years, that is, τ = 10 years. The event A is defined by the exposure to this toxin of duration T days. The consequence C is defined as the percentage of the population that develops lung cancer within the period of τ years since their exposure. Today we do not know if they will develop lung cancer within the specified time intervals or not – which explains the U of the risk concept (A,C,U).

2.2.5 A community threatened by hazards


We consider a community over a period of one year; thus T = 1 year. The community is potentially impacted by a hazard A (for example, a hurricane or flooding). The consequences considered relate to the residents within the city limits being able to have access to food, up to the time that the conditions are basically as pre-disruption. Hence τ is of an unknown length. The uncertainty U is explained by the fact that we do not know today if a hazard will strike the community within the year, nor do we know what the consequences will be. The community faces a risk, as there is a potential for a hazard to occur with this type of negative effect.

2.2.6 Nuclear waste


We consider the risk related to events occurring which could disrupt a nuclear waste repository, which in turn could lead to releases, severe health issues and environmental damage. Examples of such events include natural hazards (e.g., earthquakes) and security-type incidents (for example, a terrorist attack). The time period considered is T = τ = 10,000 years. Two levels of consequences C are considered: disruption of the nuclear waste repository and impacts of potential releases in case of such a disruption. Today we do not know if a disruption will occur within the specified time interval and what the effects will be – which explains the U of the risk concept (A,C,U).

2.2.7 Climate change


What are the effects of climate change, for a short- and long-term horizon? On your business? Your life? Your children and grandchildren? Our country and society? And the earth? When defining risk in relation to climate change, the answers to these questions form the consequences C. Today we do not know what these consequences will be. We face uncertainties and hence risk.
Here a reference level r is often defined by the global temperature at 'pre-industrial' time. A main goal of the 2015 Paris Agreement on climate change is to ensure that the increase in global temperature is less than 2°C above 'pre-industrial' levels. Hence, we can interpret C and D as the global temperature rise with reference to the 'pre-industrial' level. The climate change risk is the size of C (D) with associated uncertainty. The consequences can be related to physical quantities, such as the temperature rise, but also other dimensions, such as societal damage and loss, societal justice, metrics of human well-being and economic indices. Using the (A,C) scheme, it may be more natural to relate the global temperature rise to the event A, but, as will be discussed in Section 2.3, these are relative concepts, and the choices depend on the purpose of the conceptualization and analysis.
Risk can also be associated with specific goals, for example, a goal of becoming a
low-emission society. The consequences may then be related to deviations in relation to
this objective or deviations and surprises in relation to plans and knowledge we today
have about this transition, for example, as defined by the Paris Agreement. The associ-
ated risk is often referred to as transition risk.
These are references specified at the authority or government level. For a company,
other references will be more relevant, such as expected or assumed profitability. The
company will be concerned about whether and how climate change and the transi-
tion to a low-emission society will affect profitability. For example, will the authori-
ties introduce climate measures that will have serious consequences for the company’s
future income? The company faces climate-related risks. It is also common to talk
about transition risk in this sense.

REFLECTION
The consequences C relate to the values of interest and could be multidimen-
sional, covering aspects associated with, for example, human health, environ-
ment and material assets. Do you think it is a good idea to transform all of these
aspects into one dimension to ease the considerations of risk?

There exist several approaches for making such transformations, but as we will
see, they are not straightforward to implement. There are technical analysis
problems, and there are fundamental problems related to losing information.
We will discuss this topic in more detail in Chapter 9.

2.3 RISK AND RELATED CONCEPTS

In this section, we will look into concepts related to risk, starting with vulnerability.
Then we discuss the terms resilience, reliability, safety and security.

2.3.1 Vulnerability
The vulnerability concept is closely linked to risk; see Figure 2.2. Starting from the risk representation (A,C,U), we can split risk into two main contributions:

Risk = 'Event risk' (A,U) & Vulnerability (C,U|A)

Let us return to the car example discussed in Section 2.1. Here the event A refers to an
accident, and the consequences C relate to the health impacts for the driver and pas-
sengers from the accident. The actual event A occurring could also be ‘no accident’,
reflected by writing (A,U), which states that we do not know today before the activity
is realized if the accident will occur or not. We refer to (A,U) as the ‘event contribution’
or ‘event risk’ – it expresses the risk of an accident to occur or, in more general terms,
the risk of event A occurring. This risk is not the same as the probability of the event
A occurring. Probability is a way of expressing the uncertainty U, but the existence of
the risk concept is not dependent on the uncertainty being quantified. We will explain
and discuss this in more detail in Chapter 3.
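The split Risk = 'event risk' (A,U) & vulnerability (C,U|A) can be mirrored in a small simulation sketch. This is our illustration, not the book's; the accident probability and the severity categories below are invented purely for the sketch:

```python
import random

def accident_occurs(rng: random.Random, p: float = 0.001) -> bool:
    """The (A, U) part: whether the accident occurs is unknown beforehand."""
    # p = 0.001 is a hypothetical per-trip accident probability
    return rng.random() < p

def consequence_given_accident(rng: random.Random) -> str:
    """The (C, U | A) part: given an accident, the severity is still uncertain."""
    return rng.choice(["no injury", "injury", "fatality"])

rng = random.Random(7)
trip_outcomes = []
for _ in range(10_000):
    if accident_occurs(rng):
        trip_outcomes.append(consequence_given_accident(rng))
    else:
        trip_outcomes.append("successful trip")
```

Sampling the event and the conditional consequence separately mirrors the decomposition: the first function alone says nothing about how severe an accident would be, and the second alone says nothing about how likely one is. Both contributions are needed to understand the risk.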
FIGURE 2.2 The basic features of the risk concept split into an 'event risk' contribution and vulnerability: risk sources (hazards, threats, opportunities) may give rise to an event A, with uncertainty U, which leads to future consequences C with respect to some values, with uncertainty U about the consequences given A

The accident is an example of a risk source, which we define as an element (action, sub-activity, component, system, event) which alone or in combination with other elements has the potential to give rise to some undesirable consequences. For the car case, examples of risk sources in addition to the accident are: a driver distracted by a mobile device, health emergencies for the driver and/or passengers, a high stress level

of the driver, high speed, intense discussions among the passengers, active use of the car-audio system, slippery roads, brake failure and worn tires. Typically, the risk sources are underlying factors of importance for the event A to occur, but formally we also refer to the event A as a risk source. What is identified as the event A in the (A,C,U) representation of risk is not always obvious. In the car example, we could alternatively have introduced A as 'brake failure', which is seen as an underlying risk source for the event accident. We will return to the process of identifying and selecting the A events in Chapters 3−5.
The terms hazard, threat and opportunity are examples of risk sources. A hazard is a risk source where the potential consequences relate to harm, that is, physical or psychological injury or damage. In the car example, all the previous risk factors can be referred to as hazards. The term threat is commonly used in relation to security applications, but it is also used more widely, for example, when referring to the threat of an earthquake. When used in relation to a malicious attack, the term threat is understood as a stated or inferred intention to initiate an attack with the intention to inflict harm, fear, pain or misery. When talking about opportunities, there is an aspect of positivity associated with the risk source influencing the consequences. Think about a business context where an opportunity arises as a result of the government increasing the funds for industrial research and development. Or Isabella's exam example in Section 2.2.2, where an opportunity arises because of a misprint in the text leading to an extension of the exam by 1 extra hour.
The vulnerability concept (C,U|A) is interpreted as risk given the occurrence of the event A. The sign '|' is read as 'given'. Thus, vulnerability represents the potential for undesirable consequences given an event (risk source). The values in focus determine what 'undesirable' means. In the car example, the vulnerability is related to what happens, given the occurrence of the accident, for the driver and the passengers. As for risk, we can extend this definition by specifically addressing the consequences and uncertainties, leading to the vulnerability representations (C,U|A) and (D,U|A).

Definition 1:
Vulnerability: The potential for undesirable consequences given an event (risk
source)

Definition 2:
Vulnerability: The consequences C of the activity and associated uncertainties U,
given an event A (risk source)
Vulnerability: (C,U|A)

Definition 3:
Vulnerability: The deviation D from a ‘reference value’ r, and associated uncertain-
ties U given an event A (risk source)
Vulnerability: (D,U|A)
As mentioned in Section 2.1, A can be one event or a set of events. Which events A to consider is critical for the vulnerability judgments. A system may not be vulnerable to one type of event but very vulnerable to others. If we also allow for unknown types of events, we will clearly have problems in measuring the level of the vulnerability. Yet it is interesting to question if the system is also vulnerable to new types of events. We will return to this discussion in Chapter 3.
We use the term vulnerability in relation to the activity considered and the values addressed. Often vulnerability is discussed in relation to a system, like a person or a technical structure. A related activity is defined by the 'operation' of this system. We say, for example, that a system is vulnerable if a failure of one component leads to system failure. However, this leads us to a discussion of how to measure or describe the level and magnitude of the vulnerability, which is a topic of Chapter 3.
Returning to the examples in Section 2.2, risk can be divided into 'event risk' (A,U) and vulnerability (C,U|A). For the epidemic case of Section 2.2.3, a virus outbreak is the event A, and vulnerability is the potential for undesirable consequences given an outbreak. Clearly both components are essential for understanding the risk and, next, handling it. Similar interpretations can be made for the other examples of Section 2.2.

2.3.2 Resilience
Consider the health condition of some population. We speak about this population as being resilient, meaning that if they are exposed to various risk sources, for example, some infections, they are able to quickly return to the normal state. Resilience is also commonly defined as the ability of the system to sustain or restore its basic functionality (alternatively and more generally: achieving desirable functionality) following a risk source or an event (even an unknown one). Often resilience is concerned with recovery given a major event, for example, a community hit by a strong hurricane. We see that resilience provides key input to the vulnerability concept, as the resilience influences the degree to which something undesirable happens given the event or risk source.

Definition 1:
Resilience: The ability to quickly return to the normal state given an event (risk source)

Definition 2:
Resilience: The ability of the system to sustain or restore its basic functionality following an event (risk source)

As for vulnerability, the event or risk source referred to could be a set of events or risk sources. It is tempting to say that lack of resilience is the same as vulnerability. However, as we use the terms here, there is a difference. Vulnerability is a broader concept. In the previous example, lack of resilience means that the people struggle with returning to a normal health state given the risk source. The vulnerability concept, on the other hand, highlights what the actual consequences of this lack of resilience could be. Referring to the representation (C,U|A), the vulnerability concept addresses all consequences defined by the values considered, for example, the implications of deaths for other family members. This type of consequence would, however, not be relevant for the resilience concept. What matters here is the ability of the system (here these persons) to return to the pre-event state following the disruptive event (the infection). Resilience is thus an input to and an aspect of vulnerability. See Problem 2.11.

Vulnerability and resilience are not the same. Vulnerability, represented by (C,U|A),
is a broader concept. Resilience – how fast a system recovers – is an aspect of the
vulnerability concept.

To further illustrate the resilience concept, consider the following system model. A system has four states: functioning as normal (3), intermediate state (2), intermediate state (1) and failure state (0). The state process is denoted Xt and is a function of time t, t = 0, 1, 2, . . . . Different stressors (events of type A) lead the system to jump to state 2, 1 or 0. We may, for example, let state 2 relate to a known type of stressor and state 1 to an unknown type. Or state 2 may relate to a stressor from which we know that the system will always recover quickly, whereas 1 is a state in which there are considerable uncertainties as to what will happen. Being in state 2 or 1, the system may recover and jump to state 3, but it could also jump directly to state 0. We may think of resilience as being related to the degree to which the system is able to return to state 3 if a stressor causes it to go to state 2 or 1. For example, if it is a fact that the system returns to state 3 in the next period of time if a stressor has made it jump to state 2 or 1, the system is indeed resilient. However, in practice, the system may not return to state 3, or it may return but take an extremely long time to do so. And it may return from state 2 but not state 1.
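A minimal simulation of this four-state model can make the state process concrete. This is our sketch: the transition probabilities are invented for illustration, as the text defines only the state structure.

```python
import random

def step(state: int, rng: random.Random) -> int:
    """One time step of the state process Xt (states 3, 2, 1, 0)."""
    if state == 3:
        # stressors may push the system to state 2, 1 or directly to 0
        return rng.choices([3, 2, 1, 0], weights=[90, 5, 3, 2])[0]
    if state in (2, 1):
        # a resilient system mostly returns to 3, but may also fail to 0
        return rng.choices([3, state, 0], weights=[80, 15, 5])[0]
    return 0  # state 0 is treated as absorbing in this simple sketch

rng = random.Random(3)
path, state = [3], 3
for _ in range(50):
    state = step(state, rng)
    path.append(state)
# 'path' is one realization of Xt; resilience concerns how quickly
# visits to states 2 and 1 are followed by a return to state 3.
```

Varying the recovery weights for states 2 and 1 changes how long the realizations linger in the intermediate states, which is exactly the aspect the resilience concept addresses.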
We can also think about a system which frequently jumps from state 3 to 0 (failures occur) but hardly visits states 1 or 2. The system could be resilient in these two intermediate states, but if the system seldom or never visits these states, it cannot be viewed as very resilient. Hence the system resilience is also linked to the risk of direct system failures (jumping from state 3 to state 0) relative to the risk associated with visits to the intermediate states (jumping from state 3 to state 2 or 1). We will discuss this further in Section 5.6.
As the system in Figure 2.3 is resilient, it quickly recovers when in states 1 and 2.
Thus it cannot be vulnerable when in one of these states. If the system is not resilient for
these states, the system can be considered vulnerable: it stays for a long time in states 1
or 2 and could jump to the worst state, 0. Hence the concepts in this example capture
the same ideas. Consider then these two concepts for state zero. For the resilience of
the system, we are focused on the time it takes to recover. However, for vulnerability,
we may also include considerations of effects of being in state 0 relative to the values of
interest, which is of importance for the risk and vulnerability concepts but not for the
issue of recovery and resilience, repeating the arguments provided previously for why
there is a difference between these two concepts.

FIGURE 2.3 Illustration of a resilient system: the system quickly returns to the normal functioning state (3) when in one of the intermediate states (1 or 2) (based on Aven 2017a). Resilience is shown as the degree to which the state process Xt quickly returns to state 3 when in state 2 or 1.

2.3.3 Reliability
The reliability of a system, for example, a water pump or a power plant, is defined as the ability of the system to work as intended. The reliability – or rather the unreliability – concept can be viewed as a special case of risk by considering the activity generated by the operation of the system and limiting the consequences C to failures or reduced performance relative to the system functions. Unreliability captures, then, the potential for undesirable consequences, namely system failure or reduced performance, and is represented by (C,U) with this understanding of C. The (C,U) definition of risk covers both negative and positive consequences; hence, it is a matter of preference whether we refer to risk in this setting as unreliability or reliability.

2.3.4 Safe and safety, secure and security


Safety is often referred to as an absence of accidents and losses, but such a definition is not very useful in practice, as it refers to an unknown event when looking into the future. However, adding the uncertainty dimension, we are led to the risk concepts (D,U), (C,U) and (A,C,U): the event ‘absence of accidents and losses’ is considered a reference value r, and risk is understood as the deviation D from this reference, with associated uncertainties. Thus, safety can be viewed as the antonym of risk (as a result of accidents). Consider a situation in which you walk on a frozen lake with a thick layer of ice. The risk is low, and the safety level is high. We may conclude that walking on the lake is safe, but then we have made a judgment about the risk: we find it acceptable or tolerable. Thus, the term safe refers to acceptable or tolerable risk. We also often use the safety term in this way, for example, when saying that safety is achieved.
Analogously, we define secure as acceptable or tolerable risk when restricting the
concept of risk to intentional malicious acts by intelligent actors. Security is understood
in the same way as secure (for example, when saying that security is achieved) and as
the antonym of risk when restricting the concept of risk to intentional malicious acts
by intelligent actors.

REFLECTION
A risk manager of a company states that the company has secure deliveries of
essential equipment for their production. How would you interpret this statement
in relation to the previous analysis?
The risk manager is judging the risk related to the deliveries to be acceptable or tolerable. The reasons for adverse consequences could include intentional malicious acts, but other causes are also possible, including accidents. The term secure (and also security) is thus used here in a broader way than defined previously.

2.4 PROBLEMS

Problem 2.1
A person, Tom, walks underneath a boulder that may or may not dislodge from a ledge
and hit him.

a Define and explain the risk concept for this situation (activity, event A and consequences C).
b Define and explain the vulnerability concept for this situation. Consider two
different events A.
c Use the example to illustrate both a safety and a security situation.
d What do you call the boulder using risk terminology?
e Which of the following risk statements do you consider consistent with the definition of risk provided in Section 2.1?
i There is a risk that the boulder will hit Tom.
ii Tom takes a risk by walking under the boulder.
iii Tom is in a risky situation.
iv Tom risks getting hit by the boulder.
v Tom faces risk – the boulder may dislodge from the ledge and hit him.
vi The boulder represents a threat to Tom.
vii The boulder represents a hazard.

Problem 2.2
A person, Sara, invests $1 million in a project. Define risk in relation to this investment. Clarify what type of time considerations are needed.

Problem 2.3
Discuss the term resilience in relation to the human body and potential infections and
trauma. Do you consider the human body resilient? Why/why not?

Problem 2.4
We all would like the food we eat to be safe. Based on the discussion in this chapter,
what does it mean that food is safe?

Problem 2.5
In security contexts, when talking about risk, it is common to refer to the triplet: value,
threat and vulnerability. Explain how this perspective is covered by the general frame-
work presented in this chapter.

Problem 2.6
The concept of sustainability relates to meeting the needs of the present without com-
promising the ability of future generations to meet their needs. Define and explain
related risks, in line with the ideas of this chapter (hint: relate sustainability to the
definition of safety).

Problem 2.7
The risk term is defined in many different ways in the scientific and popular literature.
Identify one or more of these definitions. Are they consistent with the definitions pro-
vided in this book?

Problem 2.8
Conceptualize risks for the example presented in Section 1.1.

Problem 2.9
Conceptualize risks for the example presented in Section 1.2.

Problem 2.10
Conceptualize risks for the example presented in Section 1.3.

Problem 2.11
Pam is a resilient athlete in the sense that she recovers rather quickly from injuries.
At the same time, she can be vulnerable. Explain.

Problem 2.12
Consider the occurrence of a major incident A in a community (for example, an earthquake, a hurricane or a flooding event). Resilience is about the recovery phase, the
ability of the community to return to the normal state. Explain how risk can be defined
in relation to this recovery.

KEY CHAPTER TAKEAWAYS

• Risk is the potential for undesirable consequences of the activity considered and can be formalized as (C,U), where C is the consequences of the activity considered and U the associated uncertainties. Some features of the consequences are undesirable. Alternatively, risk can be represented by (A,C,U), where A is an event and C the consequences given the occurrence of A. Two aspects of time apply: the time interval for which the activity is considered and the time for which the consequences are considered given the occurrence of an event.
• Vulnerability is essentially risk conditioned on the occurrence of an event A. Risk can be written Risk = (A,U) & (C,U|A), where the former term is the event risk and the latter is vulnerability.
• The terms safe and secure imply acceptable or tolerable risks, typically accident risk when talking about safe and intentional malicious acts in a security context.
• Resilience is the ability of a system to sustain or restore its basic functionality following a risk source (known or unknown). It is an aspect of vulnerability.
CHAPTER 3
Measuring and
describing risk

CHAPTER SUMMARY

Anne and Amy are university students who are considering a trip from Washington,
DC, to San Francisco. Anne prefers to travel by plane, but Amy is hesitant because she
views this as being risky. Anne argues that traveling by commercial airliners is very safe
because there have been very few plane crashes over the recent years. Amy is not con-
vinced. She calls for more information and a more careful explanation of what ‘risky’
means.
Anne and Amy are debating several issues. First, they are debating the likelihood of
a plane crash. What is a probability, and how can it be used to evaluate this situation?
Second, they are debating the concept of risk. What aspects are included in a character-
ization of risk? Third, they are debating the concept of safety. Both are making value
judgments about how safe air travel is, using their own values of personal welfare and
uncertainty (probability) judgments.
This chapter will discuss the issues described previously, including a thorough dis-
cussion of probability and risk. It is explained that probability is used to both represent
variation and to express uncertainties. In the former case, we speak about frequentist
probabilities and probability models, whereas in the latter case, we refer to subjective
(knowledge-based, judgmental) probabilities, or just probabilities. It is shown that the
probability concept is an essential tool for measuring and describing risk, in particular
how large or small the risks are. The limitations of the probability concept for this
purpose are also discussed. Using the (C,U) and (A,C,U) definitions of risk, we arrive at general risk characterizations (C’,Q,K) and (A’,C’,Q,K), where A’ is a set of specified events, C’ some specified consequences, Q a measurement or description of uncertainties and K the knowledge that Q and (A’,C’) are based on. The main aim of this chapter is to motivate and explain these risk characterizations. A number of examples will be presented to illustrate the characterizations and their use in practice. Common risk metrics, such as expected values and value-at-risk, are presented and discussed. A related vulnerability characterization will also be looked into: (C’,Q,K|A’). A common approach is to use probability P as the uncertainty description Q, but it should always be supplemented with judgments of the strength of the knowledge (SoK) supporting the probabilities. The chapter also discusses the potential for surprises relative

DOI: 10.4324/9781003156864-3

to the available knowledge. The actual event A occurring may be overlooked by the specification of events in A’. The event A is what we refer to as a black swan, which is a popular metaphor in risk science. Its definition and use will be carefully discussed, as well as a related metaphor – the ‘perfect storm’.

LEARNING OBJECTIVES

After studying this chapter you will be able to explain


• the basic differences between the risk concept and its measurement/
description
• the meaning of a frequentist probability
• the meaning of a probability model
• the meaning of a subjective (knowledge-based, judgmental) probability
• the meaning of the risk characterizations (C’,Q,K) and (A’,C’,Q,K)
• the meaning of the vulnerability characterizations (C’,Q,K|A’)
• what type of risk metrics are commonly used, and what their strengths and
weaknesses are
• why strength of knowledge judgments are needed to supplement probabilities
• why potential surprises and the unforeseen are important issues in relation to
risk characterizations
• the meaning of the black swan metaphor
• the meaning of the perfect storm metaphor

First in this chapter, we discuss the probability concept, which gives a basis for introducing and discussing the risk characterizations (C’,Q,K) and (A’,C’,Q,K). Then some illustrative examples are provided before we end the chapter with some reflections about potential surprises and black swans. Background literature for the chapter includes Lindley (2006), Aven and Reniers (2013), Aven and Renn (2009) and Aven (2012a, 2014, 2020a).

3.1 THE PROBABILITY CONCEPT

Leo plays a game, throwing a die. If the outcome is 1, he loses $36. If the outcome is
2, 3, 4, 5 or 6, he wins $6. What is the probability that he loses money, that the die
shows 1?

Leo thinks about this and answers quickly: it is 1/6, as the possible number of
outcomes is six, and each outcome is equally probable. The sum over all outcomes is 1,
and hence the sought probability must be 1/6. Leo has used the classical interpretation
of probability. He has assumed that the die is fair, meaning that the probability is the
same for all six potential outcomes. But how can we ensure that the die is fair? He can
study the die’s physical properties, its symmetries and weight, and from that conclude
that each outcome should have the same probability of being the result when throwing
the die.
This study of the die’s physical properties may instead reveal that the die is not symmetrical, is weighted or has some other minor physical difference, but Leo does not believe that would change the probability much. He thinks more deeply about what he is actually saying: “would not change the probability much”. He understands that he is now interpreting the probability in a different way than according to the classical interpretation. The probability depends on the properties of the die and is referred to as a frequentist probability. He performs an experiment, throwing the die 60 times. The frequency of outcomes 1, 2, …, 6 is 7, 12, 8, 9, 14 and 10, respectively, resulting in observed relative frequencies 7/60, 12/60, 8/60, 9/60, 14/60 and 10/60. If the die were fair, as in the classical interpretation of probability, he would have expected a relative frequency of 10/60 for each outcome. Clearly, these frequencies do not confirm that the probability of getting the outcome 1 is 10/60. However, when Leo throws the die many more times, he observes that the fraction of times the outcome is 1 becomes closer and closer to 1/6. We say that the observed frequency converges to 1/6. A similar process happens for the other outcomes. The die can be considered fair. The probability of outcome 1 is interpreted as the long-run limiting frequency of outcome 1 if we could perform an infinite number of similar throws of the die. In practice, we cannot perform an infinite number of throws; thus, the frequentist probability is a mind-constructed quantity. In this example, the frequentist probability is equal to the classically interpreted probability. However, in general, the frequentist probability is unknown. Let us think

about a case where the die is not fair. What is then the frequentist probability of getting
the outcome 1?
The answer is that we do not know. It needs to be estimated. Yet this underlying
frequentist probability exists. We therefore need to be careful in writing and distin-
guishing between the frequentist probability and estimates of this probability. In the
example previously, 7/60 is an estimate of the frequentist probability of getting out-
come 1. We happen to know the true frequentist probability in this case (1/6), but in
general that is not the case. In writing, we refer in general to a frequentist probability
as Pf and its estimate as (Pf)*.
Thus, we need to take into account that the estimates could deviate from the frequentist probabilities. There is estimation uncertainty. Textbooks in statistics analyze these uncertainties using measures such as variance (standard deviation) and confidence intervals; see Section 3.2.3.
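Leo’s experiment and the convergence of relative frequencies can be mimicked numerically. The sketch below simulates throws of a fair die (the seed and sample sizes are arbitrary choices) and shows the estimate of the frequentist probability of outcome 1 approaching 1/6 as the number of throws grows.

```python
import random

def relative_frequency(n_throws, outcome=1, seed=0):
    """Estimate the frequentist probability of an outcome by the fraction
    of n_throws of a simulated fair die showing that outcome."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_throws) if rng.randint(1, 6) == outcome)
    return hits / n_throws

for n in (60, 600, 60_000):
    print(f"n = {n:6d}: relative frequency of outcome 1 = {relative_frequency(n):.4f}")
print(f"frequentist probability (limit value): {1/6:.4f}")
```

With small n the estimate can deviate noticeably from 1/6 (estimation uncertainty); as n grows, the observed fraction converges toward the limiting frequency.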
The prominent mathematician Jacob Bernoulli (1654–1705) discussed such issues more than 300 years ago (Aven 2010). He tried to determine probabilities of different types of events, with an accuracy of 1/1000. He referred to the term ‘moral certainty’ about this accuracy. He concluded that a total number of 25,500 trials were required to establish this accuracy. Jacob Bernoulli found this number very high, and it made his analysis difficult. Working on a book project on the topic, Ars Conjectandi, he wrote to Leibniz on October 3, 1703: “I have completed the larger part of my book, but the most important part is missing, where I show how the basics of Ars Conjectandi can be applied to civil, moral and economic matters” (Polasek 2000). Jacob Bernoulli did not finalize the book. Researchers have indicated that Jacob was not really convinced by his examples. He could not specify the probabilities with the necessary precision for situations of practical importance.

The problem with this type of reasoning, the search for accurate determination of
an underlying true probability (i.e., frequentist probability), is the need for repetitions
of similar situations, as in the die example previously. In many real-life events, such
repetition is not possible. As an example, think about the frequentist probability that

Jacob Bernoulli would finish the book within one year following the letter to Leibniz.
Clearly, such a probability cannot meaningfully be defined. It is not possible to estab-
lish repeated experiments. Either he will finish the book or not. It is a unique event.
A frequentist probability represents the fraction of times the event considered occurs if the situation can be repeated under similar conditions infinitely. It represents the variation – event occurs and event does not occur. Whether such a population of similar situations can be meaningfully defined is a judgment call. Thus, the frequentist probability is a model concept, which needs to be justified. It does not exist in all situations.
Think again about the die example. If the die is fair, the probability that the out-
come is any of the numbers 1, 2, 3, 4, 5 or 6 is 1/6. These probabilities are classical
probabilities but also frequentist probabilities representing the variation in outcomes:
throwing the die many times, the fraction of times it shows a particular number is
about 1/6. In the limit, it is exactly 1/6. If the die is not fair (and also if the die is fair),
trials can be used to accurately estimate the true underlying frequentist probabilities,
that is, the variation in outcomes, when conducting an infinite number of trials.
The third and final interpretation of probability we will discuss here is subjective probability, also referred to as knowledge-based or judgmental probability. These are of a different type than classical and frequentist probabilities and can always be specified, as they express the assessor’s uncertainty and degree of belief for an event to occur or a statement to be true – therefore the term ‘subjective’. Let A denote the event that Jacob Bernoulli will finish the book within one year following the letter to Leibniz. We can think about a friend of Jacob on October 3, 1703, assigning a subjective probability of A to occur equal to 0.10, that is, P(A) = 0.10. It means that this friend has the same uncertainty and the same degree of belief for A to occur as randomly drawing a red ball out of an urn containing ten balls, of which one is red. Hence this probability is a judgment of the assessor, not a property of ‘the world’, that is, the situation considered.
If the statement were that the probability is at most 0.10 (10%), it means that this friend has the same uncertainty and the same degree of belief for A to occur as randomly drawing a red ball out of an urn containing ten balls, of which one or zero is red. The probability is an imprecise probability; the assessor is not willing to be more precise.
Let us return to the Leo example introduced in the beginning of Section 3.1 and suppose he notices that the die is not fair. And suppose he is not allowed to throw the die before the game. What is then the probability of getting 1? It is possible to think about an unknown frequentist probability of this event, but we can also consider this probability as a subjective probability. Leo studies the die carefully and assigns a probability of 0.1. This number represents Leo’s judgment; it cannot be said to be objectively right or wrong, as there is no reference for such a statement. The probability is given or conditional on some knowledge, which we refer to as K. We write P = P(A|K), where the sign ‘|’ is read as ‘given’. Thus, Leo assigns a probability equal to 0.10 given the knowledge K. He has the same uncertainty and degree of belief for the outcome 1 to occur as randomly drawing a red ball out of an urn containing ten balls, of which one is red. A subjective probability always has to be seen in relation to the knowledge K. We often refer to K as the background knowledge for the probability assignment. The term ‘knowledge-based probability’ is motivated by this construction P(A|K).

Probability interpretations
Classical probability of event A, Pc(A)
Pc(A) = m/n, where n is the total number of outcomes and m the number of outcomes resulting in event A. It is assumed that all outcomes have the same probability.
Frequentist probability of event A, Pf(A)
Pf(A) = lim xn/n as n goes to infinity, where xn is the number of A events in n trials similar to the one studied.
Subjective (knowledge-based, judgmental) probability of event A, P(A)
P(A): the assigner has the same uncertainty and degree of belief for the event A to occur as randomly selecting a red ball out of an urn containing P(A) · 100% red balls.

REFLECTION
Ronny states that he tells the truth, but Lisa is not convinced. She believes he is
lying and specifies a probability of at least 90% that he is doing so. How would
you interpret this probability?
It is an imprecise subjective probability. The probability means that Lisa’s uncer-
tainty and degree of belief that Ronny is lying is comparable to randomly draw-
ing a red ball out of an urn containing 100 balls, of which at least 90 are red. Lisa
is not willing to be more precise.
A frequentist or classical probability interpretation would not make sense in this
case.

3.1.1 Expected value


Return again to the die example. Suppose the die is fair. If you throw the die over and over again, what would the average outcome be? The answer is 3.5. We refer to this value as the expected value, or the statistical expected value. We note that this value can never be obtained in a single throw; the possible outcomes are 1, 2, 3, 4, 5 and 6. Hence we cannot interpret the expected value as a probable or typical outcome.
Probability theory shows that the expected value, referred to as E[X], where X is the outcome of the die trial, can for this example be computed in this way:

E[X] = 1 · P(X = 1) + 2 · P(X = 2) + 3 · P(X = 3) + 4 · P(X = 4) + 5 · P(X = 5) + 6 · P(X = 6). (3.1)

Therefore, if P(X = x) = 1/6 for x = 1, 2, …, 6, the expected value is indeed 3.5. The
probability P here may refer to any of the three interpretations discussed previously.

When the expected value is founded on frequentist probabilities, it can be interpreted as an average value when the situation considered is repeated over and over again infinitely. However, when the probabilities are subjective (or classical), an interpretation based on averages cannot be used, as such repeatability is not always possible. If X is the number of fatalities as a result of a specific pandemic (e.g., COVID-19), frequentist probabilities cannot meaningfully be defined, and hence a related frequentist interpreted expected value would not exist. However, subjective probabilities can always be defined, and an expected value based on a formula similar to (3.1) can be introduced:

E[X] = 1 · P(X = 1) + 2 · P(X = 2) + 3 · P(X = 3) + … (3.2)

We can interpret E[X] according to this type of formula (3.1 and 3.2) as a center of gravity of the probabilities P(X = x). This interpretation of an expected value is always applicable, independent of the way probability is understood.
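Formulas (3.1) and (3.2) are just probability-weighted sums, which can be checked directly. The sketch below works for any finite outcome distribution, whichever interpretation the probabilities are given; exact fractions are used to avoid rounding.

```python
from fractions import Fraction

def expected_value(dist):
    """Center of gravity of the probabilities: the sum of x * P(X = x),
    cf. formulas (3.1) and (3.2); dist maps outcomes to probabilities."""
    return sum(x * p for x, p in dist.items())

# Fair die: P(X = x) = 1/6 for each outcome x = 1, ..., 6
fair_die = {x: Fraction(1, 6) for x in range(1, 7)}
print(expected_value(fair_die))  # 7/2, i.e., 3.5
```

Note that 3.5 is not a possible outcome of a single throw, in line with the remark above that the expected value is not a probable or typical outcome.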

3.1.2 Examples of probabilistic statements


Probability of getting a disease
In a large population group of people, about 1 out of 1 million get a specific disease on a yearly basis. Frank is a member of this population. What is the probability that Frank gets this disease next year? How should this probability be interpreted?
It is possible to provide two types of interpretations, using frequentist probabilities and subjective probabilities. In the former case, a frequentist probability Pf is introduced, representing the proportion of people in the population who get this disease. The Pf is unknown, but based on the information provided previously, 1 · 10⁻⁶ is an accurate estimate of this frequentist probability.
Alternatively, 1 · 10⁻⁶ can be viewed as a subjective probability P for the event that “Frank gets this disease next year” (A), given the available knowledge K, that is, P(A|K) = 1 · 10⁻⁶. The assessor has the same uncertainty and degree of belief for the event A to occur as randomly drawing a red ball out of an urn containing 1 million balls, of which one is red. The knowledge is strong in the sense that it is based on evidence that shows that about 1 out of 1 million get this disease on a yearly basis, and Frank is considered a typical member of this population.
Now say that we have information that Frank is somewhat more exposed than average for getting this disease. How could we then formulate the probabilities?
We could use an imprecise probability statement, expressing that P(A|K) ≥ 1 · 10⁻⁶; that is, the probability for Frank to get this disease the next year is at least 1 · 10⁻⁶. Using a frequentist interpretation is not straightforward, as the frequentist probability Pf is a property of the population and not of Frank. Hence, we need to change the population in order to take into account this new information. We must define a new population with members who are ‘similar’ to Frank. Then a new, unknown frequentist probability is constructed. The information available provides an estimate of this frequentist probability which is higher than 1 · 10⁻⁶.

The example demonstrates the importance of being precise on what the event con-
sidered is and how the probabilities are interpreted.

The probability of a defect unit in a production process


Consider a production process of units of a specific type. Let p = Pf(A) be the frequentist probability that an arbitrary unit from the production the coming week is defective, where A refers to the event that the unit is defective. This probability is interpreted as the fraction of units being defective in this production process the coming week. To estimate p, a sample of 100 units is collected, showing 2 defects. From this, an estimate p* equal to 2/100 is derived.
Textbooks in statistics show how to express uncertainties in relation to p and p* using concepts like variance (standard deviation), confidence intervals and prediction intervals. Two main lines of thought are used, traditional statistical analysis and Bayesian analysis. For details, see Problem 4.12 and Section 5.2.
Alternatively, we can define a subjective probability P(A|K), expressing the assessor’s uncertainty or degree of belief for the event A to occur, using the urn interpretation discussed previously. Based on the observations, the assessor may conclude that P(A|K) = 2/100, seeing these observations as the background knowledge K.
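The point estimate p* and its estimation uncertainty can be sketched numerically. The interval below uses the standard normal approximation for a binomial proportion (a rough choice with only 2 observed defects; statistics textbooks offer more refined intervals).

```python
def estimate_with_ci(defects, n, z=1.96):
    """Point estimate p* = defects/n and an approximate 95% confidence
    interval for the frequentist probability p, based on the normal
    approximation to the binomial distribution."""
    p_star = defects / n
    half_width = z * (p_star * (1 - p_star) / n) ** 0.5
    return p_star, max(0.0, p_star - half_width), min(1.0, p_star + half_width)

# The production example: 2 defects observed in a sample of 100 units
p_star, lo, hi = estimate_with_ci(defects=2, n=100)
print(f"p* = {p_star:.3f}, approximate 95% CI [{lo:.3f}, {hi:.3f}]")
```

The wide interval relative to p* illustrates the estimation uncertainty discussed above: 100 units say rather little about the true fraction of defective units.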

3.1.3 Probability models


A probability model is a model built on frequentist probabilities, that is, a model of variation. Consider again the previous die example, where the die is not necessarily fair. A probability model is defined by the frequentist probabilities Pf(X = x) of the six outcomes. To simplify the notation, we introduce px = Pf(X = x) for x = 1, 2, …, 6. The probability model is specified by px, for x = 1, 2, …, 6. The frequentist probabilities are in most cases unknown and need to be estimated.
Probability models are commonly used in probability theory and statistics. Well-known types of probability models are the binomial, Poisson and normal (Gaussian).

3.2 BASIC IDEAS ON HOW TO MEASURE OR DESCRIBE RISK

Let us return to the example of Section 2.2.1, where you face the risk of losing $36,000, but you can also win $3,000, depending on the outcome of the two dice. Now the issue is the magnitude of the risk. How large is the risk? How should it be described? Formally, we defined the risk by the pair (C,U), where C is the consequences (outcome) of the activity (the game) and U the associated uncertainties. The consequences C are either a loss of $36,000 or a win of $3,000; hence, it remains to express the uncertainties U. Assuming that the dice are fair, we are led to the probabilities 1/36 and 35/36, respectively, as the loss occurs only if both dice show 6. These probabilities can be interpreted as classical, frequentist or subjective. All apply in this case.

We note that the associated expected win for you as a player is 3,000 · (35/36) – 36,000 · (1/36) = 69,000/36 ≈ 1,917; this is the amount John can expect to lose per game.
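This arithmetic can be verified directly. The sketch below computes the player's expected win exactly with fractions and cross-checks it by simulating many plays with fair dice (the simulation size and seed are arbitrary choices).

```python
from fractions import Fraction
import random

# Probability that both fair dice show 6 (the player's losing outcome)
p_loss = Fraction(1, 36)

# Player's expected win: gain $3,000 with prob 35/36, lose $36,000 with prob 1/36
expected_win = 3000 * (1 - p_loss) - 36000 * p_loss
print(expected_win, "=", float(expected_win))  # 5750/3 ≈ 1916.67

def play(rng):
    """One play of the game with fair dice."""
    d1, d2 = rng.randint(1, 6), rng.randint(1, 6)
    return -36000 if d1 == 6 and d2 == 6 else 3000

# Cross-check by simulating many plays
rng = random.Random(123)
avg = sum(play(rng) for _ in range(200_000)) / 200_000
print(f"simulated average win per game: {avg:.1f}")
```

The simulated average corresponds to the frequentist interpretation of the expected value: the long-run average win per game over many repetitions.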
This expected value is not really providing much information for you, as you are
allowed to play the game only once. However, for John, it is informative. The expected
value shows that on average, if the game is offered to many people, he would lose about
$1,917 per game. Clearly this would be a poorly designed game if the purpose was
to earn money from it. However, we do not know the background for this particular
game. It is also possible that John is cheating, using dice which are not fair. Clearly you
as a player also need to reflect on that risk. How should the risk characterization be
updated to take into account this risk aspect?
There are different approaches; see Table 3.1. One is to consider the previous probabilities as subjective probabilities conditional on the assumption that the dice are fair and provide a judgment of the reasonability of this assumption. Based on the observation that John will lose money in the long run if the dice are fair, this assumption can indeed be questioned. The knowledge supporting the probabilities 1/36 and 35/36 is weak.
An alternative and related approach is to add to the previous subjective probabilities a judgment about the risk related to this assumption being wrong. This risk is expressed as an imprecise probability with associated knowledge strength judgments, for example, an imprecise probability expressing that it is likely – there is at least a probability of 50% – that the dice are not fair, with the knowledge judgments considered rather strong, the argument being that the expected value of the game favors the player: John will lose money in the long run if the dice are fair.

TABLE 3.1 Alternative approaches for expressing your risk

Approach 1: The dice are fair (assumed).
  Event/consequences: win 3,000, lose 36,000. Probabilities: 35/36 and 1/36.
  Knowledge: conditional on the assumption.
  Comments: based on the assumption that the dice are fair.

Approach 2: Assigned probabilities, strength of knowledge judgments.
  Event/consequences: win 3,000, lose 36,000. Probabilities: 35/36 and 1/36.
  Knowledge: weak knowledge supporting these probabilities.

Approach 3: Assigned probabilities, judgment of risk related to assumption deviation.
  Event/consequences: win 3,000, lose 36,000. Probabilities: 35/36 and 1/36.
  Knowledge: weak knowledge supporting these probabilities (35/36 and 1/36).
  Comments: an imprecise probability expressing that it is likely – at least a probability of 50% – that the dice are not fair, with strong knowledge supporting this judgment.

Approach 4: Assigned probabilities, probabilistic analysis of John cheating.
  Event/consequences: win 3,000, lose 36,000. Probabilities: P(loss = 36,000|K) ≈ 0.45.
  Knowledge: weak supporting knowledge.
  Comments: P(John cheating|K) = P(not fair dice|K) = 0.90, P(losing 36,000|John cheating, K) = 0.50.

Approach 5: Assigned probabilities, probabilistic analysis of John cheating, imprecise probabilities.
  Event/consequences: win 3,000, lose 36,000. Probabilities: P(loss = 36,000|K) ≥ 0.05.
  Knowledge: stronger supporting knowledge.
  Comments: P(John cheating|K) ≥ 0.50, P(losing 36,000|John cheating, K) ≥ 0.10.

Approach 6: Assigned probabilities, strength of knowledge judgments, focusing on the event (John cheating) and the consequences given this event.
  Event/consequences: win 3,000, lose 36,000. Probabilities: 35/36 and 1/36.
  Knowledge: strong knowledge. Comments: assuming John is not cheating.
  Event/consequences: win 3,000, lose 36,000. Probabilities: P(loss = 36,000|John is cheating, K) is large.
  Knowledge: relatively strong. Comments: assuming John is cheating, the dice are not fair.
  Event: John is cheating. Probability: rather high. Knowledge: relatively strong.
As a third approach, we seek to add a probabilistic analysis of John cheating in this game. Suppose you assign a probability of John cheating equal to 90% given your background knowledge, that is,

P(John cheating|K) = P(not fair dice|K) = 0.90,

and a probability of 50% of losing $36,000 if the dice are not fair, that is, if John is cheating. Then, using probability calculus (the law of total probability), you can calculate a probability of losing $36,000 equal to

P(C = 36,000|K) = P(C = 36,000|fair dice, K) · 0.10 + P(C = 36,000|not fair dice, K) · 0.90
= (1/36) · 0.10 + 0.50 · 0.90 ≈ 0.45.

The background knowledge for this probability would be weak, as it is difficult to assign the probability that John is cheating and the probability of a loss of $36,000 if the dice are not fair. The probability number 0.45 seems poorly justified and thus somewhat arbitrary. The arbitrariness can be reduced somewhat by using imprecise probabilities, for example, saying that there is at least a 50% probability that he is cheating and that P(C = 36,000|not fair dice, K) is at least 10%, which leads to the calculations

P(C = 36,000 | K) = (1/36) · P(fair dice | K) + P(C = 36,000 | not fair dice, K) · P(not fair dice | K)

≥ P(C = 36,000 | not fair dice, K) · P(not fair dice | K) ≥ 0.10 · 0.50 = 0.05 = 5%.

Thus, there is at least a probability of 5% of losing $36,000, and this may be sufficient to conclude that you would not like to play the game. You judge the knowledge supporting the probability as medium strong.
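The imprecise-probability bound can be checked the same way; the inputs are the stated lower bounds:

```python
# Lower bound for P(C = 36,000 | K) using the imprecise probabilities:
# P(not fair dice | K) >= 0.50 and P(loss | not fair dice, K) >= 0.10.
p_cheating_lower = 0.50
p_loss_given_cheating_lower = 0.10

# Dropping the non-negative 'fair dice' term only lowers the total, so the
# product of the two lower bounds bounds the loss probability from below.
p_loss_lower = p_loss_given_cheating_lower * p_cheating_lower
print(p_loss_lower)  # 0.05
```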
As a final approach, risk is described by two main contributions, as indicated previously. The first is the same as in the quantitative approach, presuming the dice are fair. The second is simply a qualitative description of the risk in the case that John is cheating. Let A be the event that he is cheating and C the consequences for you, as previously.
Then you describe risk by reporting

Risk description when A does not occur (is not true): loss of $36,000 with probability 1/36 and win of $3,000 with probability 35/36.
Risk description when A is true: a loss of $36,000 is rather likely. The supporting knowledge is considered relatively strong.
It is likely that A is true, as John would lose money in the long run if the dice are fair. The background knowledge is rather strong.

Overall, risk is considered high, given that it is rather likely that you will lose $36,000,
and the knowledge supporting that judgment is rather strong.
This description and judgment of the knowledge strength would of course be dif-
ferent if you knew John and were informed about how he had designed this game. You
would then ignore the possibility that he is cheating.
Subjective probabilities are needed if we are to incorporate the cheating risk aspect
into the analysis. Classical or frequentist probabilities cannot be meaningfully defined.
In all of the previous approaches for describing risk, there are three main elements:

• The identified or specified consequences (events, effects/consequences).
• A way of describing the uncertainties.
• The knowledge supporting the previous judgments.

In general terms, we denote these three elements C’, Q and K, respectively, leading to a risk description or characterization (C’,Q,K). When events are in focus, we are led to (A’,C’,Q,K). See Table 3.2.
The identified or specified events and consequences (A’ and C’) are not the same as A and C in the risk definition. A and C are the actual event and consequences occurring, whereas A’ and C’ are those considered in the analysis. We may, for example, think about a situation in which you play the game and win but do not get the money from John: he just disappears. Hence, the actual C is equal to 0, which is an outcome not covered in your risk description. We will discuss this issue further in Section 3.4 when addressing surprises and black swans.
The description Q covers probability (precise or imprecise) and judgments of the
strength of the knowledge (SoK); see Problem 3.12. We have discussed probabilities
MEASURING AND DESCRIBING RISK 35

TABLE 3.2 General elements of a full risk description or characterization (A’,C’,Q,K) for a risk (A,C,U) related to an activity

Risk description element | Meaning | Examples
A’ | Specified events | John is cheating.
C’ | Specified consequences | Outcome of the game.
(C’,Q) | A description of the risk (C,U): the uncertainties U expressed by Q, probability (imprecise probability) and strength of knowledge (SoK) judgments | Risk description when A’ does not occur (is not true): loss of $36,000 with probability 1/36 and win of $3,000 with probability 35/36. Risk description when A’ is true: a loss of $36,000 is rather likely, as John would lose money in the long run if the dice are fair; the supporting knowledge is considered relatively strong.
K | The knowledge that Q (and A’ and C’) are based on | A belief that John will give you the $3,000 if you win.

in Section 3.1 and illustrated the SoK concept previously, but further discussions are needed. How should we conclude that the knowledge is strong or weak? Is it possible to quantify the SoK?
Thinking about the previous dice example, it can quickly be concluded that SoK judgments cannot be meaningfully quantified on a continuous, interval measurement scale, such as those used for weight or height. Rather, we are restricted to an ordinal scale (a nominal scale with order), for example, weak knowledge, medium strong knowledge and strong knowledge.
knowledge. Think about your knowledge concerning John cheating. To transform that
knowledge into one unique number is problematic, as discussed previously, even for an
interval. Clearly the knowledge would be strong if we had a lot of relevant data sup-
porting the probability judgments or a strong understanding of the phenomena studied.
If we had performed tests or weight and symmetry analysis of the dice, the supporting
knowledge would be strong. In the absence of such data and analysis as in this case, the
knowledge is weak. Common issues to consider when making judgments about the SoK
are (Aven and Flage 2018):

• The reasonability of the assumptions.
• The amount and relevancy of data/information.
• The degree of agreement among experts.
• The degree to which the phenomena involved are understood and accurate models
exist.
• The degree to which the knowledge K has been thoroughly examined (for example, with respect to unknown knowns, i.e., knowledge that others have but the analysis group does not).

In the dice example, the assumption of fair dice is critical. Clearly, if this assumption is used as a basis for the risk description, the knowledge would be judged as weak. Care has to be shown when it comes to the third criterion, as experts may have the same type of background and training. Hence, this criterion should highlight agreement across different ‘schools’ and perspectives. The fourth criterion could, for example, relate to the justification of a specific probability model. The fifth and last criterion questions whether the knowledge has been scrutinized, to identify potential surprises relative to the current knowledge. For example, in the dice example, we could question whether processes have been conducted to look for events that could lead to outcomes other than a loss of $36,000 or a win of $3,000.
See Problem 3.19 for examples of scoring systems based on such issues.
The knowledge K is of different types, but for the present discussion, it is best
explained as justified beliefs. You have knowledge about the game and John. In general,
the knowledge is founded on data, information, modeling, testing, argumentation and
so on. Often the knowledge adopted can be formulated as assumptions, as in the previ-
ous case, when stating the dice are fair.
It is useful to distinguish between general knowledge (GK) and specific knowledge (SK) for the activity considered (Aven and Kristensen 2019). To illustrate, consider the epidemic example of Section 1.3. The GK includes all the generic knowledge available on epidemics and pandemics, whereas the SK includes knowledge concerning the specific situation related to the particular epidemic discussed.
In addition, we need to clarify whose knowledge we refer to. We distinguish here between your knowledge, John’s knowledge and the total knowledge available when other people’s knowledge is also added.
Evidence as a concept is closely related to knowledge. It captures the basis for a belief or statement in the form of data, information, modeling insights, test results, analysis results and so on. A risk assessment can produce evidence in the form of a risk characterization of relevance for making a judgment about a statement being true. In a court, risk assessment results can be used as evidence, acknowledging that they do not represent the truth but a judgment about the truth, subject to uncertainties.
In situations of very weak knowledge, a broad imprecision interval would typically be used. The extreme case is the interval [0,1], which means that the assessor is not willing to use the probability scale at all. The analyst assesses the knowledge and leaves
it up to others to conclude whether the relevant event is likely to occur. However, in
most cases, there is some data or information available that allows for a judgment like
‘unlikely’, for example, indicating a probability below 10%. This judgment is then
supplemented with a description of the knowledge supporting it, as well as a judgment
of the strength of knowledge.
New knowledge, such as in the form of new data, can make the SoK stronger. The
knowledge could potentially reduce the uncertainty about the unknown quantities (A’
and C’). However, it could also lead to a higher level of uncertainty about these quanti-
ties, for instance, if the new knowledge challenges existing beliefs and assumptions. We
are also cautious in noting that more data has the potential to mislead and promote
over-confidence in the assessment: The data could to a varying degree be informative
(accurate and relevant) for the issues discussed.

3.2.1 Vulnerability
If John is cheating, it is likely that your loss will be $36,000. You are vulnerable in the case of John cheating. We remember that vulnerability was defined as (C,U|A), basically reflecting the risk – the potential for negative consequences – given the occurrence of an event A’, here that John is cheating. Thus, we can describe the vulnerability given that John is cheating as follows:

A likely loss of $36,000, as John would lose money in the long run if the dice are fair. The supporting knowledge is considered relatively strong.
In general terms, the vulnerability is described by (C’,Q,K|A’), that is, risk conditional on the occurrence of the event A’. We remember from Section 2.3.1 that risk can be written as (A,U) and vulnerability as (C,U|A). Similarly, risk can be described by

Risk description = (A’,Q,K) & (C’,Q,K|A’) = Event risk description and Vulnerability description

The last row of Table 3.1 provides a risk description in line with this separation between
event risk description and vulnerability description.

3.2.2 How to conclude that the risk level is high or low


Risk is described by (A’,C’,Q,K), which is multidimensional, and it is not straightfor-
ward to conclude that the risk is high or low. Fortunately, there are ways of simplifying
the judgments. The first one relates to small risks.
A risk is judged as low if the probability of the related undesirable events/consequences is small and the associated strength of knowledge is strong. The activity can then be seen as safe. For example, food is safe if the probability of getting an illness as a result of the food is small and the supporting knowledge is strong. What is sufficiently low depends on regulatory standards established for food, as well as on the value judgments of the person. We will discuss the issue in Chapter 8 when addressing issues related to risk acceptance. In the present section, our focus is on how to characterize the risk to be able to make appropriate judgments about its acceptability.
Moreover, we can conclude that the risk is high if the probability of the related undesirable events/consequences is high and the associated SoK is strong. Consider the risk related to the number of fatalities from a pandemic in a period of one year, described by a probability distribution as shown in Table 3.3. Is the risk description (C’,P) showing that the risk is high? It is of course impossible to say without having a reference for what is high. We can relate the numbers to the typical death rates in that country, to the numbers in other countries, as well as to the intervention costs to mitigate the pandemic. However, as previously, the discussion in the present chapter is more about how to present the risk than what the conclusions would be. The probability distribution of Table 3.3 can be informative but, as stressed repeatedly, we also need to add judgments of the strength of knowledge supporting these probabilities.

TABLE 3.3 Number of fatalities due to a pandemic per 100,000

Number of fatalities | Less than 1 | 1–10 | 10–50 | 50–100 | >100
Probability | 0.05 | 0.25 | 0.40 | 0.25 | 0.05
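To illustrate, the distribution in Table 3.3 can be summarized with a few lines of code (a sketch; the tail probability shown is one possible summary, not a metric the text prescribes):

```python
# Probability distribution for pandemic fatalities per 100,000 (Table 3.3).
intervals = ["<1", "1-10", "10-50", "50-100", ">100"]
probs = [0.05, 0.25, 0.40, 0.25, 0.05]

# A valid distribution: the probabilities sum to 1.
assert abs(sum(probs) - 1.0) < 1e-9

# One possible summary: the probability of at least 50 fatalities
# per 100,000 (the last two categories combined).
p_at_least_50 = probs[3] + probs[4]
print(round(p_at_least_50, 2))  # 0.3
```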

REFLECTION
Suppose the probability of some undesirable event/consequence is high, and
the associated SoK is weak. Would you then categorize the risk as high?
Yes, remembering that risk is the potential for some undesirable consequences,
or the combination of undesirable consequences and uncertainties, and in this
case, there are considerable uncertainties related to the occurrence of some
undesirable event/consequence.

It is challenging to compare risk levels when using probability distributions and SoK judgments. Different metrics are often used to simplify the comparisons; see Section 3.3. In general, such metrics should be used with care, as they often provide a rather poor characterization of the risk. In the following, we discuss additional qualitative aspects to consider when making judgments about the risk being high or low.
For a risk to be classified as high, the following criteria need to be considered (a minimum of one of these criteria applies) (Aven and Kristensen 2019):

a) The risk is judged to be high when considering consequences and probability
b) The risk is judged as high when there is a potential for severe consequences and considerable uncertainties (weak knowledge)
c) A high degree of vulnerability, lack of resilience
d) Weak general knowledge about the consequences C of the activity
e) Weak specific knowledge about the consequences C of the activity
f) Strong general knowledge about undesirable features of the consequences C of the activity
g) Strong specific knowledge about undesirable features of the consequences C of the activity

For example, the risk is classified as high if a critical failure of a safety system has been
revealed through testing, by reference to a), c) and g). We can also classify the risk as
high if the safety system has not been tested, with reference to b), c) and e). In the for-
mer example, we argued according to what we know and, in the latter, to what we did
not know.
The undesirable features of the consequences could relate to risk sources and
events, as well as barriers implemented to avoid the occurrence of undesirable events

and reduce the effects if such events should occur. The category ‘high’ is typically associ-
ated with an unacceptable risk, which requires additional measures in order to proceed.
Analogously, we can specify criteria for low (all relevant criteria apply):

h) No potential for serious consequences
i) The risk is judged to be low when considering consequences and probability, and supporting knowledge is strong
j) Solid robustness/resilience
k) Strong general knowledge about the consequences C of the activity
l) Strong specific knowledge about the consequences C of the activity

Medium applies to all other cases.

3.2.3 Some examples


We first return to the plane example introduced at the beginning of this chapter, where Anne and Amy plan a plane trip from Washington, DC, to San Francisco. Then we discuss the examples in Section 3.1.2: the risk of getting a disease and the risk related to defective units in a production process. Finally, we include a radon example, following up the discussion of Section 3.2.2.

Traveling by plane
Amy argues that the trip is safe: the risk is very low, she claims. But Anne is not convinced.
The natural point of departure for such a discussion is accident statistics. A number of documents present historical data regarding fatalities in aviation, for commercial airliners and in general. There are different ways of presenting the statistical data, but a common metric is fatalities per 1 million flights. The data show a positive trend over the years. Typical numbers presented for recent years show about 0.1–0.5 fatalities per million flights for commercial airliners. For general aviation, a typical number is 1 fatality per 100,000 exposed hours (FAA 2018), thus considerably higher than for commercial aviation. Extensive amounts of data are available, showing differences with respect to a number of factors, for example, flight type, flight phase (for example, approach, landing, cruise) and continent. Other types of references are also used, for example, fatalities per km traveled. Do these data allow us to conclude whether it is safe to travel by plane?
To be able to do this, we first need to clarify what safe means. As noted in Section 2.3.4, people often understand being safe as being associated with an absence of failure or loss: here, that the flight does not result in a crash with associated fatalities and injuries. However, looking into the future, we do not know if such an event will occur or not; we face uncertainties. Thus, we are led to considerations of risk. We say the flight is safe if the risk is sufficiently low. This forces us to clarify what ‘sufficiently low’ risk means. The historical accident numbers referred to previously clearly relate

to risk, but risk is about the future, and, hence, we cannot simply use the data we have without reflecting on their relevancy. However, taking into account the comprehensiveness and solidity of the accident data related to flying, and the structure and features of the aviation industry, we quickly conclude that the accident data are relevant for flights today, tomorrow and later. The industry will basically remain the same, although events like COVID-19 could lead to some changes. Looking at the data, overall positive accident trends are observed over the years. The reasonable explanation is better planes and improved safety and risk management systems.
Thus, for Anne and Amy’s travel, these historical metrics are informative for the judgment about the risk being sufficiently low. What is sufficiently low is in principle a subjective judgment. Amy may find the risk sufficiently low, but not Anne. Amy concludes that the risk is sufficiently low and acceptable for the trip, as the accident risk metrics are so small and the basis for these metrics is strong. Clearly, if the data supporting the metric of, for example, 0.1–0.5 fatalities per million flights had been weak, the metric would not have been given the same weight. To conclude that an activity is safe, it is not sufficient that the derived probability of a negative outcome be low; the knowledge supporting this probability also needs to be strong.

A safe activity:
The probability of undesirable consequences is low and the supporting knowl-
edge strong

The judgment of the risk being sufficiently low and acceptable is subjective. However, in cases like this, with such strongly founded metrics showing such small accident rates, the judgments quickly become intersubjective: there is broad agreement in the judgments. A rationale is provided by comparing the metric with those of other types of activities, for example, other transportation means (e.g., car driving). Although there are some measurement problems in producing comparable results for different activities, the overall message of such comparisons is clear: flying shows low fatality rates compared to other transportation methods. It is much safer to fly than to drive if we take into account the traveling time.
A risk and vulnerability description is shown in Table 3.4. The probabilities have a strong knowledge base. Given a crash, the vulnerabilities are high.

TABLE 3.4 Risk description for plane example

Risk description element | Meaning | Examples
A’ | Specified events | Plane crash
C’ | Specified consequences | Injuries and loss of lives
(C’,Q) | A description of the risk (C,U): the uncertainties U expressed by Q, probability (imprecise probability) and strength of knowledge (SoK) judgments | Fatality probability less than 1 · 10⁻⁶ for a flight; strong knowledge basis
K | The knowledge that Q (and A’ and C’) are based on | Historical data are the only source used
(C’,Q,K|A’) | Vulnerability description given a plane crash | Given a plane crash, the fatality probability is very high. The knowledge is strong. Hence, the vulnerability is large

Finally, in this example, some comments concerning security issues. Anne and Amy have decided to travel by plane from Washington, DC, to San Francisco. First, they have to go through security to enter the gates. We all know why these security arrangements have been developed. Many travelers view these security precautions as unavoidable but irritating, increasing travel times. Some question the security efforts made by the aviation authorities, referring to statistics showing that the number of deaths as a result of such attacks at airports is basically negligible compared to other activities (Aven 2015a). However, this type of reasoning is unjustified. The low number of incidents can
rather be explained by effective security management. Attackers have not been able to conduct events like those of September 11 since 2001. If we had not implemented such arrangements, it is not unreasonable to conclude that many types of serious incidents would have occurred. There is no certainty, but any serious judgment of the risk facing the aviation industry due to terrorist attacks would state that the risk would be great and clearly unacceptable if we had not implemented strong security measures.
In the absence of security arrangements, Anne and Amy would not have made a
plane trip. The reason is simply that they then judge the risk to be too high. However,
given the current security arrangements, they judge the risk to be low and acceptable.
It is also possible to travel from Washington, DC, to San Francisco by train, but it would be a much longer trip. For this means of transportation, there are few security arrangements. Yet we consider the related risk acceptable. Is the risk so low? If we look at the statistical data, we would indeed find that the frequencies of security incidents are low, and the knowledge basis is strong. We consider it a safe means of transportation. Now let us perform a thought experiment. Suppose a series of terrorist events occurred on trains in one particular country. The number of events is still rather limited, but they raise concern in the country and in other countries, as there is considerable uncertainty about the intentions of the group responsible for the attacks. A security expert claims that there is nothing to be worried about. Using the incident data, the death risk is claimed to be tiny compared to, for example, driving. However, people are concerned, and the expert indicates that this is because of risk perception aspects like fear and dread. The claim is that it is not rational to avoid traveling by train.
This reasoning fails. The issue is more complicated than this. First, risk is not given
by the historical data. Uncertainty is a main component of risk, and, when considering
how to deal with the risk, the issue of uncertainties is critical: What are the strategies of
the attackers? Will we experience an increase in the frequency and/or intensity of such
attacks in the near future? How will these attackers react to the measures taken by the
authorities? See further discussion in Chapter 6 (Section 6.1.1 in particular).

The risk of getting a disease


In Section 3.1.2, we considered the probability that Frank would get a specific disease next year. Now we extend the discussion by asking: How big is the related risk?
One way of describing the risk is to use subjective (knowledge-based) probabilities together with judgments of the strength of knowledge. A probability equal to 1 · 10⁻⁶ is assigned, that is, P(A|K) = 1 · 10⁻⁶, where A is the event that Frank gets this disease next year. The knowledge K is strong, as it is based on data showing that about 1 out of 1 million people get this disease on a yearly basis, and Frank is considered a typical member of this population.
Another approach is to define the consequences C’ by the frequentist probability Pf, representing the proportion of people in the population who get this disease. An estimate of Pf equal to 1 · 10⁻⁶ is derived. Uncertainties about Pf are ignored, as 1 · 10⁻⁶ is considered an accurate estimate of Pf.
In the general case, we also need to consider uncertainties about Pf. Subjective probabilities can be used for that purpose, with associated strength of knowledge judgments. Suppose Frank has shown some symptoms of getting the disease, but the evidence is difficult to interpret. The doctor uses the general statistics together with the specific knowledge (the symptoms) and arrives at a 90% uncertainty interval for Pf, [0.001, 0.5], meaning that the doctor’s subjective probability for Pf to be in this interval is 90%, that is

P(0.001 ≤ Pf ≤ 0.5) = 0.90.

The doctor considers the knowledge supporting this judgment relatively weak.
Traditional statistics provide different types of uncertainty characterizations for Pf, including confidence intervals. These intervals do not reflect the assessor’s uncertainty about the value of the unknown Pf but express variation in observations if we could repeat the type of situation considered over and over again. To establish a confidence interval, we need a set of relevant observations, that is, a population of n persons similar to Frank, having the same type of symptoms. If X denotes the number of persons among the n who have gotten the disease, we can use probability theory to establish two quantities Y and Z, depending on X, so that Pf(Y ≤ Pf ≤ Z) = 0.90. The interval [Y,Z] is a confidence interval of level 90%. If we could do the ‘experiment’ over and over again, the interval would cover the unknown Pf in 90% of the cases. We refer to textbooks in statistics.
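This frequentist interpretation can be illustrated by simulation (a sketch; the true value of Pf, the sample size and the normal-approximation interval are all illustrative assumptions, not values from the text):

```python
import math
import random

# Simulate repeated 'experiments': in each, observe X out of n and form an
# approximate 90% confidence interval [Y, Z] for Pf. Over many repetitions,
# [Y, Z] should cover the true Pf in about 90% of the cases.
random.seed(42)
pf, n, reps = 0.10, 500, 2000
z = 1.645  # 95th percentile of the standard normal (two-sided 90% level)

covered = 0
for _ in range(reps):
    x = sum(random.random() < pf for _ in range(n))
    p_hat = x / n
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    if p_hat - half <= pf <= p_hat + half:
        covered += 1
print(covered / reps)  # close to 0.90
```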

The risk related to defective units in a production process


Consider the production process defined in Section 3.1.2, with A referring to the event that a unit is defective. The consequences of the activity are defined by the fraction of units that are defective, and the uncertainties are described by a probability distribution or prediction interval for that fraction, with associated strength of knowledge judgments. Let X be the number of defective units among the 100 sampled units. The assessor predicts X to be two. In addition, a 90% prediction interval for X (or X/100) is presented: [0,5] ([0, 0.05]), meaning that it is 90% probable that X will be in the interval [0,5], based on the current knowledge. The knowledge basis is a previous sample of 100 units.
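Under a simple binomial model – an illustrative assumption; the assessor’s knowledge-based distribution need not be binomial – the probability content of the interval [0,5] can be checked directly:

```python
from math import comb

# Binomial model for the number of defective units X among n = 100,
# with an assumed defect probability p = 0.02 (the predicted 2 out of 100).
n, p = 100, 0.02

def pmf(k):
    """P(X = k) under the binomial model."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p_interval = sum(pmf(k) for k in range(6))  # P(0 <= X <= 5)
print(round(p_interval, 3))  # well above the stated 90% level
```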

Radon example
Consider a house development project during spring and summer, with an issue of radon risk (Aven and Kristensen 2019). The owner has currently not been able to make proper radon measurements in the house, as radon needs to be measured in the winter. Based on a radon map, the owner observes that there are high radon concentrations in the area where the house is located. The general knowledge concerning radon risk is strong, and the risk is judged high according to the scheme presented in Section 3.2.2, for example, by referring to criterion f). The general knowledge recommends that some protection barriers be installed. The specific knowledge is judged to be rather weak, and, to strengthen this knowledge, the owner would need to defer the project some six to nine months to make detailed measurements. The results of such measurements could potentially show that the radon exposure for the house is small and no special protection is required. However, a delay of six to nine months is considered too costly for the owner, and the conclusion is that the protection barriers be installed on the basis of the general knowledge, in line with the cautionary principle and the implementation of robust/resilient measures; refer to Chapter 8.

3.3 COMMON RISK METRICS

In its most general form, risk is described by (C’,Q,K) or (A’,C’,Q,K). From these expressions, specific metrics are often used, summarizing or highlighting aspects of risk. Here we will discuss some of them, including

• Expected value E[C’]
• Distributions P(C’ ≥ c) and risk matrices based on P(A’) and E[C’|A’]
• Value at Risk (VaR), defined by a quantile of the probability distribution of C’

Related frequentist metrics can also be defined. The P and E then need to be replaced
by estimates Pf* and Ef*.
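The quantile definition of VaR can be sketched in code (the sample losses below are illustrative, not from the text):

```python
import math

# Sketch: Value at Risk as a quantile of the loss distribution.
losses = sorted([0, 0, 1_000, 2_000, 3_000, 5_000, 8_000, 12_000, 20_000, 36_000])

def var(sample, level):
    """Empirical VaR at the given level: the smallest loss l such that
    at least level * 100% of the sample lies at or below l."""
    idx = math.ceil(level * len(sample)) - 1
    return sample[idx]

print(var(losses, 0.90))  # 20000
```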

3.3.1 Expected values


Let us return to the die gamble of Section 2.2.1, where John offers you one game: you will receive $3,000 from him if the outcome is a success for you – the two dice do not both show 6 – and you have to pay him $36,000 if both dice show 6.

Suppose the dice are fair. Then the expected value (win) for you is, as noted in Section 3.2:

3,000 · (35/36) – 36,000 · (1/36) = 69,000/36 ≈ 1,917.

Say that you are informed that the expected value (win) of the game is equal to $1,917, with no further information. Would that information be sufficient for you to make a decision whether to play the game? Certainly not – you need to see beyond the expected value. The possible outcomes of the game, here $3,000 and –$36,000, and the associated probabilities are required. A loss of $36,000 could have severe consequences for you. Seemingly, the game is attractive for you, as the expected value is positive and large, but you may lose a considerable amount of money, which may cause you to think very carefully about whether to play. To accept the game, you will reflect on the value/utility of the different outcomes.
Hence, a risk description based on expected value alone would not be very informative. It would, however, be so for John, if he offers this game to many people. Then $1,917 would be an accurate estimate of his actual loss per game. This follows from the law of large numbers; see the discussion subsequently and Ross (2009). Clearly, if he offered this game to many people, he would lose a considerable amount of money.
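The portfolio argument can be illustrated numerically (a sketch; the simulation simply plays the fair-dice game many times):

```python
import random

# Expected value of the game for the player, fair dice:
ev = 3_000 * (35 / 36) - 36_000 * (1 / 36)
print(round(ev))  # 1917

# Law of large numbers: the average outcome over many games approaches
# the expected value, which is what matters for John if he offers the
# game to many people.
random.seed(1)
n_games = 200_000
total = 0
for _ in range(n_games):
    double_six = random.randint(1, 6) == 6 and random.randint(1, 6) == 6
    total += -36_000 if double_six else 3_000
print(round(total / n_games))  # close to 1917
```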
The observation that we need to see beyond expected values in decision-making situations goes back to Daniel Bernoulli (1700–1782), more than 250 years ago. In 1738, the Papers of the Imperial Academy of Sciences in St Petersburg presented an essay with this theme: ‘the value of an item must not be based on its price, but rather on the utility that it yields’ (Bernstein 1996). The ‘price’ here refers to the expected value. To reflect the utility, an adjusted risk metric can be derived, –E[u(C’)], where u is a utility function. See Problem 3.14.
The previous probabilistic risk numbers are founded on a strong knowledge basis (given that the dice are confirmed fair). If, however, the dice are not necessarily fair, the expected value will be based on a weak knowledge base. Probabilities can be assigned (see Section 3.2 and Problem 3.14), leading to an updated expected value, but this value cannot be given much weight because of the weak supporting knowledge.

It can be argued that the expected value is an attractive risk metric, as it is based on one number – or potentially a few numbers, if the metric is split into different consequence attributes (loss of lives, environmental damage, etc.). However, given that the aim is to describe risk, the metric has some strong limitations, to a large extent motivated by the previous discussion (Aven 2020e):

1 It does not show the potential for extreme consequences or outcomes.
2 It does not show the uncertainties of the estimates of E[C’] (frequentist case).
3 It does not show the strength of the knowledge on which E[C’] is based (knowledge-based case).

The first point represents a serious weakness of this metric and is discussed previously
using the die example. A portfolio type of argument can be used to justify the use of
expected values in some cases, as for John if he plays the game many times. Another
MEASURING AND DESCRIBING RISK 45

example is an insurance company with a number of activities or projects covered. This
company is mainly interested in the average value and not the individual ones. By
the law of large numbers, we know that under certain conditions, the average value
converges with probability one to the expected value of each quantity. However, for
this argument to be valid, the number of projects must be large and there must be
some stability to justify the existence of frequentist probabilities. With the potential for
extreme and surprising observations, the approximation of replacing the average with
the expected value could be rather poor – the uncertainties in the estimates would be
large.
The question now is the extent to which we can apply the portfolio type of argu-
mentation for real-life risk, for example, global risk or national risk (e.g., pandemic,
extreme weather events, cyber attacks). In the case of global or national risks, we have
a number of projects or activities, but, unfortunately, these cannot, in general, be seen
as averaging themselves out. There are two main problems. The first is that, in such
risk studies, a main focus and interest is extreme types of events, which occur relatively
rarely. Second, the world is rapidly changing, and the stability required to think in rela-
tion to frequentist probabilities is challenged. A key aspect to consider, when looking
into the future and the related risks, concerns incorporating potential surprises and
what today is not foreseen. The conclusion is that an expected value computed today
could be a poor predictor of the future average value, even when taking a global or
national perspective.
As mentioned previously, in practice, E[C’] characterizations are often refined by
presenting the pair P(A’) and E[C’|A’], where A’ is an event (hazard, threat, opportunity)
and E[C’|A’] is the conditional expected value, given the occurrence of A’. The previous
discussion concerning E[C’] also applies to E[C’|A’], although this split into these two
terms reduces some of the issues discussed previously for E[C’], in particular the prob-
lem of multiplying a small probability of the event occurring with a large consequence
number. However, E[C’|A’] is still an expected value, and points 1–3 apply. Important
differences in potentials for extreme outcomes are not revealed using the approach.
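To make the point concrete, here is a small numerical sketch (all numbers are invented for illustration) of two hazards that would occupy the same cell in a risk matrix based on P(A’) and E[C’|A’], even though only one of them has the potential for an extreme outcome:

```python
# Two hazards with the same P(A') and the same E[C'|A'], but very
# different tails. All numbers are illustrative, not from a real study.
p_event = 0.01  # P(A') for both hazards

# (consequence, conditional probability) pairs given the event A'
hazard_1 = [(100, 1.00)]                 # always about 100 fatalities
hazard_2 = [(50, 0.99), (5_050, 0.01)]   # usually ~50, but 1% chance of 5,050

def cond_expected(dist):
    """Conditional expected consequence E[C'|A'] given the event."""
    return sum(c * q for c, q in dist)

# Both evaluate to 100 (up to floating point): same matrix cell,
# yet hazard_2 carries a potential for an extreme outcome.
print(cond_expected(hazard_1), cond_expected(hazard_2))
```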
Another example to illustrate this discussion is the following (Aven 2017c). A country
has about 100 facilities, all of which have the potential for a major accident leading
to a high number of fatalities. A risk assessment is conducted, and the probability
of such an event at each facility in the next 10 years is computed as 0.010. From the
assessment, one such major event is then expected in this period. A safety measure is
considered for implementation. It is, however, not justified by reference to expected
value calculations reflecting the benefit of the measure as well as the costs. The expected
benefit of the measure is calculated to be rather small. The costs are considered too
large in comparison with this expected value. The rationale is that we should expect
one such event in the period, and the measure considered would not really change this
conclusion.
However, the perspective taken is close to being deterministic and destiny oriented.
One such event does not need to happen. Safety and risk management aim to avoid
such accidents, and, if we succeed, the benefits are high – saving many lives. The value
of the safety measure is not fully described by the expected number. The measure’s
value is mainly about confidence and beliefs that the measure can contribute to avoid-
ing an occurrence of the accident.
46 BASIC CONCEPTS

Probabilities are commonly used for this purpose. Implementing the safety measure
can, for example, result in a reduction in the accident probability from 0.010 to 0.009,
which shows that it is less likely that a major accident will occur. However, the differ-
ence is small and will not really make any difference for the decision-making problem.
One major accident is still foreseen.
The full effect of a risk-reducing measure is not adequately described by refer-
ence to a probability number alone. A!broader concept of ‘confidence’ is better able
to reflect the total effect. This concept is based on probability judgments, as well as
assessments of the knowledge supporting these judgments. For example, it matters a
great deal whether the probability judgments are based on strong knowledge or weak
knowledge. In the case of the safety measure discussed previously, its implementation
could reduce risk by reference to criteria h), i), . . ., l) defined in Section 3.2.2.

3.3.2 Distributions P(C’ ≥ c) and risk matrices based on P(A’) and E[C’|A’]
Next, we consider the risk description based on the pair of consequences and prob-
ability, (C’,P), partly following the analysis of Aven (2020e). This metric meets critique
1 raised against E[C’] discussed in the previous section but not 2 and 3. In practice,
standard risk matrices are often used to present the risk. The matrix typically shows the
risk expressed for different events (hazards/threats) using the probability of the event
P(A’) and the conditional expected value E[C’|A’] or a typical C’ value for different
categories of consequences. There are often ambiguity problems in defining the events
A’ and the related consequence value. The event A’ could, for example, be defined as
‘flooding’, but its consequences vary from small to extreme. The definition used is criti-
cal for the probabilities assigned and the magnitude of the consequence dimension. If,
for example, a broad event A’ is defined, like ‘flooding’, a rather low consequence value
follows. However, the probability of A’ is relatively high. If, on the other hand, a more
specific A’ event is defined, the probability is lower, but the consequence value is higher.
Clearly, if the study is not clear on the definitions, different people would make dif-
ferent choices, and the overall results, for example, represented by average judgments,
would be meaningless.
For this reason, an adjustment should be made by fixing events B having some defined
large consequences – for example, at least 1,000 or 10,000 fatalities – and then focusing
on P(B). A set of B events may be considered. The ambiguity problem is then solved.
In the frequentist case, uncertainties of the estimates need to be assessed. If only
hard data are used to estimate the frequentist probabilities, confidence intervals can be
produced, but these only show variation in data and do not say much about uncertain-
ties regarding the ‘true’ values of Pf. Similar types of problems may occur in the case
that the estimates are based on expert opinions, but, using Bayesian analysis (see
Problem 4.12 and Section 5.2), it is possible to make formalized expressions of
uncertainties about Pf, which reflect different sources of uncertainties. The Bayesian approach
leads to many probability assignment tasks, and, normally, some types of uncertainty
intervals are preferred, showing, for example, a 90% uncertainty interval for Pf.
The interval can be generated by removing the most extreme expert estimates – for
example, below the 5% percentile and above the 95% percentile. Such intervals are
most informative when used for the Pf(B) case, as these are then the only probabilistic
quantities used.
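The trimming just described can be sketched in a few lines (the expert estimates below are invented for illustration, and the whole-tail trimming used here is one simple convention among several):

```python
import math

def trimmed_interval(estimates, trim=0.05):
    """Interval spanned by the estimates remaining after removing the
    lowest and the highest 100*trim% of the sorted sample."""
    xs = sorted(estimates)
    k = math.floor(trim * len(xs))  # number of values cut in each tail
    kept = xs[k:len(xs) - k] if k > 0 else xs
    return kept[0], kept[-1]

# Invented expert estimates of a frequentist probability Pf
experts = [0.001, 0.002, 0.002, 0.003, 0.003, 0.004, 0.004, 0.005,
           0.005, 0.006, 0.006, 0.007, 0.008, 0.008, 0.009, 0.010,
           0.012, 0.015, 0.020, 0.080]

low, high = trimmed_interval(experts, 0.05)
print(f"interval for Pf after trimming: [{low}, {high}]")
```

Note how the single extreme estimate (0.080) is removed, so the resulting interval is not dominated by one outlying expert.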
The case of knowledge-based probabilities is similar to the frequentist case in many
respects, with obvious changes in terminology. Here the probabilities are not uncertain,
but the knowledge supporting the probabilities can be more or less strong. Different
probabilities produced by different assessors are seen as reflecting different knowledge
bases and degrees of belief of the relevant events. The analysts may use the total set
of expert judgments to generate their best judgments, or the study could present an
interval showing the variation in judgments among the experts. A 90% interval could
be derived, as for the frequentist case, but the interpretation would then be different.
The interval in the knowledge-based case does not reflect uncertainties, as there is no
presumed true value to relate the assignments to. All the assignments in the knowledge-
based case could be based on more or less strong knowledge, but that is not revealed
in the metric.
Risk matrices presenting scores for impact/consequences of specified events, with
associated probabilities, are commonly used in practice. Two main problems with using
such risk matrices, as partly reflected by the previous discussion, are the following:

• The consequences of the events are in many cases not properly represented by
one point in the matrix but by several with different probabilities. If we restrict
attention to one point, this value would typically be interpreted as the ‘expected
value’ (conditional expected value given the initiating event), which is the center of
gravity of the probability distribution for the appropriate consequences. In most
cases, this value is not very informative in showing the range of possible (or even
the most likely) consequences. Take the event ‘pandemic’. It is possible to foresee
many scenarios with severe negative impact, ranging from a rather limited number
of affected persons to situations where millions are suffering. Grouping all such
scenarios into one, and highlighting only the expected (mean) value, can obviously
lose essential information needed to characterize the risk and usefully inform
decisions.
• Two events can have the same location in the risk matrix, but the knowledge sup-
porting these judgments could be completely different, as discussed previously.

To meet these challenges, adjusted risk matrices have been developed. See Figures 3.1
and 3.2. Figure 3.1 shows an example of an ‘extended risk matrix’, which presents
risk using consequence categories given the events, probabilities of the events and SoK
judgments. Four A’ events are shown. As an example, consider the event in the lower-
right corner. This event has a catastrophic consequence, a low probability (≤ 0.01) and
weak SoK.
In Figure!3.2, the consequences are fixed, as referred to previously when introduc-
ing the events B. This eases the interpretation of the diagram. With the consequences
specified, the risk characterizations would cover two dimensions for the events consid-
ered: the probability P of the event and the strength of knowledge SoK supporting the

FIGURE 3.1 Example of an extended risk matrix also including the strength of knowledge
(based on Aven and Thekdi 2020, p. 15). [The matrix plots probability bands (≥ 0.90,
0.50–0.90, 0.10–0.50, 0.01–0.10, ≤ 0.01) against consequence categories (Small, Moderate,
Considerable, Significant, Catastrophic), with shading indicating strong, medium strong or
weak knowledge K.]

FIGURE 3.2 A risk matrix showing risk scores for probability and strength of knowledge,
with fixed consequences (based on Aven 2020e). [The matrix plots the same probability
bands against strength of knowledge (Strong, Medium, Weak); Event 2 is placed in the
0.50–0.90 band, and Events 1 and 3 in the 0.01–0.10 band.]

probability judgments, as shown in Figure!3.2. The scores are judgments made by the
assessors. In the example of Figure!3.2, the biggest risks are those in the upper-right
corner, as these have high scores on probability and weak knowledge strengths.
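A scoring scheme of this kind is straightforward to implement. The following sketch (the event names, probability values and SoK labels are purely illustrative; the band limits follow Figures 3.1 and 3.2) assigns each event a probability band and a strength-of-knowledge judgment:

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    p: float   # knowledge-based probability of the event
    sok: str   # strength of knowledge: 'strong', 'medium' or 'weak'

def prob_band(p: float) -> str:
    """Map a probability to the bands used in Figures 3.1 and 3.2."""
    if p >= 0.90:
        return ">= 0.90"
    if p >= 0.50:
        return "0.50-0.90"
    if p >= 0.10:
        return "0.10-0.50"
    if p >= 0.01:
        return "0.01-0.10"
    return "<= 0.01"

events = [Event("Event 1", 0.05, "weak"),
          Event("Event 2", 0.60, "medium"),
          Event("Event 3", 0.05, "strong")]

# Events combining a high probability band with weak supporting knowledge
# would be placed toward the upper-right corner of Figure 3.2.
for e in events:
    print(f"{e.name}: probability band {prob_band(e.p)}, SoK {e.sok}")
```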

3.3.3 Value at Risk


In some applications, in particular business and the financial service industry, it is
common to use the risk metric value-at-risk (VaR), xp, which equals the 100p% quantile
of the probability distribution of the potential loss X (Aven 2010). That means
that xp is given by the formula P(X ≤ xp) = p. Typical values used for p are 0.99 and
0.999. Thus, if the VaR at probability level 99% is $1 million, there is only a 1%
probability of a loss exceeding $1 million. VaR has an intuitive appeal as a risk
measure. However, it suffers from some severe problems. The main one is that it does not
reflect the size of potential outcomes higher than the VaR value. We may have two
situations: in one, X is limited to values close to the VaR value, and in the other,
there is a potential for extreme outcomes. The metric does not reveal the difference.
In addition, the metric does not take into account the strength of the knowledge sup-
porting the probabilities.
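This limitation is easy to demonstrate with a small numerical sketch (the loss samples are invented, and the order-statistic quantile used here is one simple convention among several): two loss distributions can share the same 99% VaR while one of them allows outcomes two orders of magnitude larger.

```python
import math

def var(losses, p=0.99):
    """Empirical value-at-risk: a simple order-statistic estimate of
    the 100p% quantile of the loss sample."""
    xs = sorted(losses)
    return xs[math.ceil(p * len(xs)) - 1]

# Two loss samples of 1,000 outcomes each, with the same 99% VaR:
bounded = [0.0] * 989 + [1.0e6] * 11               # losses capped near $1M
extreme = [0.0] * 989 + [1.0e6] * 6 + [1.0e8] * 5  # potential $100M losses

print(var(bounded), var(extreme))   # same VaR for both samples
print(max(bounded), max(extreme))   # but very different worst cases
```

Both samples return a 99% VaR of $1 million, yet the second carries a potential loss of $100 million that the metric does not reveal.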

3.4 POTENTIAL SURPRISES AND BLACK SWANS

Let us return to the game introduced in Section 2.2.1, where John offers you a game:
you have to pay him $36,000 if both dice show 6, and you will receive $3,000 from
John otherwise. Suppose you make your assessment based on the assumption – your
belief – that both dice are fair. Then it is revealed at a later stage that the dice were not
fair. You are surprised by this new information. You did not foresee this scenario.
Using the general risk setup of (A,C,U) and (A’,C’,Q,K), the knowledge K was
based on the assumption that the dice were fair. However, knowledge and assump-
tions can be wrong, as in this case. It is obviously difficult to include this aspect in the
risk characterization, as it extends beyond the knowledge that forms the characteriza-
tion. However, following the guidance provided by describing risk through (A’,C’,Q,K),
the knowledge aspects are highlighted, in particular the strength of the knowledge.
Judgments of this strength will stimulate reflections and analysis which could reveal
potential surprises. A key point in these judgments is the consideration of the validity
of assumptions. There is no guarantee that all erroneous assumptions and knowledge
are identified, but the issue is focused on, and some potential surprises can be revealed.

As discussed in previous sections, knowledge can also be characterized as weak
or strong, not only wrong or correct. To explain, think about the situation where you
assign a rather high probability of John cheating; the supporting knowledge is consid-
ered relatively strong, as John would lose money in the long run if the dice were fair.
You make no assumption about whether the dice are fair. If it turns out that the dice are
in fact fair, you may be surprised, based on your knowledge. By also presenting the
judgment of the supporting knowledge, the overall risk results will shift focus from the
probabilities to the arguments and evidence provided.
Here is another example of the issue of surprises and how it relates to risk
characterizations.
Terry is a professor and teaches a course in risk analysis. It is time for examina-
tions. Terry walks around among the students, who work hard solving the problems.
Several students point to a particular problem and question if a specific formulation is
correct. Terry reviews the comments and has to admit that the formulation represents
an error. Terry is surprised. He has prepared such examinations for 30!years and never
experienced any mistake of this type. In addition, he has implemented strong quality
assurance processes, which include providing accurate answers to the problems, check-
ing the examination text carefully several times and having a colleague reviewing it
before accepting the test. Terry was convinced that the test was fine. He considered
the probability of having errors in the test very small and based on strong knowledge.
The error risk was considered very small. Yet, the error occurred!– a surprise for Terry
relative to his knowledge.

Hence Terry produced a risk characterization according to the book by specifying
the consequences (here focusing on errors) and using probability and strength of
knowledge judgments (Q as P and SoK). The third component K captures the knowledge
supporting the judgments made and includes, for example, the belief that his
colleague actually did a serious check of the text. It turns out that Terry, a few
days before the examination, actually observed a signal of complacency – he felt that
making the examination test was rather routine work – it always goes fine. However,
he ignored this signal. This feeling of complacency could have resulted in a lack of
awareness and precision when reviewing the text. The signal should have been added
to the knowledge basis. By acknowledging it, Terry could have reconsidered the P and
SoK judgments, and he probably would have made an extra check of the text.
The example demonstrates the importance of highlighting all aspects of the risk
characterizations (C’,Q,K). The knowledge K is not static. By careful analysis, exam-
ining, for example, assumptions and signals, new knowledge can be gained and the
analysis will be based on a stronger, more justified knowledge basis.
The black swan metaphor is commonly used for surprising events. We define a
black swan here as a surprising extreme event relative to one’s knowledge (Aven 2014,
2015c). ‘Extreme’ means that events have large/severe consequences.

The history of the black swan metaphor goes back a long time. The most common
interpretation is the one provided by Taleb (2007). About 300 years ago, all observed
swans in the Old World had been white. But then, in 1697, a Dutch expedition to
Western Australia, led by Willem de Vlamingh, discovered black swans on the Swan
River – a surprising event for people in the Old World. In his famous book from 2007,
Taleb refers to a black swan as an event with the following three attributes. First, it is
an outlier, as it lies outside the realm of regular expectations, because nothing in the
past can convincingly point to its possibility. Second, it carries an extreme impact.
Third, in spite of its outlier status, human nature makes us concoct explanations for its
occurrence after the fact, making it explainable and predictable. See Problem 3.16 for
a discussion of the differences and similarities between two definitions of a black swan.
Taleb (2007) presents a number of examples of black swans:

Just imagine how little your understanding of the world on the eve of the events of
1914 would have helped you guess what was to happen next. (Don’t cheat by using
the explanations drilled into your cranium by your dull high school teacher). How
about the rise of Hitler and the subsequent war? How about the precipitous demise
of the Soviet bloc? How about the rise of Islamic fundamentalism? How about the
spread of the internet? How about the market crash of 1987 (and the more unex-
pected recovery)? Fads, epidemics, fashion, ideas, the emergence of art genres and
schools. All follow these Black Swan dynamics. Literally, just about everything of
significance around you might qualify.
(Taleb 2007, p. x)

Taleb comments specifically on the September 11 event:

Think of the terrorist attack of September 11, 2001: had the risk been reasonably
conceivable on September 10, it would not have happened. If such a possibility
were deemed worthy of attention, fighter planes would have circled the sky above
the twin towers, airplanes would have had locked bulletproof doors, and the attack
would not have taken place, period. Something else might have taken place. What?
I don’t know. Isn’t it strange to see an event happening precisely because it was not
supposed to happen? What kind of defense do we have against that? Whatever you
come to know (that New York is an easy terrorist target, for instance) may become
inconsequential if your enemy knows that you know it. It may be odd to realize
that, in such a strategic game, what you know can be truly inconsequential.
(Taleb 2007, p. x)

Suppose a national risk assessment in the United States was conducted in 2000. A risk
characterization (A’,C’,Q,K) is presented. Then it could be the case that A’ does not
include such an attack as September 11. That means that the actual A is not covered
by the assessment A’; a surprise occurs relative to the knowledge of the assessors. In
this case, we refer to the event as an unknown known, meaning that it was unknown
to the analysts but known to some (here the terrorists). We also talk about unknown
unknowns (events not known by anybody).
For the September 11 event, a different type of argument can be used to explain
why it came as a surprise and is a black swan. Suppose the risk assessment has
identified this type of event as a threat. It is included in (A’,C’,Q,K). However, the likelihood
of the event may be judged to be so low (also with strong knowledge support) that it is
considered that it will not happen. As discussed in the previous quote by Taleb, had
that not been the case, some measures would have been implemented to avoid it
happening. Figure 3.3 illustrates the different types of black swans.
Clearly these events represent a challenge, as they come as a surprise relative
to our knowledge. It is not possible to show or present the related risks as a part
of the risk descriptions, but we can and should highlight the knowledge on which
the judgments are based and, in particular, the assumptions made. Addressing and
discussing this knowledge and these assumptions are, in many cases, equally, if not
more, important than highlighting the probabilities derived. The way we have con-
ceptualized and described risk underlines the importance of analysis processes to
scrutinize the knowledge K. Many of these approaches, methods and models are
based on monitoring and tracking critical assumptions. Irrespective of how thorough
the assessment is, potential surprises are an important analysis issue. The focus
on such surprises ensures that this type of risk is highlighted and acknowledged.
Decision-makers need to cope with the issue, and analysts should guide them on
how to do this.

Types of black swans (all with extreme consequences):

a) Unknown unknowns
b) Unknown knowns
c) Known but not believed to occur (because of low judged probability and strong
knowledge support)

FIGURE 3.3 Types of black swans (based on Aven and Krohn 2014)

REFLECTION
The black swan has become a very popular metaphor. Why do you think it has?
Because of the black swan metaphor, we have noticed an increased interest
and enthusiasm for discussing risk analysis. The metaphor has made a complex
issue comprehensible and practical. Risk assessments give us insights about
risk. However, these assessments have limitations. They do not show how the
world performs but instead provide judgments about the world. Surprises occur
relative to these judgments, despite often seemingly strong evidence and expert
agreement. That is the essence of the metaphor. It applies not only to risk analysis
but to any type of seemingly strongly justified belief. The history and our frameworks
of thought did not give us any ideas of a different perspective, a different truth.

3.5 PROBLEMS

Problem 3.1
Let C be the outcome from the throw of a die and let p = Pf(C = 1). Explain what p
expresses. The die is of a special type, and p is not necessarily 1/6. Let us assume that
p is either 1/6 or 1/3. Explain how you can express your uncertainty about p using
knowledge-based probabilities. Assign concrete numbers to illustrate how it is done.

Problem 3.2
A person states that he predicts B to occur 3 times in a 10-week period. Give two
possible interpretations of this expression by reference to frequentist probabilities and
knowledge-based (subjective) probabilities.

Problem 3.3
In a risk assessment, a risk analyst, Per, has introduced a probability model f(y), where
f(y) equals the frequentist probability that Y equals y, y = 0, 1, 2, 3, 4. Explain what f(y)
expresses. How will you interpret the expected value Ef[Y]? Assume that the following
estimates of f(y) have been established: 0.80, 0.10, 0.05, 0.03 and 0.02, respectively.
From these numbers, estimate the expected value Ef[Y].

Problem 3.4
Variation is commonly represented using probability models. The Poisson model is
used to model the variation in the occurrences of events per unit of time (for example,
days). Express the Poisson model and interpret the probabilities.
Let D denote the difference between the Poisson frequentist probabilities f(x) = Pf(X = x)
and the true fractions g(x) with x number of events occurring. Explain why it is
reasonable to refer to D as model error and uncertainty about D as model uncertainty.

Problem 3.5
Consider again the Poisson model with parameter λ. Assume that λ is unknown.
Specify knowledge-based (subjective) probabilities to express your uncertainty
(degree of belief) of λ when you would like to reflect that λ could be either 1 or 2
with equal probability.

Problem 3.6
A risk analyst, Kari, is to make a judgment about the value of an unknown future
observable quantity N. She knows that N is either 0, 1, 2, 3 or 4. To express her
uncertainty about N, she assigns probabilities P(N = 0), P(N = 1), . . ., P(N = 4). Say
that these probabilities are 0.80, 0.10, 0.05, 0.03 and 0.02, respectively. Explain the
meaning of these probabilities. What is the expected value? Explain the meaning of this
value. Is this value uncertain? Explain.

Problem 3.7
Present a 95% prediction interval (i.e., a prediction set) for N in Problem 3.6. Explain
what the interval (set) means.

Problem 3.8
Consider a probability model f(x) = Pf(X = x), x = 1, 2. Explain the meaning of f(x).
Let p = f(1) (hence f(2) = 1 – p). Assume that p is unknown. Specify a subjective
probability density to express your uncertainty (degree of belief) of p when you would like
to reflect that any interval of the same length has the same probability.

Problem 3.9
In a safety context, two common risk metrics are PLL (potential loss of lives) and FAR (fatal
accident rate). PLL is defined as the expected number of fatalities in a period of one year,
whereas FAR is defined as the expected number of fatalities per 100 million exposed hours.

Suppose the PLL is 1/10,000. The number of exposed hours in a year is 1,000.
Calculate a related FAR value. What is the ‘individual risk’ (IR), assuming 10 people
are equally exposed? The individual risk is defined as the probability that a specific
person is killed in a year.

Problem 3.10
A risk assessment presents risk using a risk matrix, with the two dimensions P(A’) and
E[C’|A’], where A’ = event and C’ = consequence/loss. Argue why a risk description
based on such a matrix could be rather misleading.

Problem 3.11
A risk assessment presents risk using a risk matrix. Two scenarios are given the same
loss-probability score, where probability is a knowledge-based probability. In the risk
assessment, it is concluded that the risks related to these scenarios are of the same mag-
nitude. What is the problem with this conclusion?

Problem 3.12
Two risk analysts discuss whether strength of knowledge (SoK) judgments are related
to the knowledge supporting the probabilities P or the combination of consequences C’
and P. Can you help them clarify the issue?

Problem 3.13
Tim plans a car trip from Paris to Rome. Introduce a frequentist probability p for an
accident to occur during this trip. Express what p means. Is it a meaningful concept to
use in risk analysis?

Problem 3.14
It is suggested to define risk by the expected disutility –E[u(C’)], where u is a utility
function. Discuss the suitability of such a definition.

Problem 3.15
Explain what a black swan of the unknown known type means using the (A,C,U) –
(A’,C’,Q,K) conceptualization.

Problem 3.16
Does the definition of a black swan used in this book capture the same ideas as the
definition by Taleb?

Problem 3.17
The black swan metaphor is the most common metaphor for reflecting rare, surprising
events. Another metaphor often referred to is the ‘perfect storm’, which originates
from an extreme storm in 1991. This storm was a result of the combination of a storm
that started over the United States, a cold front coming from the north and the tail of
a tropical storm originating in the south. All three meteorological features were known
before and occur regularly, but the combination is very rare. The crew of a fishing boat
decided to take the risk and face the storm, but its strength was not foreseen. The storm
struck the boat, which capsized and sank; nobody survived (Paté-Cornell 2012). This
extreme storm is now used as a metaphor for a rare confluence of well-known phenomena
creating an amplifying interplay leading to an extreme event (Glette-Iversen and
Aven 2021).
Would you classify this event as a black swan?

Problem 3.18
Was the occurrence of the 2020 COVID-19 pandemic a black swan? Why?

Problem 3.19
Based on the criteria suggested in Section!3.2 to assess the strength of knowledge (SoK),
suggest some possible ways that scoring systems can be defined for what is strong
knowledge, medium strong knowledge and weak knowledge.

Problem 3.20
Is knowledge expressing the truth?

Problem 3.21
Look for some historical data comparing fatality rates for travel by plane and travel
by car. Suppose a specific risk analysis, based on some data, shows that the FAR value
for flying is double that of driving. Would you conclude that it is riskier to fly than to
drive? The FAR value is defined in Problem 3.9.

Problem 3.22
Give an example of a known unknown (events which we know will occur, but we do
not know the form of the events and when they will occur).

Problem 3.23
In a class, let the students assign probabilities and SoK judgments for a specific event
and present the scores using a scatter plot (see Figure 3.2).

Problem 3.24
Study current reports from the Intergovernmental Panel on Climate Change (IPCC)
on climate change risk. How is risk described? Make comparisons with the recom-
mendations of the present chapter. Comment on the meaning of the statement that it is
extremely likely (at least 95% probability) that most of the global warming trend is a
result of human activities (IPCC 2014).

Problem 3.25
Prepare a five-minute video presentation that explains the differences among classi-
cal, frequentist and knowledge-based probability. Post this video on social media with
#whatisrisk and ask respondents to comment on the video. Prepare a written response
that explains:

• Was it difficult to explain this concept? Why?


• Did the respondents understand your video? Were there parts that were poorly
understood?
• What questions do you still have about these probabilities after making the video
and receiving feedback?

Problem 3.26
A traditional risk matrix is based on presenting (P(Ai), E[C|Ai]), for events Ai, i = 1,
2, . . ., n. Present an alternative approach based on specifying a set of categories of
consequences associated with the events. Why do you think many analysts prefer using
this alternative approach?

KEY CHAPTER TAKEAWAYS

• Probability is used to reflect uncertainty, imprecision and variation.


• Knowledge-based (subjective) probabilities express uncertainties.
• A knowledge-based probability P(A|K) is equal to 0.15 (say) if the assigner has the
same uncertainty and degree of belief for A to occur as randomly drawing a red ball
out of an urn containing 100 balls, of which 15 are red, given the knowledge K.
• Imprecise probabilities express imprecision. They are interpreted similarly.
• An imprecise knowledge-based probability P(A|K) is equal to an interval [0.10,
0.20] (say) if the assigner has the same uncertainty and degree of belief for
A to occur as randomly drawing a red ball out of an urn containing 100 balls, of
which 10, 11, . . . or 20 are red – given K. The assigner is not willing to be more
precise.
• Frequentist probabilities represent variation in infinite populations of similar
situations.
• A frequentist probability Pf(A) is interpreted as the limiting fraction of times
A would occur when repeating the situations considered over and over again.

• Risk is described by (C’,Q,K) and (A’,C’,Q,K), where A’ is a set of specified events,
C’ some specified consequences, Q a measurement or description of uncertainties
and K is the knowledge that Q and (A’,C’) are based on.
• Q is commonly represented by (P,SoK) where SoK are judgments of the strength of
the knowledge supporting the probabilities P.
• Common risk metrics are expected values E[C’], probability distributions P(C’ ≤ c),
value at risk and combinations of P(A’) and E[C’|A’] as in risk matrices. SoK judg-
ments should be added to these metrics.
• To present (A’,C’,Q,K), it is useful in many cases to fix C’ and show P(A’) and SoK
judgments.
• Vulnerability is described as (C’,Q,K|A’).
• Common vulnerability metrics are conditional expected values E[C’|A’] and prob-
ability distributions P(C’ ≤ c|A’). SoK judgments should be added to these metrics.
• A black swan is a surprising extreme event relative to one’s knowledge. Different
types of black swans are shown in Figure 3.3.
CHAPTER 4
Basic theory of risk
assessment

CHAPTER SUMMARY

This chapter discusses concepts and tools to conduct a risk assessment, which consists
of methods that aim at improving our understanding of the risks considered and in this
way supporting decision-makers when deciding what actions to take. A comprehensive
risk assessment addresses several key aspects:

What can happen (‘go wrong’)? Leverage past data, expert opinions, trends, modeling
and so on to identify potential events that could cause negative (and positive) con-
sequences (outcomes). The task of the risk assessment team is to understand these
risk events by distinguishing between risk sources, threats, hazards and opportuni-
ties. While this team cannot imagine or list out every possible outcome, a prelimi-
nary list is essential for the later steps. How to address potential surprises also
needs attention but represents a challenge.
Link risk events to consequences: Why and how the potential events (discussed previ-
ously) could result in negative or positive consequences. These consequences can be
good or bad or have a mix of good and bad aspects.
Assess uncertainty: Acknowledge uncertainty and use a characterization of uncertainty
to frame an understanding of what can and will happen, covering judgments about
likelihood and strength of knowledge as explained in Chapter!3.
Evaluate the risk: Determine the significance of the risk (for example, expressing
that the risk derived is exceeding what is typically accepted in society) and rank
alternatives using relevant criteria. There is a leap from the risk assessment to
the decision, referred to as the management review and judgment. It is defined
as the process of summarizing, interpreting and deliberating over the results of
risk assessments and other assessments, as well as of other relevant issues (not
covered by the assessments), in order to make a decision. As the risk assessment
has limitations in capturing all aspects of risk and there are other aspects than
risk that are important for the decision-making, a management review and judg-
ment is always needed.

DOI: 10.4324/9781003156864-4

As an example of a risk assessment, think of a study of a technical system like a nuclear
power plant. The aim is to identify what can go wrong and the most critical risk con-
tributors. This information can later be used to improve the system and reduce risks.
As another example, think of a business owner who is considering some investments.
A risk assessment is conducted in order to enhance the understanding of the risks asso-
ciated with these investments. Based on the assessment, the business owner will have a
better basis for making a decision.
Using the terminology introduced in Chapter 3, the main stages of a risk assessment
are: identification of events (threats/hazards/opportunities and risk sources A’); cause
analysis to investigate how these events can occur; consequence analysis to study the
effects (C’) of these events; characterization of the risk (A’,C’,Q,K); study of alter-
natives and measures to modify risk; and evaluation of the risk. A risk assessment com-
prises two main parts: risk analysis and risk evaluation. The risk evaluation is typically
conducted by risk analysts. The risk assessment provides input to the decision-makers,
who will conduct a management review and judgment and make a decision.
This chapter also discusses how to characterize a good risk assessment. We point
to the importance of the identification stage and that the process of understanding risk
is more about knowledge and lack of knowledge than producing probability numbers,
in line with the insights provided in previous chapters. A risk assessment which does
not include processes scrutinizing the knowledge, and in particular the assumptions on
which the quantitative analyses are based, is not a high-quality risk assessment. Another
critical point is the emphasis on vulnerabilities and resilience.
Models, including probability models, play an important role in risk assessments.
Examples are provided on how to develop and use such models in risk assessment
contexts.

LEARNING OBJECTIVES

After studying this chapter, you will be able to explain

• the basic ideas, purpose and stages of a risk assessment
• the difference between risk assessment, risk analysis and risk evaluation
• what characterizes a high-quality risk assessment
• the concepts of validity and reliability of a risk assessment
• how to treat uncertainties in risk assessments
• the importance of analyzing vulnerabilities and resilience to properly assess
risk
• how to relate the risk characterizations of the risk assessments to knowledge
• how to address potential surprises
• how models are used in risk assessments
• the concept of model uncertainty in risk assessments

4.1 BASIC IDEAS

Figure 4.1 summarizes the main features of a risk assessment, covering events, causes
and consequences. Risk assessment is a tool used to understand the risk related to an
activity, evaluate the significance of the risk and rank or rate relevant options based on
established criteria. A simple example will be used to illustrate the basic ideas.
Consider the following example: Tommy is a graduate student in a very competi-
tive academic program. His professor has invited him to present a project to his class,
professors in his department and industry professionals. Tommy is honored by this
invitation but is hesitant to accept because he does not like to make oral presentations.
He asks the professor if he could think about it for a couple of days. He would like to
perform a risk assessment for the oral presentation to provide a basis for deciding
whether to take on this challenge.
Tommy first reflects on what it is that is important for him in relation to this talk.
What are the values of concern for him? He quickly concludes that he would like to
perform well and get positive feedback from the audience. Then he starts to think about
the usefulness of doing an exercise like this to learn and become a better speaker. Maybe the
presentation will not be ‘perfect’, but he could still benefit from it in a longer-term per-
spective. Based on these reflections, he decides not to set an overly ambitious goal for
the talk, such as ‘a highly successful presentation’ or ‘a brilliant talk’.
Next Tommy performs a high-level simulation of the presentation, to clarify what
it will entail. Using risk terminology, the presentation is the activity to be assessed.
From this simulation, he starts to think about what can go wrong – what events could
occur that could negatively affect the presentation. He writes down a number of such
events, including

1 He is so nervous that it affects the talk, and the audience is more concerned about
his state than what he is saying

[Figure: Causes → Events → Consequences]

Risk assessment: Process to identify risk sources, threats, hazards and opportunities;
understanding how these can occur and what their consequences can be; representing and
expressing uncertainties and risk; characterizing the risk; and determining the significance
of the risk and ranking alternatives using relevant criteria

FIGURE 4.1 The main features of a risk assessment, with its focus on events, causes and
consequences

2 He forgets a key argument for a statement
3 He is not able to meaningfully answer a question from the audience

He acknowledges that he may have overlooked some important events in this list
and considers using a systematic method to reveal others. He knows from the risk
assessment course he has taken that there are many methods that can be used to iden-
tify events that could negatively affect his presentation. He decides to apply the struc-
tured what-if technique (SWIFT).
This method is based on using some guidewords, which need to be tailor made
to the context considered. Some of the common guidewords are: time, place, amount,
things, persons, ideas and process. Using these guidewords, and phrases like ‘What if’
and ‘How could’, Tommy is led to questions like

• What if the presentation is too long (or too short)? (wrong time)
• How could the presentation be too long (or too short)? (wrong time)
• What if the presentation is considered boring? (wrong ideas)
• How could the presentation be considered boring? (wrong ideas)
• What if I say something that offends one of the professors? (wrong ideas or persons)
• How could I offend one of the professors? (wrong ideas or persons)

Based on these questions, some undesirable events can be formulated. The process can
also be used to generate positive events. For example, how could the presentation be
considered interesting and exciting?
Tommy reviews these events and makes a judgment about the severity of the conse-
quences. The severity is considerably higher for some events than others, but it is dif-
ficult to make a judgment because the effects of the events could range from something
rather minor to a quite serious consequence. Think, for example, about the event where
Tommy is not able to provide a meaningful answer to a question. Depending on the
type of question and what the response is, we can picture a spectrum of effects, some
more serious than others. Tommy focuses on the events he thinks would be embarrassing,
representing the largest or most severe consequences. He performs similar delineations for
other events on his list, ensuring that for all events considered, the consequences are
judged as large/severe.
Then Tommy considers the likelihood of these events occurring. He uses score cat-
egories: highly likely (higher than 50%), probable (10–50%), low probability (1–10%)
and unlikely (less than 1%). Table 4.1 presents the results for some selected events, with
associated strength of knowledge judgments. His judgments are based on the assump-
tions that he will carefully plan the talk and practice a lot. The assessment shows that
he is most concerned about events 1, 5 and 6. For event 1, his judgment is that the ner-
vousness should not be that strong if he prepares the talk very well. There is, however,
an element of uncertainty. Sometimes he becomes very nervous for reasons he does not
really understand, and he therefore assigns a weak score on the strength of knowledge
for the low likelihood (unlikely) judgment. For event 5, he considers it rather likely
that the talk will be perceived as very boring; he is confident about it, as he plans to
use notes, and he is not normally able to improvise. For event 6, the problem is that
one of the department professors is said to be very sensitive to what is communicated
about some risk science topics, but that professor seldom attends presentations of this
type. Thus, it is unlikely that this will be a problem, but the knowledge supporting this

TABLE 4.1 Risk characterization for Tommy presentation example


Events Severity Likelihood Strength of Comments Risk
knowledge
1 He is so nervous that High Low Weak He will practice High
it affects the talk, and probability– a lot
the audience is more Unlikely
concerned about his
state than what he is
saying
2 He forgets a key High Low Medium He will practice Moderate
argument for a statement probability strong a lot
3 He is not able to High Low Medium He will practice Moderate
meaningfully answer a probability strong a lot
simple question from the
audience
4 The presentation is far High Unlikely Strong He will carefully Low
too short or long plan the talk
and practice
a lot
5 The presentation is Moderate– Probable– Strong He reads from Moderate–
considered very boring High Highly likely a manuscript High
6 A professor is offended High Unlikely Weak One of the Moderate–
or provoked by what he professors is High
says very sensitive
to what is
communicated
on specific risk
science issues
66 RISK ASSESSMENT

judgment is considered weak, as his talk may trigger this professor’s interest!– it is dif-
ficult to know.
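Tommy’s likelihood score categories amount to a simple mapping from an assigned probability to a label. As a minimal sketch (the function below is our illustration, not from the book; the thresholds match the categories in the text):

```python
def likelihood_category(p: float) -> str:
    """Map an assigned probability to one of Tommy's likelihood score categories."""
    if p > 0.50:
        return "highly likely"    # higher than 50%
    if p >= 0.10:
        return "probable"         # 10-50%
    if p >= 0.01:
        return "low probability"  # 1-10%
    return "unlikely"             # less than 1%

print(likelihood_category(0.30))   # probable
print(likelihood_category(0.005))  # unlikely
```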
The analysis shows the key features of what is commonly referred to as
a coarse or preliminary risk assessment. Often the assessment is divided into sections
covering different parts of the activity or subject studied. In this example, Tommy could
distinguish between the opening part of the talk, the main part, the conclusions and the
discussion part and then perform the analysis for each of these four stages.
Based on this risk assessment, Tommy has obtained an improved basis for
making a judgment about whether to accept the invitation to give this presentation.
He reflects on the results of Table 4.1 – is the risk acceptable? The risk assessment
does not provide a clear answer to that question, but it provides some input (refer
to the discussion in Section 4.2.3). And using the assessment, Tommy has an instrument
for systematically examining measures that he could implement to reduce the risks,
thereby giving him some control over these aspects. For example, he could simply
accept the risk related to event 5, but he could also try to do something about it –
practice so much that he could talk without notes. Then this particular event would
no longer pose a severe risk. However, such a measure could affect some of the other
risks; presenting without notes could lead to him being more nervous, thereby affecting
event 1.
For event 6, Tommy made a simple event tree model; see Figure 4.2. This type
of tree is commonly used in risk assessments to analyze what can happen given the
occurrence of an initiating event or activity – here, that Tommy conducts his
talk. Then two branches are introduced, responding to two questions. Depending on
whether the answer is yes or no, three scenarios and consequences are derived. The
worst case is that the sensitive professor attends the meeting and is provoked. If this
professor attends the meeting but is not provoked, the consequences are not so serious
for Tommy, but he would still perceive the situation as stressful because there would
be uncertainty about whether this professor will be provoked. Tommy uses this model
to structure his thoughts in relation to this event. The tree allows for probability cal-
culations, but Tommy chooses not to conduct a detailed precise probabilistic analysis.
Based on the rather weak knowledge he has about this professor, it is difficult to assign
specific probabilities. However, he still uses probabilistic reasoning, as explained in
the following.

[Event tree: The initiating event (Tommy conducts his talk) leads to the branch question
‘Professor attends meeting?’. If yes, the next branch question is ‘Professor is provoked?’:
yes gives serious consequences; no gives consequences that are not serious, but the
situation is stressful. If the professor does not attend, the result is a ‘normal situation’.]

FIGURE 4.2 An event tree model of situations involving the sensitive professor


Let B1 denote the event that the sensitive professor attends the meeting, B2 that the
professor is provoked and C1 that the consequences become severe. Then probability
calculus shows us that

P(C1) = P(B1) P(B2 | B1), (4.1)

that is, the probability of C1 is the product of the probability of B1 and the conditional
probability that the professor is provoked given that he is attending the meeting. It fol-
lows that, for example, if both P(B1) and P(B2 | B1) are less than 10%, the probability of
C1 will be less than 1%. This number does not reassure Tommy, as he knows he has
weak knowledge supporting such an analysis. Rather, he seeks to improve his knowledge
about what it is that provokes this professor. Tommy talks to several of his fellow students,
and it does not take very long to identify one major issue: This professor gets frustrated
when hearing students and others referring to risk as expected values – probabilities
multiplied by losses, which for this professor is a completely inadequate way of describ-
ing risk in most cases. Tommy is in line with this thinking, although he is surprised that
someone can be provoked just because of that. With these new insights, the supporting
knowledge of event 6 in Table 4.1 is adjusted to medium strong, still acknowledging that
there could be other sensitive topics for this professor. The risk is judged as moderately
high. Clearly, this professor can be considered a risk source for Tommy’s talk.
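Formula (4.1) can be illustrated numerically with the hypothetical 10% figures mentioned above (these values are for illustration only; Tommy deliberately avoids assigning precise probabilities):

```python
# P(C1) = P(B1) * P(B2 | B1), as in formula (4.1)
p_b1 = 0.10           # professor attends the meeting (B1)
p_b2_given_b1 = 0.10  # professor is provoked, given attendance (B2 | B1)

p_c1 = p_b1 * p_b2_given_b1
print(round(p_c1, 4))  # 0.01, i.e., the 'less than 1%' bound in the text
```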
Tommy then reflects on vulnerabilities and resilience. There could potentially be a
poor response in case of some type of disturbance, such as the attendance of the sensitive
professor or a difficult question. Tommy acknowledges that such disturbances represent
a main risk contributor. He makes some plans for how he can practice to reduce the vul-
nerabilities and strengthen the resilience. If these plans can be properly implemented, the
risks associated with events 3 and 6 would become small. A main element in this plan is
to simulate a talk with his friends and practice answering surprising types of questions.
Despite all of the analyses conducted, Tommy is concerned that he has overlooked
something of importance. He decides to also make an assessment specifically addressing
potential surprises. He remembers from the curriculum on risk assessment the method
referred to as red teaming. It serves as a ‘devil’s advocate’ by seeking to produce alter-
native interpretations and challenge established ideas and thinking. A friend of Tommy’s,
Roger, is invited to represent the other perspective. Roger addresses many issues, for
example, related to the sensitive professor. He questions whether this professor is as
terrible as the ‘rumors’ indicate and whether there are examples showing that the professor
is in fact generally a very nice and friendly person. Is there really truth in these rumors?
Roger specifically challenges Tommy on the events for which a low risk has been assigned.
He argues that the risk of having too long a talk is significant, as Tommy has planned
to use quite a number of slides, and Roger claims that Tommy often uses far too much
time on some detailed topics.
This analysis has a focus on what can go wrong. Tommy also makes an assessment
reversing the focus, for example, considering the event that the presentation is per-
ceived as very interesting and exciting. His initial judgment is that this event is unlikely,
with a strong strength of knowledge basis, unless he makes some fundamental changes
in the plans. He needs to refrain from using a manuscript or notes and also practice a
lot to be relaxed and be an engaging speaker. Again, this may influence the assessment
associated with other events. We see how the assessment can be used to systematize the
available knowledge and judgments and help Tommy make the best decision for him-
self under the current circumstances.
As mentioned, the aim of a risk assessment is to improve the understanding of risk
related to the activity considered, here the talk, and use this understanding to evaluate
the significance of the risk and rank alternatives with respect to risk. A risk assessment
can be seen as the sum of a risk analysis and a risk evaluation; we write

Risk assessment = Risk analysis + Risk evaluation

The risk analysis covers the first part of the risk assessment, concerned with understand-
ing risk. It covers the identification of risk sources, threats, hazards and opportunities;
understanding how these can occur and what their consequences can be; representing
and expressing uncertainties and risk; and characterizing risk. The previous example is
mainly a risk analysis in this sense.
The risk evaluation is closely linked to judgments about the acceptability of the
risk and the following risk treatment. However, the risk evaluation is conducted by risk
analysts, and normally they are not the decision-makers. In the previous case, the ana-
lyst is also the decision-maker, which makes the difference between risk evaluation and
the risk acceptability and treatment somewhat blurred here. In other cases, it is criti-
cal to separate the professional risk analysts’ risk evaluation and the decision-makers’
evaluation and decisions. Think, for example, about a city which contracts a consultant
to conduct a risk assessment for its many activities and functions. The analysts perform
a risk evaluation to compare the risk levels in this city with those of other cities and with
what are generally considered high or low levels for the types of risks considered. The
analysts do not, however, conclude which risks are acceptable or not acceptable – that is
for the bureaucrats and politicians to decide. We will return to this issue later; see Section 4.2.
Note that the term risk analysis is also sometimes used in a broader sense, as the
totality of risk assessment, risk characterization, risk communication, risk manage-
ment, risk governance and policy relating to risk in the context of risks that are a
concern for individuals, public- and private-sector organizations and society at a local,
regional, national or global level (SRA 2015); see also Appendices A and B. It will be
clear from the context what the proper interpretation is.

REFLECTION
Is it important for Tommy to be strict about separating his professional risk evalua-
tion from the risk acceptability and treatment (handling)?
Yes: in this way, Tommy is able to separate his professional judgment about risk
from other concerns, for example, perceptional factors like fear and dread. The
issue will be discussed in more detail in Chapter 6.

4.1.1 Different types of risk assessments


There are many ways of categorizing risk assessments. One common classification is
to distinguish between quantitative, qualitative and semi-quantitative risk assessments.
In quantitative assessments, risk is quantified using probabilities and expected values.
However, as we thoroughly discussed in Chapter 3, risk quantification should always
be supplemented with qualitative strength of knowledge judgments, leading to a
semi-quantitative assessment. A full risk characterization (A’,C’,Q,K) is by definition
semi-quantitative or qualitative. In a qualitative risk assessment, risk is expressed
qualitatively, without numbers. Tommy’s assessment is mainly qualitative. The likelihood
judgments are based on imprecise probabilities, and hence the assessment can be
interpreted as semi-quantitative.
Another way of classifying risk assessment is to distinguish between data-driven
risk assessment methods and model-based methods. The former category uses prob-
ability models, but the main element is data and statistics. The model-based methods
are used when few data are available. Two examples will be presented in the following
to illustrate the approaches.

A data-driven risk assessment: is smoking dangerous (risky)?


Today, there is broad agreement in society and among scientists that smoking is risky;
however, it was only a few decades ago that the statement that smoking is dangerous
was very much contested. In 1960, a survey by the American Cancer Society found
that not more than a third of all US doctors agreed that cigarette smoking was to be
considered “a major cause of lung cancer” (Proctor 2011). As late as 2011, research
work conducted by the International Tobacco Control Policy Evaluation Project in The
Netherlands showed that only 61% of Dutch adults agreed that cigarette smoke endan-
gered non-smokers (Proctor 2011; ITC 2011).
The main sciences dealing with this issue are the medical and health sciences. Risk
science and statistics have supporting roles, providing knowledge on what it means
that smoking is dangerous or risky, that smoking causes lung cancer and how risk
assessments should be conducted to conclude on such questions, taking into account
all types of uncertainties. Risk science and statistics provide guidance on how to bal-
ance the two main concerns: the need to show confidence by drawing some clear
conclusions (expressing that smoking is dangerous) and to be humble by reflecting
uncertainties.
Standard risk assessment frameworks are used for these purposes, established by
statistics and risk science. For example, a probability model may be introduced based
on frequentist probabilities, expressing proportions of persons belonging to specific
populations (for example, men of a specific age group) who get lung cancer. By com-
paring the probability estimates for non-smokers and for smokers, conclusions can
be made. For example, let p1 be the frequentist probability that an arbitrarily selected
person in this population who is smoking (more than x number of cigarettes per day)
gets lung cancer within a specific period of time, and let p2 be the corresponding fre-
quentist probability for a person who is not smoking. Based on random samples of
persons within this population, estimates (p1)* and (p2)* are derived for p1 and p2, and
comparisons can be made. Following statistical reasoning, it is concluded that smoking
increases the cancer risk (that p1 is greater than p2) if (p1)* is sufficiently larger than (p2)*.
What is sufficiently larger depends on the ‘level’ of the test. If the level is 1%, it means
that if this type of test is performed over and over again, then in only 1 out of 100 cases
on average, it is erroneously concluded that smoking increases the cancer risk; that is, it
is concluded that p1 > p2, when in fact p1 = p2. We refer to textbooks in statistics dealing
with hypothesis testing.
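The reasoning above can be sketched with a standard two-proportion test. The sample numbers below are invented for illustration, and the pooled normal approximation stands in for the exact procedures found in statistics textbooks:

```python
import math

def one_sided_two_proportion_test(x1, n1, x2, n2):
    """Test H0: p1 = p2 against H1: p1 > p2 using a pooled z-statistic."""
    p1_hat, p2_hat = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1_hat - p2_hat) / se
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # P(Z >= z)
    return z, p_value

# Invented data: 90 cancer cases among 1,000 sampled smokers
# versus 10 among 1,000 sampled non-smokers
z, p_value = one_sided_two_proportion_test(90, 1000, 10, 1000)
print(p_value < 0.01)  # True: reject H0 at the 1% level, concluding p1 > p2
```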
More refined analyses can be carried out introducing parameters of the probability
models, representing, for example, the number of cigarettes per day and the duration
of smoking; see Flanders et al. (2003) and Yamaguchi et al. (2000).
The statistical analysis may demonstrate that there is a correlation between two
factors (here smoking and lung cancer), but that does not prove causality. A commonly
referenced example is related to ice cream sales in a big city. The sales numbers cor-
relate with the drowning rate in the city swimming pools, but this does not mean that
there is a causal link between the two factors. The heat or temperature can explain
the correlation. The temperature (heat) is an example of an unseen or hidden vari-
able, also called a confounding variable. In general, we can say that for B to cause A
(smoking to cause lung cancer), at a minimum, B must precede A, the two must covary
(vary together) and there must be no competing explanation that can better explain the
correlation between A and B. See Problem 4.11 for further discussion of the causality
concept.
Another common framework for the statistical analysis is the Bayesian one, as
briefly discussed in Section 3.3.2, in which epistemic uncertainties are represented by
knowledge-based or subjective probabilities expressing degrees of belief. When new
evidence becomes available, the probabilities are updated using Bayes’ formula. See
Problem 4.12. A key quantity computed in this setup is the change in the probability
that a person will get lung cancer given some new information about this person’s
health condition or other issues.
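Such an update can be sketched as follows. The prior and the two likelihoods below are invented numbers for illustration, not estimates from the studies cited:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' formula: P(H | E) = P(E | H) P(H) / P(E),
    with P(E) expanded by the law of total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: prior probability 0.02 of developing lung cancer;
# the new health information is ten times as likely under H as under not-H
posterior = bayes_update(0.02, 0.80, 0.08)
print(round(posterior, 3))  # 0.169: the probability increases markedly
```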

Model-based risk assessment: space exploration (partly based on Aven 2020d)


Consider the problem of assessing the risk for a spacecraft with a specific mission. To
be concrete, think about the Apollo or Shuttle projects and plans for sending astronauts
to Mars. When preparing for such flights, risk considerations play an important role.
Risk science offers guidance on how to think in relation to risk and how to best assess
the various risks. The problems are fundamentally different from those discussed previ-
ously for smoking, as relevant data and statistics are not available. Alternative analysis
approaches and methods are needed. Basically, as mentioned previously, risk science
offers three types of perspectives: quantitative, qualitative and a mixture (semi-quanti-
tative), all based on models to represent the system and related processes. Models are
needed, as experience in the form of observations of the performance of the spacecraft
is not available in the planning phase.

In the Apollo program, probabilistic risk assessment (PRA) was used. This type of
risk assessment is also referred to as quantitative risk assessment (QRA). It is based on
answering the following three questions (the triplet of Kaplan and Garrick 1981):

• What can happen (i.e., what can go wrong)?
• If it does happen, what are the consequences?
• How likely is it that these events/scenarios will occur?

Scenarios are developed using event trees of the type shown in Figure 4.2. Normally
the number of branches is considerably larger than two. For example, to analyze poten-
tial leakage from a liquid tank in a spacecraft, the tree could cover branches reflecting:
Leak not detected? Leak not isolated? Damage to flight critical avionics? Damage to
scientific equipment? Depending on the branch questions, different scenarios are devel-
oped. For each of these branches, normally a fault tree analysis is conducted aimed at
identifying what combinations of failures could lead to this particular branch event
occurring. Fault trees are discussed in Problem 4.6. Then, assigning probabilities to
each failure event and branch, calculations of the probability for specific types of con-
sequences can be carried out extending the computation shown in formula (4.1). For
the Apollo project, focus was placed on the event of having success in landing a man
on the moon and returning him safely to earth. PRAs are discussed in more detail in
Chapter 5.
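The scenario computation described above can be sketched as follows. The leak example and branch questions come from the text; the probabilities are invented, and we assume, for simplicity, that the branch outcomes are independent:

```python
from itertools import product

p_leak = 1e-3  # initiating event: leakage from the liquid tank (invented value)
branches = {   # probability of a 'yes' answer at each branch (invented values)
    "leak not detected": 0.1,
    "leak not isolated": 0.2,
    "damage to flight critical avionics": 0.05,
}

# Each yes/no path through the branches is a scenario; its probability is the
# product along the path, extending the computation in formula (4.1)
scenarios = {}
for outcome in product([True, False], repeat=len(branches)):
    p = p_leak
    for (name, p_yes), yes in zip(branches.items(), outcome):
        p *= p_yes if yes else 1 - p_yes
    scenarios[outcome] = p

worst = (True, True, True)  # all branch questions answered 'yes'
print(f"worst-case scenario probability: {scenarios[worst]:.1e}")  # 1.0e-06
```

The scenario probabilities necessarily sum to the initiating-event probability, which provides a simple consistency check on the tree.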

It is interesting to note that the use of PRA in the Apollo program was not contin-
ued. The Shuttle was designed without PRA; instead, qualitative approaches like failure
mode and effects analysis (FMEA) were used. In an FMEA, for each component of the
system studied, we investigate what happens if the component fails. A probability
judgment is made, and a classification is obtained by combining effect and likelihood
categories; see Problems 4.4 and 4.5.
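A minimal FMEA-style classification might look as follows; the component names, category scales and scoring rule are all invented for illustration:

```python
EFFECT = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3}

def criticality(effect: str, likelihood: str) -> str:
    """Combine effect and likelihood categories into a criticality class."""
    score = EFFECT[effect] * LIKELIHOOD[likelihood]
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# For each component, ask: what happens if this component fails?
failure_modes = [
    ("valve fails to close",       "critical",     "likely"),
    ("sensor gives false reading", "marginal",     "possible"),
    ("tank ruptures",              "catastrophic", "unlikely"),
]
for component, effect, likelihood in failure_modes:
    print(f"{component}: {criticality(effect, likelihood)}")
```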

In relation to the Apollo PRA, considerable focus was on the numbers calculated.
A probability of success in landing an astronaut on the moon and returning him safely
to earth of below 5% was indicated (Jones 2019; Bell and Esch 2018). For the NASA
management, this number was considered dramatic and harmful for the project: It
would be impossible to communicate to society a risk of that magnitude. The result
was that, in relation to the Shuttle project, they later stayed away from PRA as a design
tool. The judgment was that PRA overestimated the real risk. The high judged risk
numbers in relation to Apollo did, however, mean that risk in that project was acknowl-
edged as a serious problem, and measures were implemented to make improvements in
all aspects of the project and design.
NASA management believed and testified to Congress that the Shuttle was very
safe, referring to a 1 in 100,000 probability of an accident (Jones 2019). The justifica-
tion for this number was, however, weak. NASA engineers argued for 1 in 100 and,
following the loss of Challenger and more detailed assessments, the latter number was
used.
Risk science at that time provided guidance on how to conduct the PRA. These
analyses are quantitative, with probabilities computed for different types of failure
events and effects using event trees and fault trees. A value of the PRA as important as
the quantification is the improved understanding of the system and its vulnerabilities.
The systematic processes of a PRA require that the analysts study the interactions of
subsystems and components and reveal common-cause failures, that is, multiple com-
ponent failures due to a common source.
This case demonstrates the challenges of using numbers to characterize risk. At the
time of Apollo, risk analysis was very much about PRA and quantification of risk using
probabilities. Although the importance of gaining system insights was highlighted as
mentioned previously, the numbers were considered the main product of the analysis,
estimating the real risk level. The main goal of the risk analysis was to accurately esti-
mate risk. If a failure probability of 0.95 was computed, it was interpreted as express-
ing the frequency of failures occurring when making a thought construction of many
similar systems. Clearly, if such a frequency represented the true failure fraction, the
project would not have been able to continue – it would have been too risky. Risk science explains, however, that this number does not express the truth, what will happen
in the future, but is a judgment based on modeling and analysis, which could be sup-
ported by more or less strong knowledge. The actual frequency could deviate strongly
from the one estimated or predicted. In this case, the knowledge basis was obviously
weak, and the numbers should therefore not be given much weight. The fact that the
analysis was also based on many ‘conservative assumptions’, leading to higher risk
numbers than the ‘best estimates’, provided additional arguments for not founding the
risk management only on the numbers.
At the time of these projects, a main thesis of risk science was that risk can
be adequately described by probability numbers. More precisely, risk could be
well characterized by the risk triplet, as defined by Kaplan and Garrick (1981)
and referred to previously: events/scenarios, their consequences and associated
probability. This perspective on risk is also commonly used today, but new knowledge has been derived since the 1980s. According to contemporary risk science, it is essential that the risk characterizations also cover the knowledge supporting these probabilities and judgments of the strength of this knowledge, as thoroughly discussed in Chapter 3. Of special importance here is the need to examine the assumptions that the probabilities are based on: such assumptions could conceal aspects of risk and uncertainties, and examining them can reveal potential surprises relative to the knowledge that the assessment is based on (see the case in Bjerga and Aven 2016). The main aim of the risk assessment is not to accurately estimate the 'true' risk but to understand and characterize the risk.
NASA (Jones 2019) makes some interesting statements concerning the importance
of risk analysis:

Shuttle was designed without using risk analysis, under the assumption that good engineering would make it very safe. This approach led to an unnecessarily risky design, which directly led to the Shuttle tragedies. Although the Challenger disaster was directly due to a mistaken launch decision, it might have been avoided by a safer design. The ultimate cause of the Shuttle tragedies was the Apollo era decision to abandon risk analysis. . . . The amazingly favorable safety record of Apollo led to overconfidence, ignoring risk, and inevitable disasters in Shuttle. . . . The Shuttle was cancelled after the space station was completed because of its high risk. NASA's latest Apollo-like designs directly reverse the risky choices of Shuttle. The crew capsule with heat shield is placed above the rockets and a launch abort system will be provided.
(Jones 2019)

According to NASA, the experience with the Apollo and Shuttle projects suggests two
observations:

First, the most important thing is the organization’s attention to risk. To achieve
high reliability and safety, risk must always be a prime concern. Second, the risk
to safety must be considered and minimized as far as possible at every step of a
program, through mission planning, systems design, testing, and operations.
(Jones 2019)

The message is clearly that what is needed is proper risk management and a good safety
and risk culture. The investigations following the Shuttle disasters found a bad safety
culture, leading to poor decisions. Risk assessments, like PRAs, are useful tools, but alone they will not help much if the culture and the leaders do not encourage scrutiny and follow-up of all types of issues to enhance reliability and safety.
Jones (2019) also gives a simple, illustrative example, showing the importance of proper risk assessment and management. A mission is often thought of as a chain of links, and success is believed to be ensured by giving priority to the weakest links, while improving the others is considered wasted effort. However, such reasoning could be disastrous, as the overall probability of failure is basically determined by the sum of all the linked failure probabilities (see Problem 4.13). With many links, the overall failure probability could be high, even if each of the linked failure probabilities is small. Risk management needs to take this into account when seeking to control and reduce risk. Risk analysis and risk science provide this type of knowledge. They specifically help decision-makers use organizational resources in the best possible way. If a big risk for one link is difficult and expensive to reduce, the same total risk effect could be achieved by improving a set of other links.
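The chain-of-links point can be illustrated numerically. The sketch below uses hypothetical link failure probabilities (our own illustration, not numbers from Jones 2019) and assumes independent links:

```python
# Twenty hypothetical mission 'links': one weak link and nineteen stronger ones.
p_links = [0.02] + [0.005] * 19

# Exact mission failure probability under independence:
# the mission fails unless every link succeeds.
p_success = 1.0
for p in p_links:
    p_success *= 1.0 - p
p_fail = 1.0 - p_success

# For small probabilities, this is close to the sum of the link probabilities.
p_fail_approx = sum(p_links)

print(round(p_fail, 3))         # approximately 0.109
print(round(p_fail_approx, 3))  # approximately 0.115
```

Note that the nineteen 'stronger' links together contribute 0.095 to the sum, far more than the single weakest link's 0.02: improving a set of other links can reduce the total risk more than attacking the weakest link alone.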

4.2 RISK ASSESSMENT STAGES

Figure 4.3 shows the main stages of a risk assessment, covering the planning of the risk assessment (establishing the context), the risk analysis and risk evaluation, and the use of the risk assessment.

4.2.1 The planning of the risk assessment – establish the context


A risk assessment needs to be properly planned. The key is to clarify the problem/issue and define the objectives of the assessment. In general, the main aim
of risk assessment is to improve the understanding of risk concerning risk sources,
hazards/threats/opportunities and related consequences, leading to a better basis for answering questions like: What can go wrong? What are the main risk contributors? What is the effect of implementing a specific measure? This understanding is used to generate measures to modify (commonly reduce) risk and to support decision-making on the choice of arrangements and measures. Risk assessments also provide input to other types of analysis, such as cost-benefit studies, as well as to judgments about what are acceptable and unacceptable risks.

FIGURE 4.3 The main stages of a risk assessment process:
- Planning – establish the context: problem/issue definition; clarify who the stakeholders are; set study objectives; establish relevant principles and approach; data and information gathering
- Risk analysis: identifying events (hazards/threats/opportunities); cause analysis; consequence analysis; risk characterization; studying and comparing alternatives and measures with respect to risk
- Risk evaluation: judging the significance of the risks; ranking alternatives and measures with respect to risk
- Use of risk assessment: use of the risk assessment in cost-benefit analysis and other types of studies; management review and judgment; decision
Let us return to the smoking and spacecraft examples of Section 4.1.1. For the
smoking example, the issue is about demonstrating that smoking is dangerous or risky.
Earlier studies, observations and experience have clearly indicated that this is the case,
but further studies are sought to strengthen the insights on the issue, for example, by
showing how the risk depends on the level of smoking. Hypotheses are formulated and
a statistical approach is chosen to test these hypotheses.
For the spacecraft example, the issue and related objectives are not so clear, as indicated by the discussion in Section 4.1.1. Much of the focus was on accurately estimating a success probability, but it turned out that this type of objective was problematic, and a change was made: the aim of the assessment became to increase the understanding of the risks in order to support the design and operation of the spacecraft. Different principles and approaches were considered and used for the assessment. For Apollo, PRA formed a pillar, whereas qualitative approaches were used for the Shuttle. The NASA example also shows the importance of clarifying who the relevant stakeholders are. It makes a big difference whether risk numbers are used in-house as an instrument for identifying critical risk contributors or are to be presented to the US Congress to demonstrate that the spacecraft is safe.
Data and information gathering is partly conducted in the planning phase and
partly as an integrated part of the risk analysis. It is not always possible to see what
data and information are needed at the early stages of the risk assessment process.
Risk assessments are conducted in different phases of a project. The differences in aims, scope and goals would typically be considerable. Think, for example, about a risk assessment in the planning or design phase of a technical system, like a spacecraft, compared to a risk assessment in the operational phase. When designing a system, there is normally considerable flexibility, and it is possible to choose among many different arrangements and measures. In the operational phase, however, the main system elements are fixed, and the possible changes relate mainly to operational, human and organizational factors. The risk assessment methods vary accordingly. Consider again the NASA case. The risk assessments discussed in Section 4.1.1 were all conducted for the design phase. As an example of a risk assessment conducted in the operational phase, think about a risk assessment for the operations of the International Space Station, highlighting workplace risks as a result of chemicals, gases, products, radiation and other types of exposure. Here actual testing and measurements can be carried out to support the assessments, leading to an improved understanding of what types of risk the astronauts face and the severity of these risks. While the differences in scope between risk assessments can be very large, the basic ideas are the same.

REFLECTION
When the stakes are high – there is a potential for severe consequences – should
we always try to apply a model-based analysis of the form PRA (QRA)?
No, not in general. If the uncertainties about the system or activity studied are
very large, such an approach would not be meaningful, as realistic models can-
not be derived. In this case, a crude or preliminary risk assessment is more
appropriate. However, if the uncertainties are not too large, a model-based anal-
ysis is attractive, as it allows analysts to study effects of changes and identify
what the most important risk contributors are, even when no data are available
for the activity considered. What type of approach or method to use also depends, of course, on the amount of resources one is able and willing to spend on the assessment.

4.2.2 The risk assessment process


The risk assessment process covers two main stages, the risk analysis and the risk evaluation, as shown in Figure 4.3. We first consider the risk analysis. The bow-tie is commonly used to illustrate the key features of the risk analysis; see Figure 4.4.
The first step of the risk analysis is the identification of events A' (hazards, threats, opportunities). Recall the list produced by Tommy in the talk example of Section 4.1. Many analysts consider this step the most important one, as, if we overlook an event, it cannot be further assessed with respect to risk. As mentioned in Section 4.1, many methods exist for this purpose. A common characteristic of all the methods is that they are founded on a type of structured brainstorming in which one uses guidewords, checklists and so on, adapted to the issue or problem being studied; see also Section 5.7.
In the cause analysis, we study how the events A' can occur: what underlying events, risk sources and risk influencing factors may result in A'. Again returning to the Tommy talk example and starting from an event A' equal to 'poor response to a question', a cause analysis could point to factors such as a weak knowledge basis, stress and lack of skill in speaking. The barriers, as shown in Figure 4.4, may hamper the influence of these factors and potentially prevent the occurrence of event A'. Training and rehearsals of the talk are examples of such barriers. The quality of the barriers in preventing A' is a key element of the cause analysis.

FIGURE 4.4 A schematic example of a bow-tie used in a risk analysis context: risk sources and risk influencing factors may, via barriers, lead to an event A', which, via further barriers, may lead to consequences C'
To conduct a quantitative cause analysis, the common approach is to make a model of the event A' showing the relationship between A' and more underlying factors. A simple example is shown in Figure 4.5 for the Tommy talk example. According to this model, the event A' occurs if at least one of the (sub)events B1, B2 or B3 occurs.
Tommy assigns probabilities equal to 0.25, 0.10 and 0.01 for the three events B1, B2 and B3, respectively. Summing these three input probabilities gives approximately 0.36, which is an upper bound for the probability of event A'. Alternatively, we can compute P(A') by the formula

P(A') = 1 – P(Not A') = 1 – [(1 – P(B1))(1 – P(B2))(1 – P(B3))] = 1 – (0.75 · 0.90 · 0.99) = 0.33.

This formula is based on the assumption that the three events B1, B2 and B3 are independent. The formula expresses that, for A' not to occur, B1 must not occur, B2 must not occur and B3 must not occur.
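The computation can be reproduced in a few lines of code (assuming, as in the text, that B1, B2 and B3 are independent):

```python
# Tommy's assigned probabilities for the subevents B1, B2 and B3.
p_b1, p_b2, p_b3 = 0.25, 0.10, 0.01

# Exact calculation under independence:
# A' occurs unless none of the three subevents occurs.
p_a = 1 - (1 - p_b1) * (1 - p_b2) * (1 - p_b3)

# The plain sum of the input probabilities gives an upper bound for P(A').
p_a_bound = p_b1 + p_b2 + p_b3

print(round(p_a, 3))        # 0.332
print(round(p_a_bound, 2))  # 0.36
```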
The probabilistic analysis also needs to clarify the knowledge supporting these
judgments, in particular when it comes to the assumptions made. Using the model,
Tommy can analyze changes as a result of measures introduced, for example, on train-
ing and rehearsals. The model can be further developed by asking what is needed for
each of events B1, B2 or B3 to occur. The method used is similar to a fault tree analysis;
see Problem 4.6. Fault tree analysis is one of the most used methods in PRAs. Another
common method used for this purpose is Bayesian networks; see Section!5.2.
The next stage is the consequence analysis, in which the effects C' of the events A' are studied. The consequences can relate to different values, including lives, the environment or economic assets. Event tree analysis, as illustrated in Figure 4.2, is a common method. The number of stages in the event sequence depends on the number of barriers in the system. These barriers aim at preventing the events from resulting in serious consequences. For each of these barriers, it is common to perform a barrier failure analysis, using, for example, a fault tree in line with the ideas of Figure 4.5.
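As a minimal numerical sketch of how an event tree propagates probabilities through successive barriers, consider the following (the numbers are hypothetical, and the barrier failures are assumed independent given the preceding events):

```python
# Hypothetical event tree: an initiating event followed by two barriers.
p_init = 0.01        # probability of the initiating event
p_b1_fail = 0.1      # first barrier fails, given the initiating event
p_b2_fail = 0.05     # second barrier fails, given the first barrier has failed

# Each end-branch probability is the product of the (conditional)
# probabilities along the path through the tree.
p_severe = p_init * p_b1_fail * p_b2_fail          # both barriers fail
p_moderate = p_init * p_b1_fail * (1 - p_b2_fail)  # only the first barrier fails
p_minor = p_init * (1 - p_b1_fail)                 # the first barrier holds

# The branch probabilities sum to the initiating event probability.
print(p_severe, p_moderate, p_minor)
```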
FIGURE 4.5 Simple model linking event A' ('poor response to a question') and the subevents B1 ('the knowledge about the issue is too weak'), B2 ('Tommy is too stressed to think clearly') and B3 ('Tommy is too poor in oral speaking to be able to provide an understandable answer')

Studies of system and barrier dependencies are an important part of the analysis. Think about a case where sensitive information is stored in a computer system behind two password-protected security levels. The idea is, of course, that an unauthorized
person shall not gain access to the information even if this person gets through one
layer of protection. However, if the user applies the same password for both security
levels, this extra protection is lost. There is dependency between the two barriers (the
same password). An obvious measure would be not to allow the user to use the same
password for both security levels.
Physicians often refer to the dose-response relationship – it is a type of consequence analysis. Formulas are derived showing the relationship between varying dose levels and the average response. The dose here means, for example, the amount of a drug introduced into the body or the amount of exposure from a risk source (e.g., radiation). Commonly, probability distributions are used showing the average responses as a function of specified dose levels. More refined analysis will also provide probabilities for the response at a fixed dose level, with strength of knowledge judgments supporting these probabilities. We may, for example, assign a probability of 20% that the effect or response will be twice as high as the typical response value. Uncertainty intervals may also be suitable for this purpose.
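As an illustration, a logistic curve is one common way of expressing an average dose-response relationship. The functional form and parameter values below are assumptions for illustration, not taken from any specific study:

```python
import math

def mean_response(dose, d50=10.0, slope=0.5):
    """Hypothetical logistic dose-response curve: average response (on a
    0-1 scale) as a function of dose; d50 is the dose giving a 50% response."""
    return 1.0 / (1.0 + math.exp(-slope * (dose - d50)))

for dose in [0, 5, 10, 20]:
    print(dose, round(mean_response(dose), 3))
```

A full probabilistic treatment would, as noted above, also assign probabilities and strength of knowledge judgments for the response at a fixed dose level, not just the average curve.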
The consequence analysis may capture detailed studies of real-life phenomena, for example, how a fire would spread in a building or how a pandemic affects the economy. Risk scientists support these types of studies by providing suitable risk-based concepts, principles and methods.
For risk characterizations, we refer to Chapter 3. The risk characterization allows for studying differences between alternatives and the effects of changes, for example, potential measures to modify (reduce) risk. It also allows for the identification of important risk contributors. A common approach is to assess the difference in the risk characterization when removing a risk source or event. In this way, an improvement potential is identified; see Problem 4.8 and Section 5.1. Another, similar approach is to adjust the assumptions that the analysis is based on to some extreme level, to reveal the contribution from these assumptions.
It is also common to perform other types of sensitivity analyses on the basis of the risk characterization, to see how sensitive the results are with respect to changes in the input quantities; see Section 5.1.
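The 'remove a risk source and recompute' idea can be sketched as follows, using hypothetical contributor probabilities and assuming the contributors act independently:

```python
# Hypothetical probabilities that each risk source alone triggers the event.
contributors = {"source_A": 0.05, "source_B": 0.02, "source_C": 0.01}

def event_prob(probs):
    # The event occurs if at least one source triggers it (independence assumed).
    p_none = 1.0
    for p in probs.values():
        p_none *= 1.0 - p
    return 1.0 - p_none

base = event_prob(contributors)

# Improvement potential: how much the event probability drops
# when a given risk source is removed altogether.
for name in contributors:
    reduced = {k: v for k, v in contributors.items() if k != name}
    print(name, round(base - event_prob(reduced), 4))
```

The source with the largest drop is the most important contributor in this simple sense; a real assessment would supplement such numbers with strength of knowledge judgments.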
Finally, there is a need to discuss risk evaluation. It covers judgments of the significance of the risks and the ranking of alternatives and measures with respect to risk. The risk evaluation is conducted by risk analysts; decision-makers are typically not involved in the risk evaluation process. When making judgments about the significance of the risk, comparisons are made with criteria or reference values established for the type of analysis. For example, in industry, it is common to formulate some reference values for what are considered high-risk and low-risk numbers. These reference values are commonly referred to as risk tolerability and acceptance criteria and will be thoroughly examined in Chapter 8. These reference levels provide a basis for determining what are significant (high, low, acceptable, unacceptable) risks and what are not. As thoroughly discussed in Chapter 3, proper risk descriptions extend beyond probability-based risk metrics, which means that simple evaluations on the basis of comparisons with defined reference levels need to be used with care. A risk evaluation should not conclude on what risk is acceptable or unacceptable. The risk assessment just provides insights based on a certain perspective – the decision-makers are
informed by this perspective but also need to take into account other aspects, including
the limitations of the risk analysis approach. The NASA example illustrates this. It would
clearly be meaningless to let the Apollo project decision be based on whether the PRA
showed that a specific probability number was met. The political support for the project
needed a much broader and stronger foundation, covering both confidence in the technical
feasibility of the project and an extreme desire to make a mark in history.
Similarly, national and international committees, such as those related to food
safety and health issues, aim at providing risk evaluations on the basis of evidence and
risk analysis without becoming political or including non-scientific issues. They provide
professional judgments of the significance of the risks. However, in practice, this is an
ideal which cannot be fully met, as these committees make conclusions about what is
safe. That cannot be done without value judgments. The evaluation can build on values
of the decision-makers (for example, how safe they require the food to be), but the
processes are often challenging, as they depend on non-trivial judgments of uncertain-
ties and interpretation of knowledge. The result is that the committees often operate in
a ‘no man’s land’ between science and policy and quickly become exposed to critique.
The risk evaluation also addresses ranking of alternatives and measures with respect
to risk. The risk characterization provides the basis for this ranking, together with some
criteria that have been established for how to conduct the ranking. An example would
be the use of a type of cost-benefit analysis to compare options. We will discuss such criteria in Chapter 9. Again, the process is on the borderline between professional assessment and decision-making.

4.2.3 Use of risk assessment


As discussed in the previous section, the risk assessment informs decision-makers; it
does not provide a clear answer on what to do. There is a leap from the risk assessment
to the decision. We refer to this leap as management review and judgment, and it is
defined as the process of summarizing, interpreting and deliberating over the results of
risk and other assessments, as well as other relevant issues (not covered by the assess-
ments), in order to make a decision. The management review and judgment is needed,
as the risk assessment has limitations in capturing all aspects of risk, and there are other

aspects than risk that are important for decision-making, as illustrated in Figure 4.6.

FIGURE 4.6 Illustration of the role of management review and judgment in risk decision-making: the risk assessment feeds into the management review and judgment, which leads to the decision; the management review and judgment also incorporates input from other types of assessments (such as cost-benefit analyses), reflects issues other than risk and reflects that risk assessments have limitations
Referring to the NASA example, the risk assessments (PRA) estimating a success prob-
ability for the moon project were not the only input to the decision to support the pro-
gram. The limitations of the risk assessment in predicting the success rate and reflecting
reality were acknowledged, as was the importance of other issues, like the prestige and
importance of the United States being the first nation on the moon. Different methods
exist for comparing options and evaluating the overall ‘goodness’ of an alternative or
measure, including cost-benefit analysis. We will discuss these in Chapter 9.
Decision-makers do not in general have specific risk assessment competence. How-
ever, they are used to dealing with uncertainties and risk. In the management review
and judgment, the knowledge supporting the findings is examined, in particular key
assumptions made. Provided the risk assessment has been conducted in a professional
way, giving sufficient weight to reporting and communicating the uncertainties and the
knowledge basis, decision-makers have a proper basis for making the appropriate deci-
sion using relevant information and values.

4.3 QUALITY OF RISK ASSESSMENTS

In this section, we discuss what ‘high-quality risk assessment’ is. Is it sufficient to say
that the quality of the assessment is high if the decision-maker and user of the risk
assessment are pleased with the results and the analysis conducted? Is the quality of
the assessment mainly determined by its ability to satisfy the decision-makers' expectations? Clearly, this is not sufficient. Think about a situation where the risk assessment shows that the risk is very low compared to other similar types of activities which are broadly accepted, on the basis of calculations of expected losses. The decision-makers, who are not experts on risk science, may find the assessment and results trustworthy and solid. Yet the quality of the risk assessment can be poor, as seen from a risk science perspective. The decision-maker could be seriously misled by a risk characterization based on expected values only, as thoroughly discussed in Section 3.3.1. The quality always has to be seen in relation to what the current risk science knowledge is. The risk assessment analysts may be confident that they are in fact applying suitable risk assessment and management concepts, approaches, principles and methods, but this does not mean that this is actually the case, as the point of reference is the current risk science knowledge.
Take another example. A risk assessment is conducted with the aim of identifying the important risk contributors related to an activity. The analysts perform the study, and it turns out that they overlooked a type of event A. This event has been shown to represent a serious risk in other comparable activities, but the analysis team lacked knowledge about this event. We would quickly conclude that the risk assessment was of poor quality. However, it is commonly expressed that a risk assessment should adequately understand and characterize risk based on the available knowledge. But what does 'available' refer to? Maybe in this example, it would be rather easy to gain the necessary knowledge about event A, but in other cases, it could be very resource demanding to obtain all relevant knowledge concerning a phenomenon or process studied. So where do we draw the line? And we can problematize this even further. Can the
quality of a risk assessment be good if the knowledge available is very weak? Should
the knowledge not be strengthened when it is poor? Yes: often this is done. Assessments
cover activities to enhance knowledge, for example, through research projects, but in
general, there will be limitations on what type of knowledge generation can be made
within a specific risk assessment. It follows that the discussion of what constitutes a
high-quality risk assessment needs to be seen closely in relation to what the knowledge
supporting the assessment is.
Models play an important role in risk assessment, and their 'goodness' is critical. The Apollo PRA illustrates this clearly. Using comprehensive PRA models of the system in the design phase, a frequentist probability p* for mission success (failure) was estimated. Comparing with the underlying 'true' unknown success (failure) rate p, we can speak about a model error p* – p. We can refer to the uncertainty about the true value of this error as model uncertainty; see also Problem 3.4 and Section 10.3.1. Model uncertainty is an important topic in the discussion of risk assessment quality. In the Apollo case, it was clearly very large. A model is by definition a simplified representation of the system; hence, there will always be a model error. The challenge is to ensure that it is sufficiently small for the model to nevertheless provide some new knowledge and be useful. The best guarantee for that is strong scientific understanding of the phenomena considered. However, risk assessments also have a role to play in supporting decision-making when such understanding is not present and the uncertainties are large. Then it is particularly important that the limitations of the models be properly reflected. The use of knowledge strength judgments is a key instrument for doing this.
Some risk science guidance is provided by this book, but to be a highly skilled risk
assessor, studies of fundamental risk science work are also needed. Examples of such
work are included in the list of references. Practical issues for ensuring that risk assessments are appropriately planned, conducted and used are discussed in Chapters 10 and 11.
We refer to Problem 4.15 for a discussion of the terms 'validity' and 'reliability', which are closely linked to this discussion about the quality of risk assessment.

4.4 PROBLEMS

Problem 4.1
Accident data for an industry are reported. Would you see this report as a risk
assessment?

Problem 4.2
The culture in a company is that risk assessments are to be conducted to satisfy regula-
tory requirements. Do you find this culture prudent?

Problem 4.3
Sketch a possible coarse or preliminary risk assessment for a road tunnel and related
incidents (such as car accidents).

Problem 4.4
Figure 4.7 shows a tank that operates as a buffer storage for transportation of fluid
from the source to the consumer. The fluid consumption level varies. An automatic sys-
tem to control over-filling of the buffer storage works as follows: As soon as the fluid
reaches a level ‘normal high’, then the level switch high (LSH) is activated and sends a
closure signal to valve V1. The result is that the fluid to the storage then stops. If this
barrier does not work and the fluid level continues to increase to an ‘abnormally high
level’, the level switch high high (LSHH) is activated and sends a closure signal to valve
V2. The result is that fluid to the buffer storage then stops. Simultaneously, the LSHH
sends a signal to valve V3 to open so that the fluid is drained. The draining pipe capac-
ity is higher than the capacity of the supply pipe.
Illustrate the use of a failure mode and effect analysis (FMEA) by analyzing the
components V1 and LSHH, addressing function, failure mode, effect on other units,
effect on the system, failure probability, strength of knowledge and failure effect
ranking.

Problem 4.5
Discuss the strengths and weaknesses of an FMEA.

Problem 4.6
The fault tree analysis (FTA) technique was developed by Bell Telephone Laboratories
in 1962, when it performed a safety evaluation of the Minuteman Launch Control System. The Boeing company further developed the method, and since the 1970s, it has become very widespread; it is currently one of the most-used risk assessment methods for studying the reliability and safety of technical systems.

FIGURE 4.7 Storage tank example
A fault tree is a logical diagram which shows the relation between system failure
(also referred to as the top event) and failures of the components of the system, for
example, as a result of technical failures or human error, using logical symbols.

AND gate: The output event (above) occurs if all input events (below) occur.

OR gate: The output event (above) occurs if at least one of the input events (below) occurs.

Construct a fault tree based on Figure 4.5.

For the tank example presented in Problem 4.4 and Figure 4.7, construct a fault tree for the top event 'overfilling of tank', based on the components V1, V2, V3, LSH and LSHH.

Problem 4.7
In a fault tree (and reliability block diagram), focus is on the minimal cut sets. A cut set is defined as a set of basic events which ensures the occurrence of the top event. A cut set is minimal if it cannot be reduced and still be a cut set. For the fault tree of the tank example in Problem 4.6, find the minimal cut sets. Why do you think we identify these sets?

Problem 4.8
Suppose the failure probabilities for the components V1, V2, V3, LSH and LSHH
are 2%, 2%, 2%, 1% and 1%, respectively. What is then the probability that
the top event will occur? Do you need to make an assumption to perform this
calculation?
Compute an approximate probability of overfilling in one year, given that the fluid level increases 25 times during the year. Which of the components do you find to be most important? Why?

Problem 4.9
Draw an event tree starting from an initiating event A, with branches B1 and B2, that
results in Y number of fatalities.
A: leakage, B1: ignition, B2: explosion, X: number of leakages.
If A, B1 and B2 occur, Y is either 2, 1 or 0, with probabilities 0.5, 0.3 and 0.2,
respectively.
If A, B1 and Not B2 occur, Y is either 1 or 0, with probabilities 0.5 and 0.5, respectively.
If A!and Not B1 occur, Y is equal to 0.
Assume P(B1|A) = 0.001, P(B2|A,B1) = 0.10.
Compute P(Y ≥ 1|A) and explain what this probability expresses in this case.
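The mechanics of such an event tree calculation can be sketched in code: multiply the branch probabilities along each path and sum each path's contribution to the probability of at least one fatality, given a leakage. The numbers are those given in the problem; verify the result by hand:

```python
# Event tree sketch for Problem 4.9 (probabilities as given in the problem).
p_b1 = 0.001  # P(B1 | A): ignition given leakage
p_b2 = 0.10   # P(B2 | A, B1): explosion given ignition

# Each entry: (path probability given A, P(Y >= 1) on that path).
paths = [
    (p_b1 * p_b2,       0.5 + 0.3),  # A, B1, B2: Y = 2, 1, 0 w.p. 0.5, 0.3, 0.2
    (p_b1 * (1 - p_b2), 0.5),        # A, B1, not B2: Y = 1, 0 w.p. 0.5, 0.5
    (1 - p_b1,          0.0),        # A, not B1: Y = 0
]
p_y_ge_1 = sum(p * c for p, c in paths)
print(round(p_y_ge_1, 6))  # 0.00053
```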

Problem 4.10
Event trees for the tank example can be drawn based on the initiating event ‘fluid
increases’. Specify possible branches for such trees.

Problem 4.11
With reference to the discussion in Section 4.1.1, how would you explain causality
in relation to fault tree models (refer, for example, to the tank example discussed in
Problem 4.6)? Discuss to what extent the statistical analysis related to smoking proves
that smoking causes lung cancer. Also discuss the concept of ‘root cause’ often referred
to in the literature, expressing some type of basic cause that is the root or origin of the
problem.

Problem 4.12 Use of Bayes’ formula


Suppose a patient is tested when there is indication that the patient has a specific
disease D. From general health statistics, we know that 2% of the relevant population
is seriously ill from the disease, 10% is moderately ill and 88% is not at all ill from
this disease. Suppose that the test gives a positive response in 90% of the cases if it is
applied to a patient that is seriously ill. If the patient is moderately ill, the test will give
positive response in 60% of the cases. In the case that the patient is not ill, the test will
give a false response in 10% of the cases.
Now suppose the test gives a positive response. What is the probability that the
patient is seriously ill?
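Bayes' formula combines the prior state probabilities with the test characteristics: P(state | positive) = P(positive | state) P(state) / P(positive), where the denominator follows from the law of total probability. A sketch of the calculation, using the numbers given in the problem:

```python
# Bayes' formula sketch for Problem 4.12.
prior = {"serious": 0.02, "moderate": 0.10, "not ill": 0.88}
p_pos_given = {"serious": 0.90, "moderate": 0.60, "not ill": 0.10}

# Law of total probability: P(positive response).
p_pos = sum(prior[s] * p_pos_given[s] for s in prior)

# Bayes' formula: P(seriously ill | positive response).
p_serious = prior["serious"] * p_pos_given["serious"] / p_pos
print(round(p_pos, 3), round(p_serious, 3))  # 0.166 0.108
```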

Problem 4.13
Section 4.1.1 referred to the common misconception that a mission is thought of as
a chain of links, where success is believed to be ensured by giving priority to the
weakest links, and improving the others is considered wasted effort. Explain why this
is a misconception.

Problem 4.14
It is common to distinguish between a forward and a backward approach to risk
analysis. The former starts with the events A’ and analyzes the consequences C’
following A’. The backward approach, on the other hand, starts with some specified
values or categories of C’ – for example, only consequences with at least x number of
fatalities – and studies how these specific values or categories can happen. Discuss the
pros and cons of the two approaches.

Problem 4.15
When discussing the quality of risk assessments, the two terms reliability and validity
are often used. The concept of reliability is concerned with the consistency of the
‘measuring instrument’ (analysts, methods, procedures), whereas validity is concerned with
the success at ‘measuring’ what one sets out to ‘measure’ in the assessment. Discuss the
meaning and applicability of these concepts in the context of risk assessment.

Problem 4.16
Refer to the local news in your area or university and make a list of five things that have
‘gone wrong’. Refer to these as risk events that have actually occurred. Could each of
these events have been avoided? Explain.

Problem 4.17
Suppose your university is debating a controversial decision: whether to close the
university in response to a threat (pandemic, threat of violence, etc.). The university
administration is asking for your help in organizing its risk assessment. To help, you
have been asked to do the following: 1) Create a spreadsheet template for inputting
information for a coarse risk assessment and 2) create a tutorial (document, slides or
video) for the administration to use for inputting information and also making
decisions using the template. Create this template and tutorial and share with your class.
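As a starting point for such a template, the columns could mirror the risk characterization (A’, C’, Q, K) used in this chapter. The sketch below writes one possible template as a CSV file that can be opened in a spreadsheet program; the column names and the example row are illustrative choices, not a prescribed format:

```python
import csv
import io

# Columns mirroring the (A', C', Q, K) risk characterization,
# plus causes and possible measures.
columns = [
    "Event (A')",
    "Causes",
    "Consequences (C')",
    "Probability judgment (Q)",
    "Strength of knowledge (K)",
    "Possible measures",
]

# Hypothetical example row for the pandemic scenario.
example_row = [
    "Pandemic outbreak on campus",
    "Community transmission",
    "Illness among students and staff; disruption of teaching",
    "High",
    "Medium (limited local data)",
    "Close campus; move teaching online",
]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(columns)
writer.writerow(example_row)
template_csv = buffer.getvalue()
print(template_csv)
```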

KEY CHAPTER TAKEAWAYS

• A risk assessment is about improving the understanding of the risk studied in order
to support relevant decision-making on how to handle the risk.
• The assessments can help us identify what might go wrong (or what can give
positive outcomes), why and how; what the consequences are and how bad (good)
they are.
• A risk assessment covers risk analysis plus risk evaluation.
• The risk analysis comprises the following main stages: identification of events (risk
sources/threats/hazards/opportunities A’), cause analysis to investigate how these
events can occur, consequence analysis to study the effects (C’) of these events,

characterization of the risk (A’,C’,Q,K) and study of alternatives and measures that
can be used to modify risk.
• The risk evaluation part makes judgments about the significance of the risk and
ranks alternatives and measures based on comparisons with other risks and
established criteria.
• Risk assessment informs decision-makers. There is a leap between risk assessment
and decision-making, referred to as the management review and judgment.
• The management review and judgment takes into account the limitations of the
risk assessments as well as concerns other than risk.
• A risk assessment is high quality if it is conducted in line with risk science concepts,
principles, approaches, methods and models. It is not enough that relevant
stakeholders be satisfied.
• Model error relates to the difference between model output and the underlying true
value of the quantity considered. Model uncertainty is uncertainty about this value.
