Intelligence and National Security, 2017
VOL. 32, NO. 4, 479–493
http://dx.doi.org/10.1080/02684527.2016.1270622

ARTICLE

Intelligence by consent: on the inadequacy of Just War Theory as a framework for intelligence ethics

Adam Diderichsen and Kira Vrist Rønn

ABSTRACT
This article contributes to the current discussions concerning an adequate
framework for intelligence ethics. The first part critically scrutinises the
use of Just War Theory in intelligence ethics with specific focus on the just
cause criterion. We argue that using self-defence as justifying cause for all
intelligence activities is inadequate, in particular in relation to the collection
and use of intelligence for preventive purposes. In the second part of the
paper, we tentatively suggest an alternative moral framework for preventive
intelligence, understood as intelligence activities with no specific suspicion or
aggressor. We suggest that the moral permissibility of such activities requires
a civilised moral framework, in which openness, transparency and informed
consent constitute crucial elements.

Introduction
In an age of global terrorism and emerging technical capabilities for mass surveillance, the moral limits
to the use of intelligence methods for security purposes are clearly among the most pressing of ethical
questions.
The recent increase in the use of and importance placed on intelligence to counteract all sorts of
safety and security issues (as evident, for example, in the recent push for Intelligence-led Policing in many
Western Police services1) reflects a more general move towards pro-action, precaution, and disruption
on a range of issues related to public safety and national security. Thus, the scope of intelligence has
expanded from a primary concern with military issues to an increasing number of domains
such as general safety and security.2 This increase in intelligence activities renders difficult questions
concerning the accountability and legitimacy of intelligence activities even more pressing and relevant.
The changed scope furthermore results in a move away from military threats and known aggressors
serving as the primary target of intelligence activities. Nowadays intelligence activities are used as a tool
to counteract general societal risks and uncertainties.3 Intelligence activities are therefore at once disruptive,
pre-emptive, and preventive in nature, meaning that intelligence is used to inform decision-makers
with regard to both known and potential or unknown aggressors.
In this article, we will argue that the earlier, heavily threat- and pre-emption-oriented discussions
within intelligence ethics cannot account for preventive intelligence activities dealing with risks and
uncertainties. We thus argue for an alternative civilised moral framework for preventive intelligence.
In the scholarly literature, Just War Theory (JWT) is often singled out as an appropriate conceptual
framework for thinking about intelligence ethics. This is the case both when attempting to identify a
general framework for intelligence ethics4 and when addressing the morality of more specific intelli-
gence activities, such as espionage5 and surveillance.6

CONTACT Adam Diderichsen [email protected]; Kira Vrist Rønn [email protected]


© 2016 Informa UK Limited, trading as Taylor & Francis Group
In the first, critical part of this paper, we argue that JWT and the notion of a just cause based on
self-defence can only account for a subset of all intelligence activities. Though JWT is a useful framework
for guiding disruptive and pre-emptive intelligence activities, previous attempts to justify all types of
intelligence by means of the notion of self-defence are problematic and potentially misleading, prompt-
ing conceptual confusion and mischaracterisation of the role that intelligence and intelligence agencies
play and should play in the modern security landscape. In particular, this approach to intelligence ethics
fails to recognise that many intelligence activities aim to manage risks and identify emerging threats to
national security interests rather than act as responses to aggressors or imminent threats, which would
be the minimum requirement for the activity to count as permissible self-defence. We thus argue that
many intelligence activities are preventive in nature – meaning that they aim to identify and apprehend
a threat or an aggression before it manifests itself. Our use of the term intelligence therefore includes
not only activities aimed at guiding actions against a known or a potential aggressor or adversary, but
also activities aimed at identifying and developing options for action before a specific or suspected
aggressor is identified. This broad use of the term corresponds to the diverse and extensive set of
activities that modern intelligence agencies actually engage in. One way of summarising the argument
presented in this paper would therefore be to say that although JWT is useful as a normative guide in
relation to traditional activities targeting a specific and known aggressor, we need a broader and more
encompassing normative framework, based on political legitimacy and democratic values, in order to
ethically guide the many-sided, complex and often preventive activities that characterise present-day
intelligence.
In the second and more constructive part of the paper, we present such an alternative framework for
intelligence ethics, which can account for preventive intelligence actions. We argue that policing, i.e., the
provision by legal authorities of security as a public good, and the importance granted to legitimacy and
consent in policing, offer a better analogy for thinking about preventive intelligence and the ethical
questions it raises. Intelligence and traditional policing are obviously different activities, and we do not
intend to trivialise or obfuscate the distinction between them. However, we argue that intelligence ethics
can learn a great deal from policing and the long tradition of Western political thought engaged with
the question of building legitimate state authorities based on democratic values and the rule of law.
Much the same is true in relation to JWT. Clearly, there are many important differences between
war and intelligence activities, although the latter may often be a part of, or a prelude to, the former
(at least in the case of military intelligence). However, the use of JWT in intelligence ethics does not
so much rest on an analogy between war and intelligence as between the theory of just war and the
theory of just intelligence. It is motivated by a desire to base intelligence ethics, a relatively new field
of study, on the existing body of sophisticated and comprehensive philosophical literature provided
by JWT instead of reinventing the wheel.7 It is therefore also a misunderstanding to claim that the use
of JWT implicitly endorses war rhetoric in the domain of intelligence and thereby rubber-stamps an
offensive and aggressive line of security actions.
Neither the policing framework nor the JWT framework therefore entails a simple reduction of intelligence to
either policing or the use of military force. However, the choice of framework is important, because
it directs our attention to different concepts and considerations. We thus argue that the preventive
nature of many intelligence activities fundamentally changes the way we should think about the nor-
mative dimensions of intelligence work. We suggest that in order for intelligence to be morally just,
the legitimacy of intelligence organisations and citizens’ informed consent to their working methods
and goals constitute necessary conditions. These elements require democratic debate that enables citizens
to participate actively in accepting the choices and actions made by intelligence services.
Openness and democratic transparency are, however, challenging and perhaps problematic ideals
in the domain of intelligence, since intelligence agencies need, for good reasons, to keep many of their
activities secret. Yet towards the end of this paper, we suggest possible ways of achieving public acceptance
of and consent to preventive intelligence actions without compromising secrecy in a damaging way.
However, we would also like to challenge the way in which secrecy is sometimes used as a shield against
criticism and public insight into less sensitive parts of intelligence. We thus suggest ways in which public
acceptance of preventive intelligence actions is made possible without running the risk of compromis-
ing national security while at the same time challenging the current notion of ‘appropriate secrecy’.

Intelligence ethics and the Just War Theory


The use of Just War Theory as a framework for intelligence ethics is, in the words of Mark Phythian, ‘one
of the most thoughtful dimensions of Intelligence Studies in recent years.’8 JWT is perhaps best seen
as an ongoing tradition in Western political and philosophical thought for arguing about war using
concepts, arguments, and considerations, which have previously been found useful and valid while
adapting them to the particular situation and case at hand. Moreover, contemporary JWT has devel-
oped in various forms and directions, for example into a traditional – also called orthodox – path led by
Michael Walzer as well as a revisionist path led by Jeff McMahan. The main disagreements pertain to
the criterion concerning discrimination of targets and to liability of harm in warfare.9
This long and rich tradition cannot be reduced to a simple checklist to be crossed off before engag-
ing in warfare, and at least some examples of the use of the theory within intelligence ethics may be
criticised for their presentation of JWT as if no controversy and disagreement existed. Nonetheless,
JWT does propose a set of criteria for deciding whether engaging in warfare is morally permissible
(jus ad bellum requiring e.g., just cause, proportionality, and last resort) and for acting morally in a war
(jus in bello particularly emphasising discrimination between combatants and non-combatants).10 In
some versions, Just Intelligence Theory (JIT) reproduces this division as a distinction between jus ad
intelligentiam and jus in intelligentia.11 Other intelligence scholars merge the two types of criteria into
one, which can be applied in order to morally assess both types of actions.12
It would also be a misunderstanding to discuss one criterion for just war in isolation from the rest.
JWT should be seen as a comprehensive theory where the interpretation of one individual criterion
in a specific context depends in part on the interpretation of the other criteria. In this paper, we shall
nonetheless pay special attention to the just cause criterion and the notion of self-defence. We will
therefore not take all the various criteria into account mainly due to the overall scope of this paper and
because the criterion for a just cause seems to be overarching in the sense that if there is no just cause,
there is no need to consider the rest of the jus ad intelligentiam criteria.
Moreover, the notion of self-defence, i.e., that it is permissible to use force (and thereby inflict harm)
in order to protect oneself against a wrongful attack, plays a central role in all versions of JWT, although
it may be supplemented by other just causes, such as coming to the aid of an ally or preventing human
rights abuses. Self-defence does indeed seem to be the clearest example of a just cause for using force
and therefore also a likely candidate for the type of cause that may be used to justify the collection
of intelligence. Perhaps this is also the reason why self-defence is central to International Law, giving
additional weight to the use of that particular reason within intelligence ethics.13 Hence, self-defence,
or more generally the avoidance of harm to national interests, plays a key role in the use of JWT within
intelligence ethics. Ross Bellaby, for example, states that ‘[a]s such, the just cause for the use of intelli-
gence collection is one of self-defence against threats.’14
There are good historical reasons why the analogy between JWT and JIT seems so tempting. Many
intelligence agencies such as the CIA were founded in the period following World War II and played
an important role during the Cold War, making it natural to see them as part of the military power
apparatus. It may also be important that the so-called War on Terror, declared by the USA in the wake
of September 11, was conceptualised as exactly that, i.e., a war, again serving to underline the analogy
between intelligence and the use of military force.
However, JWT presupposes a clear distinction between knowledge and the use of coercive means.
First, you need to know quite a lot about your (potential) adversary in order to decide whether the
various jus ad bellum criteria are fulfilled, and then you may subsequently proceed to warfare if the
conditions are met. In contrast, many of the central questions in intelligence ethics are raised due to a
growing awareness that the very act of knowing (collecting, interpreting, and analysing information)
may in itself inflict harm. This is indeed why it may seem tempting to use JWT as a starting point for
intelligence ethics. But by undermining the clear distinction between knowledge and the infliction of
harm implicit in JWT, such a use of the theory is bound to cause serious conceptual problems.
First, it creates a vicious cycle. When applying JWT to the domain of intelligence, one would need
to have knowledge about specific threats or aggressors in order to justify actions upon these threats.
Since intelligence actions often aim to obtain knowledge about whether someone poses a ‘sufficient
level of threat’15 to national security and/or public safety to justify the use of coercion, this epistemic
requirement may be difficult to meet when attempting to justify the use of intelligence methods in the
first place. In other words, intelligence actions aim to identify threats by obtaining knowledge about
one’s potential opponents, and the existence of these threats cannot be used to justify the intelligence
actions in advance without circularity. The Just War framework thus entails an epistemic requirement
that is often fulfilled via intelligence actions, and the same moral framework thus seems inadequate for
both types of actions. Ross Bellaby has addressed this challenge in his book The Ethics of Intelligence,
where he argues for a so-called ‘ladder of escalation’ providing a ranking of various types of intelligence
collection according to the harm caused by these types of activities. According to Bellaby, the lowest
ranking, the so-called ‘initial level’ encompassing passive collecting activities (e.g., unfocused CCTV
scans), needs no moral justifications and thus no specific degree of belief in the existence of a potential
aggressor.16 Bellaby’s ladder of escalation is a good rule of thumb when deciding whether the harm
caused by various intelligence collection activities is proportional to the threat. However, we simply
disagree that the initial level of intelligence collection does not require a justification. Even initial steps
in the intelligence process would in our view need some kind of justification, since even passive forms
of intelligence collection could potentially inflict harm such as the so-called chilling effect. However,
JWT cannot provide the necessary justification for passive monitoring and surveillance. Instead, we
suggest, such activities need to be justified in terms of legitimacy and public consent.
Second, the notion of harm, as it applies to intelligence, is notoriously slippery.17 Most scholars
endorsing JWT as an adequate starting point for intelligence ethics do so because both entail unavoid-
able harms when acting upon wrongful attacks from a specific aggressor. In other words, warfare and
intelligence gathering both cause harm to others, which in itself seems to be morally problematic unless
it can be justified by more compelling reasons, such as the necessity of self-defence. This means that the
analysis of the harms (allegedly) caused by surveillance and other intelligence methods is fundamental
to the use of JWT in intelligence ethics. Moreover, the harm must not be trivial in order for the analogy to
work and should perhaps even be damaging to the vital interest of the targeted people.18 As witnessed
by the scholarly debate over these issues, we may not, however, have a conceptually clear image of the
harm caused, although infringements of citizens’ dignity, privacy, and ‘human flourishing’ do seem to
be central concerns.19 Indisputably, intelligence actions have the potential to harm the people being
targeted by such acts (especially when applying intrusive intelligence methods such as wiretapping),
and the analogy therefore seems plausible in part because it may guide us towards minimising the
harms caused. However, the somewhat unclear nature of the harm caused by intelligence is a problem
for JIT since the kind of harm that results from targeted and specific surveillance methods seems to
differ from that caused by broader intelligence-scanning activities. After all, there seems intuitively
to be a difference between someone knowing that you called your significant other yesterday and
someone actively eavesdropping on the conversation. One could, for example, imagine that the harms
associated with mass surveillance might be more connected to the public fear of being watched rather
than actually being physically or mentally harmed by the specific intelligence activity. Moreover, most
discussions focus on the harms caused by surveillance and other methods of gathering intelli-
gence. Increasingly, however, as intelligence agencies turn their attention from foreign enemies to real
or imagined enemies within, they also wield considerable ‘symbolic’, yet very real power over the lives
of citizens since the information they possess and the way that they interpret it may have life-changing
consequences for the individuals in question. We grant agencies the right to interpret social reality on
behalf of the state and society at large. But their interpretations and the kind of concepts and language
they use may cause significant harm (justified or otherwise) to the people or organisations interpreted
as threats and enemies. We therefore require a concept of harm that is broad enough to accommodate
not only surveillance but also symbolic power. This, however, renders the notion of harm even more
slippery and difficult to apply in intelligence ethics. Yet if we lack a conceptual grasp upon the type
of harm caused by intelligence methods such as mass-surveillance, we seem even less secure when
attempting to apply JWT to intelligence.

Levels of intelligence activities


Setting these two general challenges aside for the moment, we would furthermore argue that we
need a more nuanced view of the activities performed by contemporary intelligence agencies. In the
scholarly literature, intelligence activities are often reduced to a one-size-fits-all concept, with little
attention being paid to their substantial differences in nature and scope.20 We would, however, like
to propose the following distinction between three types of intelligence, based on an ordering by
increasing degree of suspicion and knowledge about specific aggressors:

(1)  a broad scan in order to identify possible aggressors or threats,
(2)  targeted actions against individuals and organisations suspected to be aggressors,
(3)  targeted actions against individuals and organisations known to be aggressors.21

Examples of the latter type of activity are military intelligence aimed at establishing the location and
capabilities of enemy forces as part of an armed conflict or the use of intelligence methods in policing
to build a case against known criminals. JWT seems to be a suitable conceptual framework for JIT in
these cases since intelligence activities take place as part of an ongoing war or conflict and therefore
seem to be justified to the same extent that the use of military or police power in the encompassing
war or conflict is justified. For the same reasons, this is, however, also a relatively uninteresting result.
An example of the second type of intelligence activity is military intelligence targeting a foreign
country in order to determine whether it plans to invade a third country (intention) or whether it pos-
sesses nuclear or chemical weapons (capabilities). In policing, an example is the use of wiretapping in
order to determine whether a suspect really is engaged in criminal activities. We argue that JWT is also
a suitable framework for JIT in these cases, although it does presuppose an extension of JWT based on
the notion of pre-emption. This is a more interesting result than the first case since we are considering
situations in which an armed conflict (or the policing equivalent) has not yet begun and may indeed
not be ethically justified.
The most challenging type may, however, be Type 1, which includes, for example, the use of mass
surveillance for anti-terrorist purposes. We argue that JWT cannot account for this type since there is no
specific aggressor present, which would be a requirement for applying JWT. Before we do so, though,
we must give a more precise account of the preventive role that intelligence often plays.

Preventive intelligence
One reason why JWT seems such a compelling framework for thinking about intelligence ethics may
be that intelligence activities in the post-WW2 era and in the Cold War period were often, perhaps
primarily, of the second or third of the three aforementioned types since the enemy was generally
known.22 Everyone knew who the enemy was, and intelligence activities could be justified either as
part of the numerous conflicts and proxy wars during the Cold War or as pre-emptive measures for
establishing the enemy’s intentions and capabilities. This is, however, no longer the world in which we
live. We today face complex security challenges in which we no longer know who the enemy is and
must therefore establish his/her identity – and whether there is an enemy at all – before we can start
inquiring into his/her intents and capabilities. In the words of Ulrich Beck, the current risk era moves
away ‘from a world of enemies to a world of dangers and risks.’23
In the contemporary ‘world of dangers and risks’, intelligence services very often cast a wide net
and collect information just in case it is needed, as illustrated by bulk surveillance aimed at identifying
communication patterns. These and related intelligence practices are what we attempt to cover with
the notion of preventive intelligence (or Type 1 intelligence). Sometimes such activities aim to find the
notorious needle in the haystack, i.e., a threat that we may know or surmise is there but that we have
been unable to identify or find. The needle in the haystack metaphor does not however capture the full
range of present-day intelligence activities. Often such activities do not simply aim to find a pre-existing
threat (the needle), but rather to prevent possible and emerging threats from ever materialising. The
aim is, so to speak, not so much to find an existing needle as to ensure that there are no sharp objects
in the haystack to begin with, for instance by preventing a fragile object, say a glass, from shattering.
Modern intelligence practices thus follow the Beckian logic of trying to prevent low-probability,
high-impact events, often characterised by the precautionary logic of ‘better safe than sorry’. In this sense,
our notion of preventive intelligence reflects the current merging of national intelligence activities
and risk management and might also be phrased in terms of risk-oriented or precautionary intelli-
gence.24 The central element of this type of intelligence work is that there is no specific aggressor and
no imminent threat. On the contrary, it is future oriented in the sense of aiming to identify potential
aggressors and threats.

The self-defence account: pre-emption and prevention


In the following, we argue that JWT is inadequate as a normative framework for modern preventive
intelligence. More specifically, we argue against the notion that intelligence can in general be concep-
tualised as a kind of self-defence, since intelligence actions only rarely have the degree of emergency
and urgency entailed in the concept of self-defence. According to many scholars working within the
field of permissible self-defence, the anticipated threat would necessarily need to be at least immi-
nent in order to prompt permissible actions of self-defence.25 As a result, a morally permissible act of
self-defence is defined as either a response to a specific aggression that has already occurred (often phrased as a
wrongful attack) or a pre-emptive response against an imminent threat. In other words, if there is neither
a wrongful attack nor an imminent threat, actions cannot be deemed pre-emptive and will not count
as actions of permissible self-defence.
Defining an imminent threat is difficult. Furthermore, the concepts of imminence and pre-emption
have both attracted a range of negative and problematic connotations over the last decades since the
response of the Bush administration to the September 11 attacks, including in particular the American
invasion of Iraq based on so-called pre-emptive grounds in terms of Iraq’s alleged possession of WMD.26
Subsequently, many scholars have described the past decades as the age of prevention, in which the
proliferation of precautionary and anticipatory logics has been the predominant means of obtaining
and securing public safety and national security.27 The ways in which these concepts have been used
in everyday language thus allow for very broad interpretations of permissible self-defence since
almost any situation could potentially be interpreted as entailing an imminent threat and therefore as
legitimising pre-emptive self-defence. Such a broad notion of self-defence is clearly problematic since
a theory of permissible self-defence should be able to rule out some actions as morally impermissible
actions of self-defence. In order for self-defence to function as a plausible delimiter for morally permis-
sible actions and thus as a just cause for intelligence, a proper notion of pre-emption and imminence
is needed. JWT provides precisely such discussions and clarifications of these concepts.
Although it would be fair to claim that the post-September 11 security threats led to a blurring of
the distinction between pre-emptive and preventive actions,28 it seems safe to say that the core difference
between the two notions can be rendered in terms of the existence or non-existence of an imminent
threat:29 Pre-emption is thus a response to a specific aggression, whereas prevention encompasses actions
without prior aggression but aimed at addressing (preventing) risk and possible dangers. Though this
distinction might bring some clarity to the situation, the challenge remains to identify what imminence
means and when situations can be characterised as an imminent threat. This is no easy task, since you
could perhaps claim that in an age of terror, imminent threats exist everywhere and thus that any
imaginable or feared threat constitutes an imminent threat.
There is, however, a clear difference between threats that are merely possible (although feared)
and those that are imminent, even though there is bound to be a wide grey-zone. Our claim is that a
great proportion of intelligence actions belong to the former rather than the latter category, which raises
the question as to whether the Just War framework is adequate as a theory for intelligence ethics. A
necessary condition for imminence could in general terms be phrased as a situation entailing at least
‘some degree of preparation of a specific aggression’ and ‘a manifestation of an intent to harm or injure’.30
Again, this clarification evidently raises new questions, such as: What counts as preparation? What sorts
of activity, evidence, etc. are needed in order to represent a sufficient degree of preparation and thus
to entail an imminent threat?31
No matter how one specifies the concept of imminence, we argue that intelligence cannot be exhaus-
tively understood in terms of pre-emptive actions. This is because intelligence often aims to identify
the existence of possible threats without knowing the specific identity of the would-be aggressors, e.g.,
mass surveillance of electronic telecommunication. However, to the extent that intelligence actions
aim to identify emerging threats (risks), they cannot simultaneously be understood as responses to immi-
nent threats. Therefore, the notion of pre-emptive action cannot provide just cause for intelligence
actions in general, which in turn implies that many intelligence activities cannot be morally justified
as self-defence.
This may, of course, be either because the use of intelligence in such cases is unjustified and unethical,
or because JWT is inadequate as a general conceptual framework for preventive intelligence ethics. We
argue that the latter is true. Thus, in the remainder of this paper, we sketch an alternative normative
framework for conceptualising the preventive nature of contemporary intelligence practices. More
specifically, we argue that policing, rather than warfare, represents the correct conceptual framework
for thinking about intelligence ethics.

Intelligence by consent
Summarising our results so far, we may conclude that JWT applies to intelligence activities of Types 2
and 3, but justifying Type 1 (preventive intelligence activities) requires a notion of prevention, which is
much stronger than the notion of pre-emption provided by JWT. Type 1 therefore requires a different
conceptual framework. A great deal of the new development in contemporary intelligence practices is
furthermore preventive in the sense that it aims to identify security risks before they become imminent
threats. Also note that preventive intelligence measures (Type 1) may often be more or less permanent in
nature as opposed to time-limited interventions against a specific target. We therefore require an ethics
of prevention in order to formulate and perhaps answer the difficult question that this development
raises: When is the use of state power for preventive purposes ethically justified?
More specifically, we argue that this new ethical framework should focus on the construction of
legitimate security agencies and other state authorities. We therefore take our cue from another strik-
ingly popular use of state power for preventive purposes, namely policing. Prevention has always been
important in modern policing, especially in the form of the notion that the presence of uniformed
police officers in itself prevents crime from being committed and conflicts from escalating but also in
the sense that preventive measures are central to all modern forms and ideologies of policing, from
community policing to Intelligence-led Policing.
The term security should thus be understood in the broadest possible sense when it comes to polic-
ing. The police force is much more than a crime-fighting agency and devotes most of its time and
energy to a range of other activities, such as traffic regulation, various control and safety measures,
and settling disputes and minor conflicts. Some of these activities are related to law enforcement and
crime fighting; many others are not. It is therefore best to see policing as social peacekeeping,32 with
crime fighting being only the sharp end, so to speak, of its peacekeeping mission.
In today’s scholarly literature, the term policing is often used to denote an activity that can be per-
formed by a number of different actors, including private security providers, as opposed to the police
in the sense of a specific public institution. There is an argument for this use of the term in intelligence
studies as well since a whole range of actors, including not only private security providers, but also
many other private companies, use intelligence methods. Here, however, we shall use the term in a
somewhat more restricted sense as denoting the activities of legal authorities aimed at providing secu-
rity as a public good.33 Such provision is the central task of the police (a specific institution), but other
governmental institutions and even private companies may also provide it, to the extent that these
operate as subcontractors under the general supervision of the police or other governmental agencies.
Our suggestion is thus that preventive intelligence in the present-day security landscape should
be seen as a type of social peacekeeping. In other words, preventive intelligence is part of a broader
policing mission aimed at providing public safety and social peace.
Drawing an analogy between intelligence and policing, instead of military force, implies that some
intelligence measures, namely those of Type 1, which JWT cannot account for, can be ethically assessed
within the new framework. The new analogy does not, however, mean that any and all intelligence
measures would be justified. The purpose of the new framework is thus precisely to guide us regarding
when, in which form, and how such activities can and cannot be morally permissible.
The policing analogy comes with its own ethical framework. In particular, the notions of legitimacy
and public consent are all-important in policing, both because the legitimacy of authorities is a central
element in democracy and the rule of law and because a lack of legitimacy in the eyes of the citizens
makes the job of the police much harder.34 The first step would therefore be to ask what it takes to build
legitimate state institutions in the field of intelligence: Instead of asking whether mass surveillance and
other modern intelligence techniques are ethically justified, we should ask what it would take to render
legitimate an institution with those kinds of power.

Legitimacy and consent


In the following, we explore the concept of public consent in more detail in order to investigate whether
this could be a plausible future path to develop in the search for an adequate framework for discussing
preventive intelligence ethics. Let us begin by taking a closer look at the concept of consent as it may
apply to policing and intelligence. The concept has at least three key elements. First, it has an active
component. It is not mere acceptance in the sense of passive acquiescence; it is something you actively
do. This should not be taken too literally, as evidenced by the fact that the notion of implicit or tacit
consent plays a central role in much political philosophy, going all the way back to Plato, who in his
Crito has the Athenian laws argue that Socrates should stay in prison since anyone who has decided
to live in Athens has also thereby implicitly accepted their authority (Crito 51E). The use of arguments
from tacit consent has, however, always been controversial, which may in part be because the notion
seems inconsistent given the active component in the concept of consent. At a minimum, we would
need to limit its use so we only ascribe tacit consent to someone on the condition that it would be
rational and consistent with the (known) preferences of that particular individual to consent to the thing
in question. This brings us to the second key element, namely that consent is a rational concept in the
sense that consenting people need good reasons to do so. Those reasons may be of many different
kinds – self-interest, moral or political reasons, and so forth. But whatever one’s reasons may be, if one
consents to something, one has reasons for doing so that one holds to be good and compelling. Third,
consent implies a degree of publicness. If one consents to something, one obviously needs to know
what one is consenting to, just as one needs to express one’s consent in a public way. Furthermore,
one needs to have enough information to judge whether consenting would be in one’s best interest.
Turning to policing, the notion of policing by consent has historically been tied to British policing and
the (ideological) attempts to differentiate it from the Continental tradition of policing. The Metropolitan
Police, sometimes hailed as the first modern police force, was founded on two premises. First, that the
police consist of ordinary citizens with no extraordinary powers exceeding those of any other citizen.
Second, that the police should be uniformed so that the public can know who they are dealing with,
in contrast to the Continental tradition of secret police, and so that the presence of uniformed police
officers may have a preventive effect and deter crime.
In order to transfer the notion of policing by consent to the domain of preventive intelligence, we
must also note two important differences between European and American policing. First, American
political culture takes its point of departure in the Lockean opposition of limited state power and
entrenched citizen rights, in contrast to the more étatist British and Continental European political
traditions. Second, the maintenance of social order is much more central to British and European polic-
ing than it is to American policing, which is more exclusively concerned with law enforcement.35 So
whereas the focus of American policing is on the enforcement of the law against rights-holding citizens,
that of European policing is on the creation of social order to the benefit of all. Since the US has been
so important for modern intelligence at both a technological and political level, this may also explain
why so much attention in both the public debate and the scholarly literature has been devoted to pro-
tection of citizens’ rights. In contrast, the issue of the possible role of intelligence in the maintenance
of social order seems to have been more or less overlooked. Our claim in this paper is, however, that
the European approach to policing may act as an important source of inspiration for the attempt to
‘civilise’ intelligence.
To summarise, we may say that if the specific understanding of policing by consent is to serve as an
alternative analogy to the military analogy provided by JWT, then intelligence by consent should include
the following elements: activeness, rationality, publicness. The new framework we propose therefore sees
citizens as ‘active participants’ when discussing the moral acceptability of intelligence actions instead
of viewing them as ‘passive targets of intelligence actions’.36 Furthermore, the agents or organisations
authorised to provide security as a public good should be ‘uniformed’ so that their activities are visible
to the public. We have boiled these elements down to the following two requirements in order for this
new analogy to work as a moral framework for preventive intelligence activities:

(1)  Increased openness of preventive intelligence activities
(2)  Uniformed preventive intelligence

Let us explore the possible practical implications of these two elements in more detail.

Increased openness of preventive intelligence activities


Since public consent constitutes a crucial element if preventive intelligence is to be morally permissible,
increased openness is needed for the public to know what it is consenting to (cf. the basic require-
ments of consent presented above).37 Public consent thus inevitably pushes the ‘limits of appropriate
secrecy’38 of intelligence actions in the direction of more openness, since it requires that all information
necessary for public discussion should be readily available. This is a controversial claim in a domain
predominantly characterised by secrecy and with a notorious fear of revealing sensitive information
about itself or about potential aggressors. Although we do not totally dismiss the need for secrecy,
we wish to discuss ways of combining the need for secrecy with the openness necessary to ensure
accountability, legitimacy, and informed consent.
Simon Chesterman discusses the appropriate level of secrecy in intelligence. He points to three
types of information ‘sufficiently sensitive’ to remain secret. First, he points to ‘sources and methods,’
which in his opinion would need to stay secret from the public. Second, he argues that ‘the identities
and activities of a service’s operational staff’ and, third, ‘information provided in confidence by foreign
governments or services’ would fall into the category of an appropriate level of secrecy.39 He also points
to the fact that intelligence services tend to keep more than these three types secret. Examples could
be ethical codes of conduct and other internal guidelines, which are often very difficult for outsiders
to access.40 If these three types of information set the standard for an appropriate level of secrecy, it
is, however, unclear whether our request for increased openness and transparency in order to ensure
active public debate, consent, and legitimacy of intelligence actions can ever be successful.
We would argue that the first category in particular requires reconsideration since it seems rea-
sonable to argue that general information about intelligence sources and methods should not be
kept confidential. Naturally, we do not argue that specific sources should be publicly named. Yet we
find it difficult to accept a general policy of confidentiality concerning types of sources, types of data,
and types of methods used in intelligence activities, especially when it comes to activities of a mainly
preventive nature. We do not need to know whether a particular intelligence service has a source in
a specific place, but an open discussion concerning the various elements in the intelligence toolbox
(i.e., various types of human intelligence; signal intelligence, including electronic mass surveillance;
and even open source intelligence) would be a place to start. Such openness would enable a public
discussion on the pros and cons of each type of method. The public reaction to Edward Snowden’s
revelations of the mass surveillance programme of the NSA might serve as a good example of what
can come out of an open debate about intelligence collection activities. Naturally, the views on the
justifiability of Snowden’s leaks differ a lot; yet the ensuing public scandal led to legislative changes in
terms of more restrictions on bulk collection of telecommunication metadata (The USA Freedom Act
from 2015). Whether the collection of metadata is ethically justifiable is open for discussion, but clear
legislation and political accountability are important for the democratic legitimacy of such activities.
Paradoxically, the legal and institutional changes in the wake of Snowden’s leak may thus in part give
to the activities in question the legitimacy that they lacked before the leak.
When it comes to the use of electronic surveillance, another way of enabling informed consent
would be to make public the criteria used for deeming information relevant to
collection via these methods. In other words, this would enable an open discussion about electronic profiling,
i.e., when and why individuals are considered potential aggressors, without revealing anything about
specific individuals or actual aggressors. Additionally, if ‘methods’ also refers to the criteria and methods used
in the phase of conceptualising and analysing the collected intelligence, then the argument for secrecy
seems difficult to uphold since such methods do not contain sensitive information about intelligence
operations. Obviously, we do not argue that the identity of operational staff should become publicly
known or that sensitive information from foreign countries should be made public. Yet there seems
to be considerable scope for more openness when it comes to the general methods and principles
of intelligence organisations, thereby enabling a more informed public debate on the activities of
intelligence services. This is admittedly a difficult balance to strike since you could rightfully argue that
even knowledge about types of methods and sources would reveal sensitive information for potential
misuse and exploitation. Yet such arguments should not be accepted without discussion, and our pro-
posal would at least initiate a meta-discussion on the use of secrecy as a shield against public debate.
Note as well that democratic accountability is only one of several reasons why openness is important
when building legitimate public authorities. Another equally important reason is that openness is a
condition for the rule of law, i.e., that the use of power and the exercise of authority are bound by public
and known rules, so that citizens can predict the likely consequences of various possible acts on their
part. The rule of law within the domain of intelligence would thus require considerable openness about
the working methods and other rules covering the activities of intelligence agencies, allowing citizens
to predict when and why they may be targeted by them.
Adding more detail to the concept of openness, we should also draw a distinction between a more
general form of democratic openness and a more specific openness about personal information. The
general variety of openness consists of publicly available information allowing citizens to rationally
assess and give or withhold their consent to the existence, objectives, working methods, and organi-
sational functioning of state agencies and other authorities. This kind of openness must, however, be
supplemented by a more specific openness towards individual citizens regarding the kinds of infor-
mation that intelligence agencies hold about these individuals. For instance, in Denmark, one must
obtain a security clearance when applying for an increasing number of jobs, including not only job
openings in governmental organisations, but also in parts of the private sector such as construction and
transportation. The security and intelligence services have been charged with giving these clearances
and conducting the necessary investigations in order to do so. This is done in secret, and neither the
criteria they apply nor the information they gather and hold are available to outsiders, including the
person being assessed. Furthermore, the decision they reach need not be justified in any way and
simply consists of granting or denying the desired security clearance without any detail as to why and
how that particular decision was made. Clearly, this means that the individual may be left in doubt as
to why clearance was denied (or granted) in his or her specific case. It also means that the decision
is not correctable, even if it is based on false or misinterpreted information. In our view, the symbolic
power that intelligence agencies wield in such cases, deciding who is ‘in’ and who is ‘out’, requires both
a general democratic openness about the criteria and methods used in the security assessment and
a more specific openness about the relevant personal information, which should be accessible to the
person in question, giving him or her a chance to correct false information. There are probably cases in
which this latter type of information would be impossible to give without compromising confidential
material and sources. It should, however, be possible to indicate at least the type and nature of the
relevant information, e.g., ‘Security clearance was denied because of information pointing to a possible
link to organised crime’.

Uniformed preventive intelligence: from chilling effect to cooling effect


Our second claim is that preventive intelligence is today a means of providing security as a public good,
which in turn implies that intelligence activities should to a greater degree be turned into a kind of
‘uniformed’ policing.
As mentioned above, the majority of police work constitutes social peacekeeping, i.e., conflict res-
olution, maintenance of social order, and various sorts of regulation. Sometimes the (potential) use of
coercive means plays a part in social peacekeeping, but very often social peacekeeping is based not
on coercive means as such but rather on the fact that the police act as the symbolic representative of
state power. This symbolic presence is closely related to the distinction between private and public: A
public space is partly defined by the fact that the state has a right to be present there even without
any particular reason for being so. The police do not require any specific reason for patrolling a public
street, and the fact that they do not need it is part of what we mean by a ‘public’ street. In contrast, the
police and other state authorities need good and legally valid reasons for intruding into private places,
and this is again part of what it means to say that a place is ‘private’.
This symbolic presence is furthermore essential to the preventive function of policing. The presence
of uniformed police thus both acts as a deterrent to crime and as an assurance to citizens that the state
is able and willing to help them if needed. In other words, the police often prevent conflicts and perform
their peacekeeping mission simply by being present in public space.
We would therefore like to challenge the assumption, explicitly or implicitly present in many discus-
sions of intelligence, that there is a conflict between openness and efficacy as a security provider. This
is sometimes the case – it is difficult to infiltrate a criminal organisation while wearing a police uniform,
which is of course why most police forces also use secret (non-uniformed) agents. But it is emphatically
not always true, especially not when working preventively. When trying to prevent trouble from arising
in the first place and when actively seeking public cooperation and engagement in the provision of
security, a uniform is an asset and an important tool. Finding methods to give intelligence agents a kind
of uniform is therefore not a way of tying their hands, but instead a way of giving them the necessary
symbolic power to operate preventively, in an analogous fashion to the social peacekeeping mission
of the police.
Taking the case of policing cyberspace via electronic intelligence collection, we must thus ask how
security agencies performing this task can be given a symbolic presence, allowing them to prevent
crime and uphold social peace in cyberspace. This would also connect to the first requirement con-
cerning increased public awareness of intelligence activities. Specifically, the uniformed presence could
be accomplished, for example, by informing internet users that they are potentially under surveillance if they
visit specific webpages or search for specific words. Such a presence – part real, part symbolic – of
state power in cyberspace would be a way for intelligence agencies to signal the potential use of
power without actually using it, thereby performing a peacekeeping role. Instead of a chilling effect
on democratic debate, this kind of presence could perhaps have the same positive cooling effect that
police officers sometimes have in the social world outside cyberspace – keeping tempers down and
preventing conflicts from escalating.
There is obviously a range of challenges related to this idea, and the feasibility of uniformed intelli-
gence depends on the type of activity performed by the intelligence actor. The distinctions between
private and public are less clear-cut in cyberspace than elsewhere. The surveillance of personal electronic
communication in particular would belong to the private sphere and cannot therefore be justified by
a general preventive purpose.
This is again a difficult balance to strike since secrecy may be needed in order to identify potential
aggressors (otherwise, they would just change behaviour and adapt to the procedures of the intelli-
gence services). We do not wish to prevent intelligence agencies from functioning effectively. Note,
however, that what counts as functioning effectively depends on the type of activity in which we take intelligence agencies
to be involved. Wearing a uniform may well hamper their ability to secretly identify threats. However,
if the point of intelligence work is (in part) to prevent threats from emerging rather than solely to
identify existing threats, then citizens’ ability to identify intelligence actors may play an important part
in securing legitimacy and preventing conflicts from escalating.
We thus wish to open up the discussion concerning the appropriate level of secrecy and provide
the opportunity for rethinking the role of preventive intelligence. In particular, we need to connect the
preventive role of intelligence work with the building of legitimate state authorities, as exemplified by
policing by consent. This alternative analogy to a framework for intelligence ethics obviously requires
further development. However, we hope to have laid the groundwork for thinking about preventive
intelligence ethics in an alternative, non-militarised fashion.

Conclusion
In this paper, we have critically assessed the attempt to apply Just War Theory to the field of intelligence
ethics. Our main argument is that JWT cannot account for a large number of intelligence activities
characterised by the absence of a specific aggressor or an imminent threat, and resembling general
risk management, predicting and preventing potential futures. By emphasising that this type of intel-
ligence constitutes not self-defence but instead preventive action, we wish to discuss and delimit the
unquestioned reference to self-defence, urgency, imminence, and national security interests. We also
wish to bring theory closer to the actual practices of present-day intelligence agencies in order to iden-
tify limits of permissibility and reasons for accountability. We suggest a new framework for preventive
intelligence phrased in terms of intelligence by consent, in which we draw upon elements from policing.
We specify that this framework entails two major changes to the way in which intelligence is viewed
and conducted: First, we call for more openness with regard to preventive intelligence activities, espe-
cially in terms of the types of intelligence methods and sources, in order to enable public debate about
the potential pros and cons of such activities. Second, we raise the question as to whether preventive
intelligence can be seen as social peacekeeping and, if so, we argue that this move would require
intelligence services that are in some sense uniformed, i.e., marked and recognisable by the public.

Notes
1.  Ratcliffe, Intelligence-led Policing; and den Boer, “Intelligence-led Policing in Europe”.
2.  Phythian, “Policing Uncertainty”, 196.
3.  The term intelligence can, in Sherman Kent’s words, denote an activity, an organisation, or a product. In this paper, we mainly use the term to denote an activity, typically performed by specific organisations producing specific products. Our working definition of intelligence is therefore close to the one proposed by Gill and Phythian, who define intelligence as ‘the mainly secret activities – targeting, collection, analysis, dissemination and action – intended to enhance security and/or maintain power relative to competitors by forewarning of threats and opportunities’ (Gill and Phythian, Intelligence in an Insecure World, 19). We thus refer mainly to the activities connected to intelligence collection, since these activities constitute the main focus in the existing literature on intelligence ethics. Moreover, intelligence collection is generally regarded as entailing the clearest infringements of, e.g., the privacy rights of the targets and thus poses ethical questions.
4.  Gendron, “Just War, Just Intelligence”; Quinlan, “Just Intelligence: Prolegomena to an Ethical Theory”; Omand, “Can
we have the Pleasure of the Grin without Seeing the Cat?”; Omand and Phythian, “Ethics and Intelligence: A Debate”;
and Bellaby, “What’s the Harm?”.
5.  Pfaff and Tiel, “The Ethics of Espionage”.
6.  Macnish, “Just Surveillance?”.
7.  Macnish, “Just Surveillance?”, 142, 143.
8.  Omand and Phythian, “Ethics and Intelligence: A debate”, 42.
9.  Frowe, The Ethics of War and Peace, 3; Walzer, Just and Unjust Wars; and McMahan, Killing in War.
10. Frowe, The Ethics of War and Peace, 101.
11. Quinlan, “Just Intelligence: Prolegomena to an Ethical Theory”, 3.
12. Bellaby, “What’s the Harm?”, 109.
13. Bellaby, “What’s the Harm?”, 109.
14. Bellaby, “What’s the Harm?”, 110, our emphasis.
15. Bellaby, “What’s the Harm?”, 109.
16. Bellaby, The Ethics of Intelligence, 171.
17. In many cases, there is no clear organisational distinction between intelligence agencies and security services, and
a specific organisation may perform tasks related to both of these functions. In such cases, intelligence agencies
may, of course, cause harms that are more tangible, as it were, than the more intangible type of harm caused by
being under surveillance. In this paper, we shall focus exclusively on the latter type of harm.
18. Ibid., 104.
19. Kleinig, “The Ethical Perils of Knowledge Acquisition”, 202.
20. For example, the differences between ‘conventional surveillance’, such as wiretapping, and other types of intelligence work, such as counter-intelligence activities, covert action, and bulk data collection, to mention just some of the various types of intelligence activities.
21. Phythian suggests a similar division of intelligence actions in a continuum dependent on the epistemic status of intelligence services, running from cases of ignorance through uncertainties and risks to threats (Phythian, “Policing Uncertainty”, 196).
Other scholars group intelligence activities by the level of harm posed by the intelligence activity (i.e., Bellaby, The
Ethics of Intelligence), which is not specifically addressed in our division. Additionally, our division does not include
considerations with regard to how one should act upon the different levels of intelligence that are collected.
Basically, we wish to allude to the epistemic status of the individuals initiating a specific intelligence activity before
they authorise this activity.
22. Agrell, “Intelligence Analysis after the Cold War”; and Treverton, “The Future of Intelligence”.
23. Phythian, “Policing Uncertainty”, 188.
24. Ericson and Haggerty, Policing the Risk Society.
25. Frowe, The Ethics of War and Peace, 75–7.
26. Aldrich, “Global Intelligence Co-operation versus Accountability”, 38; and Omand and Phythian, “Ethics and
Intelligence: A Debate”, 39.
27. Lomell, “Punishing the Uncommitted Crime”; and McCulloch and Wilson, Pre-crime. Pre-emption, Precaution and
the Future.
28. Rodin and Shue, Preemption; Frowe, The Ethics of War and Peace, 75.
29. Frowe, The Ethics of War and Peace, 75.
30. Frowe, The Ethics of War and Peace, 77. Here Frowe rephrases Walzer’s criteria for preemption.
31. Ibid., 78–79.
32. Kleinig, The Ethics of Policing.
33. Loader and Walker, Civilizing Security.
34. Tyler, Why People Obey the Law; and Crawford and Hucklesby, Legitimacy and Compliance in Criminal Justice.
35. Emsley, The English Police; Reiner, The Politics of the Police.
36. Chesterman, One Nation under Surveillance.
37. By ‘public consent’ we first and foremost mean consent from the state’s own citizens. Preventive intelligence dealing
with domestic matters of concern is the main object of our article. However, we acknowledge that consent might
be a more difficult requirement when dealing with international affairs.
38. Chesterman, One Nation under Surveillance, 69.
39. Ibid., 69.
40. cf. also Born and Wills, “Beyond the Oxymoron”, 36.
Acknowledgements
We are very grateful for valuable comments on previous versions of this article from the participants at the research seminar ‘The Ethics of Intelligence’, organized by the Interdisciplinary Ethics Research Group at the University of Warwick on 6 May 2016. We are also grateful for the helpful comments from the participants at the research seminar on the same topic, organized by the Centre for Advanced Security Theory at the University of Copenhagen on 25 May 2016. Finally, we thank the two reviewers for their most helpful advice.

Disclosure statement
No potential conflict of interest was reported by the authors.

Funding
The research project of Kira Vrist Rønn is funded by the Danish Council for Independent Research [grant number
4180-00030B].

Notes on contributors

Adam Diderichsen is an associate professor of policing at the University of Aalborg. His research interests are intelligence ethics, intelligence studies, police ethics, and intelligence-led policing.
Kira Vrist Rønn is a postdoctoral researcher at the Department of Political Science at the University of Copenhagen and assistant professor at the Metropolitan University College in Copenhagen. Her main research interests are intelligence ethics, intelligence and security studies, and policing.

References
Agrell, Wilhelm. “Intelligence Analysis after the Cold War – A New Paradigm or Old Anomalies?” In National Intelligence
Systems: Current Research and Future Prospects, edited by Wilhelm Agrell and Gregory F Treverton, 93–114. Cambridge:
Cambridge University Press, 2009.
Aldrich, Richard J. “Global Intelligence Co-operation versus Accountability: New Facets to an Old Problem.” Intelligence and
National Security 24, no. 1 (2009): 26–56. doi:10.1080/02684520902756812.
Bellaby, Ross W. The Ethics of Intelligence. A New Framework. New York: Routledge, 2014.
Bellaby, Ross W. “What’s the Harm? The Ethics of Intelligence Collection.” Intelligence and National Security 27, no. 1 (2012):
93–117. doi:10.1080/02684527.2012.621600.
Born, Hans, and Aidan Wills. “Beyond the Oxymoron: Exploring Ethics through the Intelligence Cycle.” In Ethics of Spying:
A Reader for the Professional, edited by Jan Goldman, Vol. 2, 34–56. Lanham, MD: Scarecrow Press, 2010.
Chesterman, Simon. One Nation under Surveillance. A New Social Contract to Defend Freedom without Sacrificing Liberty.
Oxford: Oxford University Press, 2011.
Crawford, Adam, and Anthea Hucklesby, eds. Legitimacy and Compliance in Criminal Justice. London: Routledge, 2013.
den Boer, Monica. “Intelligence-led Policing in Europe: Lingering Between Idea and Implementation.” In The Future of
Intelligence: Challenges in the 21st Century, edited by Isabelle Duyvesteyn, Ben de Jong, and Joop van Reijn, 113–132.
New York: Routledge, 2014.
Emsley, Clive. The English Police: A Political and Social History. Harlow: Pearson Education Limited, 1996.
Ericson, Richard V., and Kevin D. Haggerty. Policing the Risk Society. Oxford: Clarendon Press, 1997.
Frowe, Helen. The Ethics of War and Peace. 2nd ed. London: Routledge, 2011.
Gendron, Angela. “Just War, Just Intelligence: An Ethical Framework for Foreign Espionage.” International Journal of
Intelligence and CounterIntelligence 18, no. 3 (2005): 398–434. doi:10.1080/08850600590945399.
Gill, Peter. “Security Intelligence and Human Rights: Illuminating the ‘Heart of Darkness’?” Intelligence and National Security
24, no. 1 (2009): 78–102. doi:10.1080/02684520902756929.
Gill, Peter, and Mark Phythian. Intelligence in an Insecure World. Cambridge: Polity Press, 2012.
Kearon, Tony. “Surveillance Technologies and the Crises of Confidence in Regulatory Agencies.” Criminology and Criminal
Justice 13, no. 4 (2013): 415–430. doi:10.1177/1748895812454747.
Kleinig, John. The Ethics of Policing. Cambridge: Cambridge University Press, 1996.
Kleinig, John. “The Ethical Perils of Knowledge Acquisition.” Criminal Justice Ethics 28, no. 2 (2009): 201–222.
doi:10.1080/07311290903181218.
Lever, Annabelle. “Democracy, Privacy and Security.” In Privacy, Security and Accountability. Ethics, Law and Policy, edited by
Adam D. Moore, 105–124. London: Rowman & Littlefield, 2016.
Loader, Ian, and Neil Walker. Civilizing Security. Cambridge: Cambridge University Press, 2007.
Lomell, Heidi. “Punishing the Uncommitted Crime. Prevention, Pre-emption, Precaution and the Transformation of Criminal Law.” In Justice and Security in the 21st Century. Risks, Rights and the Rule of Law, edited by Barbara Hudson and Synnøve Ugelvik, Chap. 6, 83–100. Oxon: Routledge, 2012.
Macnish, Kevin. “Just Surveillance? Towards a Normative Theory of Surveillance.” Surveillance & Society 12, no. 1 (2014):
142–153.
Marx, Gary T. “Ethics for the New Surveillance.” The Information Society 14, no. 3 (1998): 171–185. doi:10.1080/019722498128809.
McCulloch, Jude, and Dean Wilson. Pre-crime. Pre-emption, Precaution and the Future. Oxon: Routledge, 2016.
McMahan, Jeff. Killing in War. Oxford: Oxford University Press, 2009.
Moore, Adam D. “Why Privacy and Accountability Trump Security.” In Privacy, Security and Accountability. Ethics, Law and
Policy, edited by Adam D. Moore, 171–182. London: Rowman & Littlefield, 2016.
Omand, Sir David. “Can we have the Pleasure of the Grin without Seeing the Cat? Must the Effectiveness of Secret
Agencies Inevitably Fade on Exposure to the Light?” Intelligence and National Security 23, no. 5 (2008): 593–607.
doi:10.1080/02684520802449476.
Omand, Sir David, and Mark Phythian. “Ethics and Intelligence: A Debate.” International Journal of Intelligence and
CounterIntelligence 26, no. 1 (2013): 38–63. doi:10.1080/08850607.2012.705186.
Pfaff, Tony, and Jeffrey Tiel. “The Ethics of Espionage.” Journal of Military Ethics 3, no. 1 (2004): 1–15. doi:10.1080/15027570310004447.
Phythian, Mark. “Policing Uncertainty: Intelligence, Security and Risk.” Intelligence and National Security 27, no. 2 (2012): 187–205. doi:10.1080/02684527.2012.661642.
Quinlan, Michael. “Just Intelligence: Prolegomena to an Ethical Theory.” Intelligence and National Security 22, no. 1 (2007):
1–13. doi:10.1080/02684520701200715.
Ratcliffe, Jerry. Intelligence-led Policing. Cullompton: Willan Publishing, 2008.
Reiner, Robert. The Politics of the Police. Oxford: Oxford University Press, 2010.
Shue, Henry, and David Rodin, eds. Preemption. Military Action and Moral Justification. Oxford: Oxford University Press, 2007.
Treverton, Gregory F. “The Future of Intelligence: Changing Threats, Evolving Methods.” In The Future of Intelligence: Challenges
in the 21st Century, edited by Isabelle Duyvesteyn, Ben de Jong, and Joop van Reijn, 27–38. New York: Routledge, 2014.
Tyler, Tom R. Why People Obey the Law. 2nd ed. Princeton, NJ: Princeton University Press, 2006.
Walzer, Michael. Just and Unjust Wars. A Moral Argument with Historical Illustrations. 4th ed. New York: Basic Books, 2006.
Wells, Helen. “The Techno-fix versus the Fair Cop: Procedural (In)Justice and Automated Speed Limit Enforcement.” The
British Journal of Criminology 48, no. 6 (2008): 798–817. doi:10.1093/bjc/azn058.
