
Artificial intelligence and predictive policing: risks and challenges

Recommendation paper
TABLE OF CONTENTS

PREFACE
INTRODUCTION
1. HOW DOES PREDICTIVE POLICING WORK?
2. THE RELEVANT LEGAL FRAMEWORK
3. PREDICTIVE POLICING IN THE EUROPEAN UNION
4. IMPORTANT CONSIDERATIONS TO KEEP IN MIND
RECOMMENDATIONS
GLOSSARY OF TERMS
ENDNOTES

PREFACE

Artificial Intelligence makes it possible to create autonomous systems that can execute highly complex tasks, such as processing enormous amounts of information, forecasting future events, and learning to adapt through experience. This opens up possibilities for predictive policing: AI applications can handle large amounts of complex data (crime data, video streams from security cameras, etc.) and predict when or where crimes will take place. But there are risks to it as well: such systems must respect the freedom and integrity of citizens and the protected nature of personal data, and must not reproduce or introduce illegal profiling or inequities. This paper explains the technology behind predictive policing computer programmes and provides an overview of the opportunities and risks of AI applications for the purpose of predictive policing.

Citation
EUCPN (2022). Artificial intelligence and predictive policing: risks and challenges. Brussels: EUCPN.

Legal notice
The contents of this publication do not necessarily reflect the official opinion of any EU Member State or any agency or institution of the European Union or European Communities.

Author
Majsa Storbeck, Intern, EUCPN Secretariat

Part of the project 'EUCPN Secretariat', June 2022, Brussels

With financial support from the European Union's Internal Security Fund - Police
INTRODUCTION

Artificial Intelligence (AI) is hot. For the first time in human history, it is possible to create autonomous systems that approach or exceed human cognitive capacity. AI systems can execute highly complex tasks, such as processing enormous amounts of information, forecasting future events, and learning to adapt through experience.1 This has created new possibilities in many domains, including health care, education, cybersecurity and environmental protection. Law enforcement agencies have shown an increased interest in AI. In all corners of the EU, police departments have put faith in AI tools in hopes of rendering law enforcement more effective and cost-efficient.2 In particular, 'Predictive Policing' is proclaimed as the future of policing, in response to reduced budgets and staffing.3 Using AI, the main purpose of predictive policing is to generate crime predictions and ultimately make a significant contribution to crime prevention.4 Yet, in spite of its potential in crime prevention, policymakers and human rights groups around the globe have expressed concern regarding the use of predictive policing, as inappropriate use leads to an erosion of fundamental human rights.5
1. HOW DOES PREDICTIVE POLICING WORK?
The use of statistics in law enforcement is nothing new. In the 1990s, emphasis was placed on intelligence-led policing. Now, new opportunities presented by Big Data are changing the nature of policing.6 Big Data refers to vast amounts of data that can be analysed to reveal unexpected connections and/or correlations.7 Yet, what Big Data knows is only one side of the coin. The other side entails the technology used to manipulate and organise that data, that is, algorithms. Algorithms are essentially mathematical processes which make educated guesses regarding the meaning of correlations in the data. Whereas some of these algorithms are relatively simple, others are built using machine-learning models.

Machine-learning (ML) algorithms differ from 'simple' algorithms in that they learn and adapt by experience. This occurs in different ways: in supervised learning, the ML algorithm uses training data that is correctly pre-labelled by developers. In unsupervised learning, the ML algorithm independently identifies patterns and correlations in 'raw' data.8 An easy example of an ML algorithm is a music streaming service. To decide whether to recommend a particular song to a listener, the ML algorithm associates the listener's preferences with those of other listeners who have a similar taste in music. Thus, the ML algorithm not only looks for patterns, it also learns from that data, making the algorithm progressively better over time.
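To make the distinction concrete, here is a minimal sketch of the two learning styles on a toy music-recommendation dataset, assuming the scikit-learn library is available. The songs, features and labels are invented purely for illustration.

```python
# Minimal sketch: supervised vs. unsupervised learning (illustrative data only).
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised learning: training data is pre-labelled by developers.
# Each row is a hypothetical song profile [tempo, energy]; labels say whether
# a given listener liked the song.
songs = [[120, 0.8], [125, 0.9], [60, 0.2], [70, 0.3]]
liked = [1, 1, 0, 0]

classifier = LogisticRegression().fit(songs, liked)
print(classifier.predict([[118, 0.85]]))  # predicts whether a new song will be liked

# Unsupervised learning: no labels; the algorithm finds structure on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(songs)
print(clusters)  # groups the songs into two clusters without being told what they mean
```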
Certain branches of machine learning, such as deep learning, are inspired by the human brain. Deep learning models, put simply, can make informed decisions without being given the rules (an algorithm) for performing that task. They power the most complex and capable AI systems, such as self-driving cars, drones, and other robotics. AI models used in predictive policing are most often rule-based machine learning models and rarely deep learning models.

In the context of law enforcement, this practically means that predictive policing can be divided into two consecutive steps: (1) data collection and (2) data modelling. First, with database storage ever increasing, enormous amounts of (un)structured data from different sources are collected.9 Typically, this includes historical crime data (time, place and type), sometimes supplemented with socio-economic data and opportunity variables (e.g. close access to a highway).10 Romania, for instance, uses data from probation and social services in addition to police data.11 Second, the data is analysed using ML algorithms. This process consists of a training and a prediction phase, in which the model first searches for patterns in the available historical data (i.e. linking indicators to the risk of a crime) and subsequently publishes these probabilities as a risk score.12 Three types of predictive policing can be distinguished, based on the type of predictions the underlying models are able to make: (1) area-based policing, i.e. predicting the time and place in which crimes are more likely to occur, (2) event-based policing, predicting the type of crime that is more likely to occur, and (3) person-based policing, predicting the individual who is most likely to conduct a criminal act.13
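As an illustration of this two-step pipeline and the resulting risk scores, the sketch below trains a generic off-the-shelf classifier on hypothetical historical incident data per grid cell. The features, numbers and model choice are assumptions made for the example; they do not describe any operational system mentioned in this paper.

```python
# Minimal sketch of area-based risk scoring: historical data in, risk scores out.
# All data and feature names are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Step 1: data collection. One row per grid cell and week:
# [burglaries in the last 4 weeks, distance to highway (km), weekend flag]
X_train = [
    [5, 0.4, 1],
    [0, 3.2, 0],
    [3, 0.8, 1],
    [1, 2.5, 0],
]
# Label: did at least one burglary occur in that cell the following week?
y_train = [1, 0, 1, 0]

# Step 2: data modelling. The training phase learns patterns linking indicators to risk.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Prediction phase: the output is a probability, i.e. a risk score per grid cell,
# not a statement of fact about where crime will happen.
new_cells = [[4, 0.5, 1], [0, 4.0, 0]]
for cell, score in zip(new_cells, model.predict_proba(new_cells)[:, 1]):
    print(f"grid cell {cell}: estimated risk {score:.2f}")
```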

2. THE RELEVANT LEGAL FRAMEWORK
Within the EU, the protection of personal data is viewed as a fundamental right.14 The General Data Protection Regulation (GDPR) regulates the collection, processing and usage of personal data in the European Economic Area. The Law Enforcement Directive 2016/680 (LED) acts as a lex specialis to the GDPR and applies to police and judicial cooperation in criminal matters (including crime prevention) and data processing. Importantly, Art. 27 requires the competent authorities (e.g., the police) to carry out a Data Protection Impact Assessment (DPIA) if the data processing may harm the rights of European citizens. A DPIA must contain a human rights assessment and a proposal on how to mitigate those risks.

In 2021, the EU proposed the Artificial Intelligence Act (AIA), which is intended to become a key piece in the regulation of AI. Its aim is twofold: facilitating innovation by harmonising existing national laws regarding AI, while at the same time protecting fundamental rights in the digital realm.15


The AIA proposal has overall been welcomed by experts, as it is the world's first legal framework for the responsible development, deployment and use of AI. The proposal differentiates four risk levels regarding AI applications: (1) unacceptable risk, (2) high risk, (3) limited risk and (4) minimal risk. Under Article 5, the proposal recommends the prohibition of unacceptable risks. This includes the practice of so-called 'social scoring' (e.g. on the basis of people's social behaviour and/or characteristics) by public authorities, and, with some exceptions, the use of 'real-time' remote biometric identification systems in public spaces (i.e. facial recognition). The AIA establishes that AI systems used by law enforcement, including predictive policing models, are 'high-risk' and shall be subject to specific transparency and fundamental rights requirements related to data quality, technical documentation, transparency and information, human oversight, robustness, accuracy and cybersecurity. High-risk applications intended for the biometric identification of natural persons are subject to third-party conformity assessment; for all other high-risk systems (including predictive policing) a self-assessment suffices.

3. PREDICTIVE POLICING IN
THE EUROPEAN UNION

Predictive policing is currently applied by police departments in a number of European countries, including the Netherlands, Germany, Austria, France, Estonia and Romania. Other EU Member States, such as Luxembourg, Portugal and Spain, are currently investigating the possibilities for the implementation of predictive policing.16

Currently, predictive policing is primarily used to prevent domestic burglary and car theft. In this field, the Netherlands is viewed as a pioneer as it is the first country in the world deploying predictive policing on a national scale.17 Its Crime Anticipation System (CAS) initially targeted so-called 'high impact crimes', i.e. domestic burglaries, robberies, and mugging, but now also covers pickpocketing, car burglaries, violent crimes, commercial burglaries and bicycle theft.18 It combines demographic and socioeconomic data from three sources: (1) the Central Crime Database, (2) the Municipal Administration, and (3) the Central Bureau of Statistics of the Netherlands. Data is displayed in the form of so-called 'heat maps', charting areas of increased crime risk which ultimately drive policing interventions.19 Precobs in Germany mainly targets residential burglary by means of historical data, usually of the last five years.20 Austria and France deploy predictive policing to detect residential and vehicle burglary.21 Austria uses historical crime data (the offence type, time, location, modus operandi and place information). The output is demonstrated on a thematic dashboard showing offences, hotspots, statistics, reports and prevention measures. In France, the input comprises filed complaints, historical crime statistics and geolocations of burglaries and car thefts of the last seven to ten years. Data may include meteorology and national statistics in the near future. The output is displayed on a map on which a blue to red gradient indicates where an offence is likely to occur.

Estonia stands out in that it deploys predictive policing to predict event-based, area-based and person-based crimes. Input includes previous crime data (type, time and place), data related to border crossing (place, time, migration status and related documentation) and unnatural deaths (drug related, traffic accidents and homicides). Romania uses predictive policing to predict area-based and person-based crimes.

4. IMPORTANT CONSIDERATIONS TO KEEP IN MIND
There are a few caveats when applying AI for the purpose of predictive policing, including problems with transparency and accountability, possible bias, especially automation bias, and positive 'feedback loops'. The following section introduces and discusses these challenges to provide clarity on the existing shortcomings.

- Transparency: the 'black-box' problem

The way in which machine-learning models generate results can be opaque.22 This stems from a number of factors, which often overlap. Algorithms are often very complex and thus difficult to grasp for end users. Additionally, self-learning models may take decisions on the basis of rules they have set for themselves. Finally, a degree of opacity may be built in by developers as an intentional form of self-protection.23 ML algorithms collect and process vast amounts of data and keep learning during the calculations. Steps made by the ML algorithm are too complex to retrace for humans, even for those who designed the algorithm. In other words, it becomes impossible, both in theory and in practice, to unveil the reasons behind a specific result or decision. ML algorithms are therefore often depicted as 'black boxes'.

What is the 'black box'?

The 'black box' metaphor has been evoked by academics in discussing AI. Due to its high complexity and extensive data input, we often cannot understand, even in hindsight, why an algorithm has made a certain decision. The 'black-box' phenomenon in this context symbolises a system in which we can only observe its input and output, with the decision-making itself remaining secret.
Another factor that negatively affects transparency is the fact that developers may keep the initial data input and algorithms hidden from users, for reasons of self-protection or in pursuit of a competitive advantage: secrecy, the argument goes, can get you ahead of your commercial opponents.24 For instance, PredPol, the pioneer in predictive policing software from the US, makes use of secretive proprietary algorithms.25 Limited transparency makes it exceedingly difficult, for policy-makers and citizens alike, to comprehend and appreciate AI-induced predictions.

- Accountability: the 'many hands' problem

The 'black-box' problem feeds into the second issue relating to AI, sometimes referred to as the 'many hands' problem, which describes a scenario in which a range of individuals and organisations are involved in the development and deployment of complex systems. As this is often the case with AI products in general and predictive policing in particular, it is often impossible to unambiguously identify who is to blame for the harms and fundamental rights violations resulting from the AI implementation in predictive policing.26 A pertinent example is the risk assessment tool 'COMPAS' used in the US court system, which the non-profit ProPublica has revealed to be not only ineffective in predicting criminal behaviour but also discriminatory against black defendants. ProPublica demonstrated that the application wrongly considered black defendants to be twice as likely to commit crimes as white defendants.27 The developers of COMPAS disputed ProPublica's interpretation of the results, leaving the issue unresolved to this day.28

- Bias

The third issue of AI is its potential for bias. We can generally identify two sources of bias when it comes to AI systems: (a) algorithmic bias and (b) Big Data bias. The former refers to the bias of the algorithm developers, builders and engineers. Whether consciously or unconsciously, the (predominantly male and white) developers' views and beliefs may ring through in the algorithm.29 The latter refers to the bias in the data itself, which even in the age of Big Data may not be representative.30 In the context of predictive policing, so-called 'gender-neutral' risk assessments can overstate the recidivism risk of women because women tend to reoffend less often than men.31 Datasets can also disproportionately target minority groups. If minority neighbourhoods have been overpoliced in the past, more crime would have been found there than in other areas.32

What this means in practice is that skewed datasets combined with algorithms that propagate existing biases can yield false positives. Racial profiling, which is illegal in the EU, becomes entrenched in predictive policing.33 Some researchers have, for instance, reported that PredPol was more likely to target low-income, black communities compared to affluent, white communities with similar rates of drug crimes in the United States.34

- Automation bias & positive 'feedback loops'

A side effect of AI is the phenomenon of automation bias, in which humans tend to rely uncritically on computer-generated solutions. This is largely because humans tend to regard automated systems as superior.35 Even when contradictory information is available, humans tend to defer to automated decisions, either because they ignore or fail to verify that information.36 The automation bias is even stronger in case of doubt.37 It goes without saying that this might lead to false positives.

False positives are furthermore susceptible to 'positive feedback loops' which can further exacerbate existing biases and exclusions. This occurs, for example, when the system is (unconsciously) trained to recognise people of a certain age, skin colour or from a certain neighbourhood as potential criminals. The system then blindly labels this bias as the ground truth. Now, a positive feedback loop is established, whereby not only the personal biases of the operator are reinforced, but also those of the machine-learning system.38 Similarly, it can indicate certain areas as crime-ridden, resulting in increased police visits and subsequent arrests. This, in turn, teaches the algorithms that these are areas the police should be concentrating on, regardless of the actual crime rate. Its effects are twofold. First, it pushes the police to focus on the wrong priorities, leading to significant security misses. Second, the algorithm learns that it is 'correct' in associating race, ethnicity and/or socio-economic status with criminality, and will therefore rely more heavily on this association in subsequent predictions. This can ultimately lead to the wrongful stigmatisation and discrimination of individuals, environments, and community areas.
Positive feedback loops: an example

An example of this limitation was Microsoft's very short-lived experience in creating 'Tay', an artificial intelligence chatbot designed to interact with humans. Users could follow and interact with the bot @TayandYou on Twitter and it would tweet back, learning as it went from other users' posts. As soon as people understood how Tay worked, they started tweeting the bot hateful content. Not long after, the bot started to repeat and produce racist, anti-Semitic, and sexist hate speech. Less than 24 hours after the launch, Microsoft shut Tay down and put out a statement that it was 'deeply sorry' for the bot's racist and sexist tweets.39
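Beyond chatbots, the patrol dynamic described above can be made tangible with a stylised simulation. All numbers below are invented for illustration: both areas have the same underlying crime rate, yet the area that starts with more recorded incidents attracts more patrols and therefore accumulates ever more recorded crime.

```python
# Illustrative simulation of a positive feedback loop (all numbers are invented).
import random

random.seed(0)

TRUE_CRIME_RATE = 0.3                    # identical underlying crime rate in both areas
recorded = {"area_A": 10, "area_B": 5}   # area A starts with more recorded incidents

for week in range(20):
    total = sum(recorded.values())
    # The "model": predicted risk is simply the share of past recorded crime.
    # Patrols are allocated proportionally to that prediction.
    patrols = {area: round(10 * count / total) for area, count in recorded.items()}
    for area, n_patrols in patrols.items():
        # More patrols means more opportunities to record crime,
        # even though the underlying rate is the same everywhere.
        recorded[area] += sum(random.random() < TRUE_CRIME_RATE for _ in range(n_patrols))

print(recorded)  # area_A ends up with a much larger recorded count than area_B
```

Even with identical true rates, the more heavily patrolled area looks more 'crime-ridden' in the data, which is precisely the signal the model then learns from.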

RECOMMENDATIONS

There are significant risks associated with the application of AI in predictive policing. Banning predictive policing would help little to solve these problems: prejudice and bias existed long before the emergence of AI and Big Data. We should aim for a more nuanced perspective. AI should not be viewed as a 'panacea' in crime prevention, yet at the same time, its potential benefits should not be ignored either. A productive use of AI in predictive policing with beneficial outcomes is dependent on a human rights compliant use of AI which keeps in mind the critical areas broken down above: transparency, accountability and bias.

1) Avoid the transparency problem

To boost transparency, practitioners should ensure that their algorithms are explainable as well as accessible. This starts with the citizens' right to know that they might be subjected to algorithms in their area. Citizens should have access to information about the data collection, data processing, the purpose of the data collection and processing, and the developer and user of the algorithm. Publishing contact information should allow citizens to ask questions and receive more information. Promising practices in this regard have been put forth by the City of Helsinki, Finland, and the City of Amsterdam in the Netherlands, which have been the first cities in the world to launch open AI registries. These online registries offer an overview of existing artificial intelligence systems and the algorithms used by the municipal government. For example, the Amsterdam Algorithm Register contains information on applications ranging from automated parking control to illegal holiday housing. The registries' central aim, according to the two municipalities, is to 'be open and transparent about the use of algorithms'.40 Besides outlining the data collection and processing, the registries specifically state how their algorithms avoid discrimination, the risks and safeguards, and how human supervision is implemented.
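To give an idea of what such a register entry could look like in practice, the sketch below lists the kind of fields discussed above. The field names and values are hypothetical and do not reproduce the actual Amsterdam or Helsinki schema.

```python
# Hypothetical register entry, loosely modelled on the idea of an open AI register.
# Field names and values are invented for illustration only.
register_entry = {
    "name": "Area-based burglary risk model",
    "purpose": "Support patrol planning by estimating weekly burglary risk per grid cell",
    "data_collected": ["historical crime reports (time, place, type)", "socio-economic indicators"],
    "data_processing": "Weekly batch training of a supervised classifier on five years of data",
    "developer": "In-house data science unit",
    "user": "Regional police department",
    "non_discrimination_measures": "No protected attributes used; quarterly bias audit",
    "human_oversight": "Risk scores reviewed by an analyst before patrols are assigned",
    "contact": "algorithm-register@example.org",
}

for field, value in register_entry.items():
    print(f"{field}: {value}")
```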

To further facilitate transparency, it is imperative to rely on in-house software developers rather than commercial companies in the development of predictive policing software. France and Estonia implemented promising practices in this regard, as they already deploy in-house software developers.41 Hiring specialised personnel with a background in computer science may be costly and time consuming, but at the same time it allows law enforcement to keep control of the entire development process and the resulting algorithm, and thus avoid the black box problem. If the employment of commercial parties cannot be avoided, developers must be required to make the data and code available for critical scrutiny, if necessary through regulatory means.

2) Avoid the accountability problem

To address the accountability problem, independent oversight bodies must be established. These bodies should be adequately funded and staffed. The United Kingdom has implemented a promising practice in this respect, as their oversight bodies strengthened and increased trust in the police. The body's responsibility extends beyond the examination of algorithms to all aspects of data usage by the police, including the means of data collection, the purpose, the data processing and storage, and the use of the results (including secondary use).42 Most Member States already have oversight bodies in place; where necessary, their mandates should be expanded to cover all forms of data collection and processing in the framework of predictive policing, and they should be provided with the necessary tools, resources and expertise.

3) The results of AI are conjecture in the realm of probability

To further ensure maximum accountability, full automation of predictive policing should effectively be ruled out. Humans must always be the ultimate decision-makers with respect to intervention. Algorithmic output should not be read as conclusive 'facts', but rather as constructed probabilities which can, and sometimes must, be overridden. It is important to consider that probabilities are just that: probabilities, not to be confused with certainties. AI is most definitely not a future-predicting oracle, and especially in light of false positives, critical reflection must be embraced and promoted. AI can discover correlations that are not apparent at first sight, which can support policing frameworks by presenting probabilities. Predictive policing must thus remain a complementary law enforcement tool in crime prevention strategies, and never replace long-term programmes that address the root causes of crime.

4) Measure effectiveness

Another way to boost accountability is by investing in detailed comparative studies on the use and implementation of predictive policing. Effectiveness is one of the most understudied aspects of the application of predictive policing. Moreover, the lack of uniform criteria makes it difficult to translate evaluation results to different settings. Evaluation studies may include or exclude different variables, e.g. the type of predictive policing (area-based, event-based and person-based), the type of data used (e.g. with or without facial recognition), the objective of the application (i.e. risk assessment or risk reduction), and circumstantial conditions (e.g. trust in the police and business interests of developers). A programme can be highly accurate in risk assessment but perform poorly in overall risk reduction. There are thus many potential confounding factors that inhibit the establishment of clear cause-and-effect relationships. This makes it difficult to determine whether AI applications in predictive policing are ultimately effective and serve the purpose of rendering policing more efficient and legitimate, or whether they fail to do so and instead contribute to disproportionate surveillance. Transparent evaluations and detailed comparative studies are needed to create an evidence base regarding the costs and benefits of AI applications in predictive policing.

5) Addressing the problem of Big Data bias

It goes without saying that the quality of the data determines the quality of the output. A data collection and quality strategy can mitigate many problems. Monitoring the data quality and collection is paramount to avoiding bias and discriminatory applications. In this respect, inspiration can be taken from Austria and Estonia, which assess their data quality on a regular basis.43 Estonia, for instance, has a dedicated analysis unit to monitor data collection and to make proposals for the improvement of the software and data quality. Police personnel and software operators who enter the data, manipulate it or interpret the results should be adequately trained, and such training should be institutionally embedded. The training should devote specific attention to the limitations of the algorithms, particularly the possibility of false positives and automation bias, as well as to the individual and institutional responsibilities in interpreting the results. A promising practice in this regard can be found in Austria, which offers crime analysis courses to police personnel. Finally, it is promising that, according to the EUCPN questionnaire, European law enforcement agencies are generally aware of the risks involved in predictive policing and the need to act responsibly.

GLOSSARY OF TERMS

Algorithm - Sequence of formal rules (logical operations, instructions) applied to input data in order to solve a problem.

Artificial intelligence (AI) - A set of scientific theories and techniques whose purpose is for a machine (a computer) to reproduce the cognitive abilities of a human being, with the aim of supporting decision-making processes or making predictions.

Artificial Neural Network (deep learning) - Algorithmic system design based on neurons in the human brain. Neural nets are characterised by the presence of one or several hidden layers of interconnected nodes (neurons) between the input and the output, the output of each of which may serve as input for the others. This creates very smart but potentially opaque AI systems.

Big Data - The term "big data" refers to a large heterogeneous data set (open data, proprietary data, commercially purchased data), as well as the possibilities offered by AI to handle such datasets.

Machine Learning - Machine learning is a subfield of AI concerned with applications that become 'smarter', i.e. more accurate, as they are being used (hence 'learning'). The applications process the input in ways that are not explicitly programmed to produce the output.

Personal Data - Any information relating to an identified or identifiable natural person. In the EU, any data, even when encrypted, that could lead to the identification of a person is considered personal data and falls within the scope of the GDPR.

Personal Data Processing - Any operation or set of operations applied to personal data or sets of data, including collecting, recording, structuring, storing, modifying, retrieving, consulting and sharing personal data.
ENDNOTES

1. Preamble of the Montréal Declaration for Responsible AI: https://www.montrealdeclaration-responsibleai.com/
2. S. Egbert and M. Leese, Criminal Futures: Predictive Policing and Everyday Police Work, London: Routledge, 2020, 242.
3. W.L. Perry, B. McInnis, C.C. Price et al., Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations, Washington DC: RAND Corporation, 2013.
4. W. Hardyns and A. Rummens, Predictive Policing as a New Tool for Law Enforcement? Recent Developments and Challenges, European Journal on Criminal Policy and Research 24:3 (2018), 201-18, https://dx.doi.org/10.1007/s10610-017-9361-2.
5. C.-P. Yen and T.-W. Hung, Achieving Equity with Predictive Policing Algorithms: A Social Safety Net Perspective, Science and Engineering Ethics 27:3 (2021), art. no. 36, https://dx.doi.org/10.1007/s11948-021-00312-x.
6. T.-W. Hung and C.-P. Yen, On the Person-Based Predictive Policing of AI, Ethics and Information Technology 23:3 (2021), 165-76, https://dx.doi.org/10.1007/s10676-020-09539-x.
7. T.Z. Zarsky, Governmental Data Mining and Its Alternatives, Dickinson Law Review 116:2 (2011), 285-330.
8. Hayward & Maas (2021).
9. R. van Brakel, Pre-Emptive Big Data Surveillance and Its (Dis)Empowering Consequences: The Case of Predictive Policing, in: B. van der Sloot, D. Broeders, and E. Schrijvers (Eds.), Exploring the Boundaries of Big Data, Amsterdam: Amsterdam University Press, 2016, 117-41.
10. A.G. Ferguson, Policing Predictive Policing, Washington University Law Review 94:5 (2017), 1109-89, https://journals.library.wustl.edu/lawreview/article/id/3851/; A. Rummens and W. Hardyns, The Effect of Spatiotemporal Resolution on Predictive Policing Model Performance, International Journal of Forecasting 37:1 (2021), 125-33, https://doi.org/10.1016/j.ijforecast.2020.03.006.
11. EUCPN Questionnaire.
12. Hardyns and Rummens, Predictive Policing as a New Tool for Law Enforcement? Recent Developments and Challenges.
13. Yen and Hung, Achieving Equity with Predictive Policing Algorithms: A Social Safety Net Perspective.
14. European Union, Charter of Fundamental Rights, Brussels, 2012, http://data.europa.eu/eli/treaty/char_2012/oj.
15. On the proposed Artificial Intelligence Act, see the following website maintained by the Future of Life Institute: https://artificialintelligenceact.eu/
16. EUCPN Questionnaire.
17. L. Strikwerda, Predictive Policing: The Risks Associated with Risk Assessment, The Police Journal 94:3 (2020), 422-36, https://dx.doi.org/10.1177/0032258X20947749.
18. Hardyns and Rummens, Predictive Policing as a New Tool for Law Enforcement? Recent Developments and Challenges.
19. I. Mugari and E.E. Obioha, Predictive Policing and Crime Control in the United States of America and Europe: Trends in a Decade of Research and the Future of Predictive Policing, Social Sciences 10:6 (2021), 234, https://dx.doi.org/10.3390/socsci10060234.
20. Ibid.
21. EUCPN Questionnaire.
22. D. Castelvecchi, Can We Open the Black Box of AI?, Nature 538:7623 (2016), 20-3, https://dx.doi.org/10.1038/538020a.
23. J. Burrell, How the Machine 'Thinks': Understanding Opacity in Machine Learning Algorithms, Big Data & Society 3:1 (2016), 1-12, https://dx.doi.org/10.1177/2053951715622512.
24. Ibid.
25. K. Blair, P. Hansen, and L. Oehlberg, "Participatory Art for Public Exploration of Algorithmic Decision-Making" (paper presented at the Companion Publication of the 2021 ACM Designing Interactive Systems Conference, Virtual Event, USA, 2021), 23-6, https://doi.org/10.1145/3468002.3468235.
26. K. Yeung, A Study of the Implications of Advanced Digital Technologies (Including AI Systems) for the Concept of Responsibility within a Human Rights Framework, Strasbourg: Council of Europe, https://rm.coe.int/a-study-of-the-implications-of-advanced-digital-technologies-including/168096bdab.
27. S. Buranyi, Rise of the Racist Robots - How AI Is Learning All Our Worst Impulses, 8 Aug. 2017, https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses (Accessed 2 Aug. 2022).
28. R. Rieland, Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased?, 5 Mar. 2018, https://www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/ (Accessed 2 Aug. 2022).
29. S. Dillon and C. Collett, AI and Gender: Four Proposals for Future Research, 2019, https://dx.doi.org/10.17863/CAM.41459.
30. K. Crawford, K. Miltner, and M.L. Gray, Critiquing Big Data: Politics, Ethics, Epistemology, International Journal of Communication 8 (2014), 1663-72.
31. Dillon and Collett, AI and Gender: Four Proposals for Future Research.
32. A. Christin, Predictive Algorithms and Criminal Sentencing, in: D. Bessner and N. Guilhot (Eds.), The Decisionist Imagination: Sovereignty, Social Science and Democracy in the 20th Century, New York: Berghahn, 2018.
33. W.D. Heaven, Predictive Policing Algorithms Are Racist. They Need to Be Dismantled, 17 July 2020, https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/ (Accessed 2 Aug. 2022).
34. K. Lum and W. Isaac, To Predict and Serve?, Significance 13:5 (2016), 14-9, https://doi.org/10.1111/j.1740-9713.2016.00960.x.
35. A. Završnik, Algorithmic Justice: Algorithms and Big Data in Criminal Justice Settings, European Journal of Criminology 18:5 (2019), 623-34, https://dx.doi.org/10.1177/1477370819876762.
36. L.J. Skitka, K. Mosier, and M.D. Burdick, Accountability and Automation Bias, International Journal of Human-Computer Studies 52:4 (2000), 701-17, https://dx.doi.org/10.1006/ijhc.1999.0349.
37. L.J. Skitka, K.L. Mosier, and M. Burdick, Does Automation Bias Decision-Making?, International Journal of Human-Computer Studies 51:5 (1999), 991-1006, https://doi.org/10.1006/ijhc.1999.0252.
38. S.D. Ramchurn, S. Stein, and N.R. Jennings, Trustworthy Human-AI Partnerships, iScience 24:8 (2021), https://dx.doi.org/10.1016/j.isci.2021.102891.
39. A. Kraft, Microsoft Shuts Down AI Chatbot after It Turned into a Nazi, 25 Mar. 2016, https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/ (Accessed 2 Aug. 2022).
40. Helsinki and Amsterdam First Cities in the World to Launch Open AI Register, 20 Sept. 2020, https://news.cision.com/fi/city-of-helsinki/r/helsinki-and-amsterdam-first-cities-in-the-world-to-launch-open-ai-register,c3204076 (Accessed 2 Aug. 2022).
41. EUCPN Questionnaire.
42. K. Macnish, D. Wright, and T. Jiya, Predictive Policing in 2025: A Scenario, in: H. Jahankhani, B. Akhgar, P. Cochrane, and M. Dastbaz (Eds.), Policing in the Era of AI and Smart Societies, Cham: Springer International Publishing, 2020, 199-215.
43. EUCPN Questionnaire.

[1] For a more extensive glossary, see the Council of Europe Glossary on Artificial Intelligence: https://www.coe.int/en/web/artificial-intelligence/glossary

Contact details
EUCPN Secretariat
Email: eucpn@ibz.eu
Website: www.eucpn.org

twitter.com/eucpn
facebook.com/eucpn
linkedin.com/company/eucpn
