AI and Policing-QL0124000ENN
Neither the European Union Agency for Law Enforcement Cooperation nor any person acting
on behalf of the agency is responsible for the use that might be made of the following information.
For any use or reproduction of photos or other material that is not under the copyright of
the European Union Agency for Law Enforcement Cooperation, permission must be sought
directly from the copyright holders.
While best efforts have been made to trace and acknowledge all copyright holders, Europol
would like to apologise should there have been any errors or omissions. Please do contact us
if you possess any further information relating to the images published or their rights holder.
Cite this publication: Europol (2024), AI and policing – The benefits and challenges of artificial
intelligence for law enforcement, Europol Innovation Lab observatory report,
Publications Office of the European Union, Luxembourg.
This publication and more information on Europol are available on the Internet.
www.europol.europa.eu
Contents

Foreword
Executive Summary
Introduction
Background
Objectives
Digital forensics
Computer vision and biometrics
Video monitoring and analysis
Image classification
Biometrics
Biometric categorisation
Implications for law enforcement agencies
Innovation and regulatory sandboxes
Conclusion
AI Glossary
Endnotes
Foreword

Artificial Intelligence (AI) will profoundly alter the landscape of law
enforcement, offering innovative tools and opportunities to enhance
our capabilities in safeguarding public safety. This flourishing
technological field promises to revolutionise how we analyse
complex data sets, improve forensic methodologies, and develop
secure communication channels.
I hope that this report will contribute to shedding light on the intricate
dynamics of AI for policing, providing valuable insights for our
stakeholders and helping the law enforcement community on its
path towards adopting AI’s potential responsibly. Together, we
embark on this journey, ready to face the challenges and seize
the opportunities that the AI revolution presents, ensuring that we
continue to protect and serve our communities in an increasingly
digital world.
AI AND POLICING: THE BENEFITS AND CHALLENGES OF ARTIFICIAL INTELLIGENCE FOR LAW ENFORCEMENT
Catherine De Bolle
Executive Director of Europol
Executive Summary

This report aims to provide the law enforcement community with
a comprehensive understanding of the various applications and
uses of artificial intelligence (AI) in their daily operations. It seeks
to serve as a textbook for internal security practitioners, offering
guidance on how to responsibly and compliantly implement AI
technologies. In addition to showcasing the potential benefits and
innovative applications of AI, such as AI-driven data analytics, the
report also aims to raise awareness about the potential pitfalls and
ethical considerations of AI use in law enforcement. By addressing
these challenges, the report endeavours to equip law enforcement
professionals with the knowledge necessary to navigate the
complexities of AI, ensuring its effective and ethical deployment
in their work. The report focuses on large and complex data sets,
open-source intelligence (OSINT) and natural language processing
(NLP). It also delves into the realm of digital forensics, computer
vision, biometrics, and touches on the potential of generative AI.
Introduction

Background
1 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024
laying down harmonised rules on artificial intelligence and amending Regulations (EC)
No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and
(EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial
Intelligence Act)
adaptive mechanism in those cases in which the application
of AI-based technology could cause harm to the rights
and freedoms of data subjects. By creating controlled
environments where new AI tools can be tested and refined
using representative data, without real-world consequences,
law enforcement agencies can ensure that these tools
meet both operational and regulatory standards before
being deployed. This flexible approach allows for real-
time adjustments and fosters an innovative environment
of continuous improvement, positioning European law
enforcement at the forefront of AI-driven policing.
Objectives
Key takeaways for law enforcement

AI has the ability to significantly transform policing; from
advanced criminal analytics that reveal trends in vast amounts
of data, to biometrics that allow the prompt and unique
identification of criminals.
Compliance with the EU AI Act represents a crucial balancing
act, as it requires law enforcement to adhere to stringent
ethical, legal, and privacy standards, potentially necessitating
the reassessment of existing AI tools.
Applications of AI in law enforcement

AI technology has the ability to completely transform policing; from
advanced criminal analytics that reveal trends in vast amounts of
data, to biometrics that allow the prompt and unique identification
of criminals. This section explores some of the major applications
of AI within the field of law enforcement. Through this, we aim to
provide insight into the present and future capabilities that AI offers
policing, projecting a course for a more efficient, responsive and
effective law enforcement model.
Data analytics
oversights, prolonged investigations, and missed opportunities to
apprehend criminals. For example, simply going through the volume
of data generated by a single smartphone is impossible without
technical assistance.
The following sections will delve into the concepts of large and
complex datasets, OSINT/SOCMINT (Open Source Intelligence/
Social Media Intelligence) and Natural Language Processing (NLP)
and how they can reshape modern law enforcement practices.
This list is not exhaustive. New use cases of analysing large and
complex datasets within law enforcement will emerge as the
technology and criminal landscape evolves.
Large and complex datasets in operations:
In 2020, the joint efforts of French and Dutch law enforcement, supported by Europol6, led to the
successful dismantling of the encrypted communications tool EncroChat. This operation not only
dealt a severe blow to criminal networks, but also demonstrated the crucial role of analysing large and
complex datasets in unravelling the intricate web of criminal activities at a global scale.
EncroChat, intended as a network for providing perfect anonymity, discretion, and no traceability to
users, served as a key tool for organised crime groups (OCGs) worldwide. EncroChat-enabled phones,
priced at approximately EUR 1000 each, offered features like automatic message deletion and remote
device wiping capabilities, making them indispensable for criminals seeking secure communications.
Following the dismantling, investigators managed to intercept, share and analyse over 115 million criminal
conversations from an estimated 60 000 users. User hotspots were prevalent in source
and destination countries for the trade in illicit drugs, as well as in money laundering centres7.
The success of this operation underscores the transformative impact of analysing large and complex
datasets. The vast dataset comprising millions of messages became a critical asset in dismantling
criminal networks. Through advanced analytics, law enforcement agencies were able to identify
patterns, connections, and hotspots, leading to the arrest of 6 558 suspects, including 197 High Value
Targets. The scale of this data-driven approach is evident in the seizure of criminal funds totalling EUR
739.7 million, the freezing of EUR 154.1 million in assets, and the confiscation of substantial quantities of
drugs, vehicles, weapons, and properties.
The EncroChat takedown serves as a paradigm for the effective integration of large and complex
dataset analytics in combating organised crime. Europol’s commitment and collaboration with various
stakeholders showcases the power of collaborative efforts and data-driven intelligence in disrupting
criminal activities around the world. It shows that this can be done while adhering to European data
protection and human rights standards, with consultation from the European Data Protection Supervisor
(EDPS). This operation stands as a testament to the evolving landscape of law enforcement, where
advanced analytics play a pivotal role in dismantling criminal networks and upholding the rule of law.
Making the most of what AI solutions have to offer does not rest
solely on the technology itself. Crucially, AI systems may only run
properly on appropriate, extensive technological infrastructure. This
requires significant budget and specific expertise to create and run,
which can be challenging to obtain, especially for smaller agencies8.
complex datasets. Collaboration and data sharing among agencies,
as well as the development and adoption of common standards
are vital to harnessing the full potential of data-driven insights, but
achieving this in practice often proves difficult.
PREDICTIVE POLICING
Within law enforcement, decision-making processes are
increasingly reliant on intelligence derived from large and complex
datasets11. A recent advancement is “predictive policing”, employing
sophisticated statistical methods to extract valuable new insights
from vast datasets, for instance on crime records, events and
environmental factors identified in criminological insights. This
approach empowers police agencies to identify patterns related to
the occurrence of crime and unsafe situations, and to deploy forces
according to these insights to minimise risks.
2 A system designed to achieve Artificial Intelligence (AI) via a model solely based on
predetermined rules. Two important elements of rule-based AI models are “a set of rules”
and “a set of facts”; by using these, developers can create a basic Artificial Intelligence
model. These systems can be viewed as a more advanced form of robotic process
automation (RPA). Rule-based AI models are deterministic by their very nature, meaning
they operate on the simple yet effective ‘cause and effect’ methodology. This model can
only perform the tasks and functions it has been programmed for and nothing else. Due to
this, rule-based AI models only require very basic data and information in order to operate
successfully (Source: https://wearebrain.com/blog/rule-based-ai-vs-machine-learning-
whats-the-difference/).
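The footnote's description of a rule-based model can be illustrated with a minimal sketch. All facts, rules, and names below are invented for illustration; this is a toy forward-chaining engine, not any operational system:

```python
# Minimal rule-based "AI" sketch: a set of facts plus a set of if-then
# rules, applied repeatedly until no new facts emerge (a fixed point).
# Facts and rules here are invented for illustration only.

facts = {"message_deleted_automatically", "device_wipe_enabled"}

# Each rule: (frozenset of required facts, fact to conclude)
rules = [
    (frozenset({"message_deleted_automatically", "device_wipe_enabled"}),
     "hardened_device"),
    (frozenset({"hardened_device"}), "flag_for_review"),
]

def forward_chain(facts, rules):
    """Apply rules until no rule adds a new fact: deterministic cause and effect."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for required, conclusion in rules:
            if required <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain(facts, rules)
print(sorted(derived))
```

As the footnote notes, such a system can only perform what it was programmed for: adding a fact outside the rule set changes nothing in the output.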
connections between locations, occurrences, and historical crime
statistics to forecast the likelihood of crimes occurring at specific
times and places. For instance, they can predict increased crime
rates during certain weather conditions or at major sporting events.
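At its simplest, location-based forecasting of this kind rests on counting historical incidents per place and time slot. The sketch below is a minimal frequency model, far simpler than operational systems; the incident records (area names, hours) are invented for illustration:

```python
# Toy hotspot analysis: count historical incidents per (area, hour)
# slot and rank the busiest slots. Incident data is invented; real
# systems combine many more data sources and statistical methods.
from collections import Counter

incidents = [
    ("centre", 22), ("centre", 23), ("centre", 22),
    ("harbour", 3), ("harbour", 2), ("park", 21),
]

def hotspots(incidents, top=2):
    """Return the (area, hour) slots with the most recorded incidents."""
    return Counter(incidents).most_common(top)

print(hotspots(incidents))  # most frequent slot first
```

Forces could then be deployed preferentially to the top-ranked slots, which is the essence of the deployment logic described above.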
Individual-based predictive policing anticipates persons most likely
to engage in criminal activities. This approach has gained traction
in various EU member states, including the Netherlands, Germany,
Austria, France, Estonia, and Romania, with others exploring its
potential implementation14.
Real-world applications:
The Dutch Police developed and operationalised the Crime Anticipation System (CAS)15 to address
a range of crimes beyond initial targets, including domestic burglary, robberies, pickpocketing,
car burglaries, violent crimes, commercial burglaries, and bicycle theft. The system conducts
weekly analyses using both local and recent data, enhancing this with external information about
neighbourhoods and their inhabitants. This is further enriched by the police’s own insights into criminal
activities, local conditions, and data from Statistics Netherlands. It aims to identify crime patterns,
such as a higher frequency of bicycle thefts in a specific area occurring between 9:00 p.m. and
midnight. With these insights, the police can allocate their resources more efficiently and tackle these
crimes more effectively, as reported by local news16.
In conclusion, predictive policing represents a transformative
approach to law enforcement through the integration of AI
technologies. As its implementation continues to evolve,
policymakers must navigate the delicate balance between
harnessing the potential benefits and addressing the ethical and
legal concerns associated with this innovative policing tool.
ideologies. Moreover, AI-enabled systems can be trained on
known propaganda materials to proactively spot new content that
shares similar characteristics and signal this to law enforcement
officers, ensuring a more responsive and efficient takedown of
harmful content before it spreads. It should be noted that, under the
new Digital Services Act (DSA), OSPs are not only encouraged but
required to enhance their monitoring capabilities to ensure safer
digital environments21. The DSA mandates a higher degree of
accountability and transparency from platforms, pushing OSPs to
disclose their content moderation practices and outcomes22.
similar texts are close to each other. In this task, no labels are
required. Clustering can also consider factors such
as time and location, to provide a holistic view of crime trends.
For example, in burglary scenarios, clustering could reveal
emerging methods, like hooking keys through letterboxes or
exploiting particular lock weaknesses.
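A minimal way to see how "similar texts end up close to each other" without labels is to group incident notes by token overlap. The sketch below uses simple Jaccard similarity rather than learned embeddings; the notes and the 0.3 threshold are invented for illustration:

```python
# Toy unlabelled clustering of incident notes by token overlap
# (Jaccard similarity). Production systems use learned text
# embeddings; notes and threshold here are invented illustrations.
def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    return len(a & b) / len(a | b)

def cluster(texts, threshold=0.3):
    """Greedily assign each text to the first cluster it resembles."""
    clusters = []  # list of (representative token set, member texts)
    for text in texts:
        t = tokens(text)
        for rep, members in clusters:
            if jaccard(t, rep) >= threshold:
                members.append(text)
                break
        else:
            clusters.append((t, [text]))
    return [members for _, members in clusters]

notes = [
    "keys hooked through letterbox at night",
    "keys hooked through letterbox from hallway",
    "rear window lock forced with screwdriver",
]
for group in cluster(notes):
    print(group)
```

The two letterbox notes land in one cluster and the lock-forcing note in another, mirroring how clustering can surface an emerging modus operandi.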
criminal communication, deciphering hidden meanings, or flagging
potentially harmful online content. For example, major concerns in
fighting cybercrime include spotting predatory communications,
identifying internet criminals, and preventing child abuse and
online grooming. NLP can be a game changer for policing in that
regard28. A subtask of NLP, Named Entity Recognition (NER), assists
analysts in labelling entities in crime reports, according to their type
such as persons, organisations and vehicles. This allows for more
refined crime grouping and analysis. In the context of burglaries, for
instance, NER could distinguish between different entry methods,
such as breaking a window versus tampering with a specific lock
type. Furthermore, when sifting through massive databases of
unstructured text, NLP tools can extract crucial information (entity
extraction), allowing the police to act upon critical situations such as
threats to life, promptly and efficiently.
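As a very rough stand-in for the NER and entity extraction described above, the sketch below tags entities in a report with hand-written patterns. Real NER uses trained statistical models, not regular expressions; the entity types, patterns, and report text are all invented for illustration:

```python
# Toy entity tagger: a pattern-based stand-in for learned NER models.
# Entity types, patterns, and the report text are invented; production
# NER relies on trained models rather than regexes.
import re

PATTERNS = {
    "LICENCE_PLATE": r"\b[A-Z]{2}-\d{3}-[A-Z]{2}\b",
    "TIME": r"\b\d{1,2}:\d{2}\b",
    "CASE_ID": r"\bCASE-\d{4,}\b",
}

def extract_entities(text):
    """Return (label, match) pairs for every pattern hit in the text."""
    found = []
    for label, pattern in PATTERNS.items():
        for match in re.findall(pattern, text):
            found.append((label, match))
    return found

report = "CASE-20481: vehicle AB-123-CD seen near the rear entrance at 21:40."
print(extract_entities(report))
```

Labelled entities of this kind (vehicles, times, case identifiers) are what enable the more refined crime grouping and analysis mentioned above.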
Digital forensics
Several tools and techniques for data recovery and analysis have
been developed with AI components. These tools can recover
deleted files, access data from damaged devices, and restore
fragmented pieces of information into coherent formats. Their
efficiency lies in their ability to adapt and learn from each case,
improving accuracy over time.
3 Hash values are akin to digital fingerprints for files. By running a file’s contents through a
cryptographic algorithm, a distinct numerical identifier – the hash value – is generated that
represents the file’s content. Altering the content in any manner would drastically change
this hash value. Presently, the MD5 and SHA-256 algorithms are the predominant methods
for generating these hash values.
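The footnote's point about hash values can be demonstrated directly with Python's standard library: even a one-byte change to the content produces a completely different SHA-256 fingerprint.

```python
# Hash values as digital fingerprints: any change to the content
# yields a completely different SHA-256 digest.
import hashlib

original = b"case file contents"
tampered = b"case file contents."  # one byte appended

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(tampered).hexdigest()

print(len(h1))    # 64 hex characters encode the 256-bit digest
print(h1 == h2)   # the fingerprints no longer match
```

This property is what lets forensic examiners prove that evidence files have not been altered since acquisition.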
By continuously learning from new data, AI models can distinguish
between regular network traffic and potential threats, even if the
malicious activities evolve or employ new tactics.
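A bare-bones version of this idea is a statistical anomaly detector: learn what "regular" traffic looks like, then flag values that sit far outside it. The baseline byte counts and the three-sigma threshold below are invented for illustration; operational systems learn far richer features:

```python
# Minimal statistical anomaly detector: flag traffic volumes far from
# the historical mean. Baseline figures are invented illustrations.
import statistics

baseline = [1200, 1150, 1300, 1250, 1180, 1220, 1275]  # bytes/min, normal
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    return abs(value - mean) > threshold * stdev

print(is_anomalous(1230))   # a typical volume
print(is_anomalous(9800))   # a sudden spike
```

Because the baseline can be recomputed as new data arrives, the detector adapts even when malicious activity changes tactics, which is the adaptive behaviour described above.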
4 A brute force attack employs a method of trial and error to crack login credentials,
encryption keys, or locate concealed web pages. Attackers systematically try every possible
combination in the hopes of making a correct guess. Such attacks are carried out using
‘brute force’, which involves continuous and forceful attempts to break into private accounts.
The advancement of imaging technology, coupled with AI including
ML developments, has transformed the realm of law enforcement.
Some of the potential applications for law enforcement include:
IMAGE CLASSIFICATION
In the realm of computer vision, image classification is increasingly
emerging as a critical field. Essentially, AI tools trained to categorise
images based on the dominant content or objects they detect
help LEAs overwhelmed with imagery to promptly and effectively
analyse this data. Image classification helps swiftly sort through
such data, categorising images into groups such as ‘suspicious’
or ‘non-suspicious’ or even organising them by different themes,
events, or timeframes. This streamlined approach significantly
expedites investigative processes.
BIOMETRICS
In an age where personal identification and verification are of
paramount importance, biometric technologies have ascended
as key instruments in the toolkit of law enforcement. Biometric
technologies allow the identification of individuals, using their
unique physiological (e.g. facial features, fingerprints, iris
patterns) or behavioural attributes (e.g. gait5, handwriting).
Facial Recognition: The technique of using facial images for
criminal identification is as old as modern policing. Until the early
1960s, the procedure was primarily manual and relied on individual
perception and human capacity to recognise familiar faces.
However, advances in imaging technology and computer science33
5 Gait refers to the manner or pattern of movement of the limbs during locomotion over a solid
substrate. Essentially, it is the way an individual walks or moves. Gait analysis is often used
in medical, sports, and rehabilitation contexts to understand and address various issues
related to movement.
allowed for Automated Facial Recognition (AFR)34; computer
algorithms now assist the police, and digital images captured
through various means have long replaced printed photographs.
FACE RECOGNITION IN POLICING: REAL-WORLD USE CASES
Biometric identification technologies, particularly facial recognition,
play a crucial role in law enforcement for prompt and efficient
identification of unknown persons. Two primary scenarios arise within
this context:
• Solving cold cases: In the investigation of a murder case, CCTV
footage identifies a suspect, leading to a facial image search
against a database of known and unknown individuals. Initial
results are negative, but the image is stored. Two years later,
a biometric query triggers a match during another murder
investigation, ultimately linking the suspect to the earlier case. This
demonstrates the power of biometrics in solving cold cases over
time.
• Uncovering child exploitation networks: In another scenario,
police confiscate a child sex offender’s computer, initiating a
biometric analysis of extracted images. Matches with victims from
previous investigations help unveil a broader network of criminals
involved in child exploitation. This underscores the significant
role biometrics play in combating heinous crimes and protecting
vulnerable populations such as missing children.
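The cold-case scenario above boils down to a simple workflow: unmatched biometric templates are retained, and every later query is compared against the stored set. The sketch below uses toy two-dimensional feature vectors and an invented 0.9 cosine-similarity threshold purely for illustration; real biometric templates and matching thresholds are far more complex:

```python
# Sketch of the cold-case workflow: unmatched templates are retained
# and each new query is compared against the stored set. Templates
# are toy vectors; the 0.9 threshold is an invented illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

store = []  # unmatched templates kept for future queries

def query(template, threshold=0.9):
    """Return a stored match if similar enough; otherwise retain the template."""
    for stored in store:
        if cosine(template, stored) >= threshold:
            return stored
    store.append(template)
    return None

first = (0.9, 0.1)           # suspect image from the first investigation
print(query(first))          # no match yet: the template is stored
print(query((0.88, 0.12)))   # a later query matches the stored template
```

The first query returns no match but leaves a template behind; the second, years later in the narrative, triggers the match that links the two cases.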
6 It is essential to clarify the circumstances under which biometric searches are allowed,
especially when a person’s identity is questionable or self-declared. Consideration should
be given to cases where inconsistencies in identification, such as self-declared or fake
identities, may necessitate biometric searches to ensure accurate identification and prevent
potential threats.
and remains unchanged throughout life, making it a reliable means
of identification. Traditional fingerprint analysis relied heavily on
trained experts who would manually compare prints, which was
time-consuming and sometimes subjective36.
of speaking. Voice recognition technology deciphers these minute
differences, converting spoken words into digital models that can be
compared against stored voiceprints. In law enforcement contexts,
this can be utilised to match voice samples from phone calls or
recordings, or to confirm identities in security systems.
Iris scans: The intricate patterns in the iris, the coloured part of
the eye, are as unique as fingerprints. Captured through a simple
photograph, these patterns offer a quick and non-intrusive means of
identification. Although iris identification technologies were initially
adopted for military applications such as biometric registration of
vulnerable populations in battlefields39, adoption rates by law
enforcement are gradually increasing40.
BIOMETRIC CATEGORISATION
A further application of AI that holds potential for law enforcement
is biometric categorisation: systems that facilitate the categorisation
of individuals based on their biometric characteristics have become
increasingly important. These systems, whose application is fundamentally
different from systems used for identification, serve as invaluable
tools for both prevention and investigation.
political orientations, religious beliefs, disabilities, or affiliations to
trade unions. Instead, the focus primarily rests on age and gender
estimation. Still, such estimations, especially when integrated into
high-risk systems, necessitate robust regulatory overview, ensuring
that the technology’s potential is harnessed responsibly and
ethically, especially considering data protection concerns.
analyse past events, crowd dynamics, entry/exit bottlenecks,
and even social media chatter to help design a comprehensive
security plan.
Generative AI
The frontier of AI does not lie in just the analysis of existing data
and information, but also in the creation of entirely new content.
Generative AI, a rapidly advancing domain, employs algorithms to
generate content, including texts, images and other forms of media.
These technologies learn patterns, structures, and intricacies from
vast datasets and then produce new data that adheres to the same
patterns. For instance, after analysing thousands of images of
cats, a generative model can create a new, synthetic image of a
cat that, while entirely fictional, looks indistinguishably real. Some
of the most prominent forms of Generative AI include Generative
Adversarial Networks (GANs) and Large Language Models (LLMs).
to balance the benefits with ethical considerations and privacy
protections, ensuring that the use of synthetic media supports
public safety while respecting individual rights.
Technological limitations and challenges

Despite the benefits for law enforcement, the integration of AI faces
several technical constraints that challenge its effectiveness and
efficiency:

Data quality and accessibility are fundamental to the effectiveness
of AI in law enforcement, but challenges arise from disparities in
data collection and storage practices across jurisdictions. These
variations result in inconsistent datasets that may be incomplete
or biased, compromising the integrity of AI outputs. Additionally,
existing data often lacks the granularity required for AI applications,
as it was not originally collected with AI in mind. For instance,
police reports, though informative, may not capture unreported
or undetected incidents, skewing AI training and outcomes.
Standardised data collection protocols, coupled with data cleansing
and enrichment processes are essential for creating comprehensive
and unbiased datasets. Moreover, integrating robust data protection
measures is crucial to safeguarding individuals’ privacy and
ensuring compliance with applicable data protection regulations.
By addressing these issues, AI reliability in law enforcement can
be improved, better reflecting and addressing the complexity of
criminal activity while upholding ethical and legal standards.
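The standardisation and cleansing steps described above can be sketched very simply: harmonise field names across jurisdictions, drop duplicate records, and reject incomplete entries before any AI training. The field names, aliases, and records below are invented for illustration:

```python
# Sketch of standardising and cleansing records pooled from different
# jurisdictions: harmonise field names, reject incomplete entries,
# and drop duplicates. Field names and records are invented.

FIELD_ALIASES = {"offence": "offence", "crime_type": "offence",
                 "loc": "location", "location": "location"}
REQUIRED = {"offence", "location"}

def standardise(record):
    """Map each known field onto a common schema; discard unknown fields."""
    return {FIELD_ALIASES[k]: v for k, v in record.items() if k in FIELD_ALIASES}

def cleanse(records):
    seen, clean = set(), []
    for record in map(standardise, records):
        key = tuple(sorted(record.items()))
        if REQUIRED <= record.keys() and key not in seen:
            seen.add(key)
            clean.append(record)
    return clean

raw = [
    {"offence": "burglary", "loc": "district 4"},
    {"crime_type": "burglary", "location": "district 4"},  # duplicate, other schema
    {"offence": "theft"},                                  # incomplete: dropped
]
print(cleanse(raw))
```

Only one clean, complete record survives: the two schema variants collapse into a single entry and the incomplete record is rejected, illustrating why common standards across agencies matter.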
technology developers, policymakers, and the community is
crucial to navigate these technological limitations. Through such
collaboration, innovative solutions can be developed, tested, and
refined to enhance the efficiency, reliability, and overall effectiveness
of AI applications in policing practices. Additionally, investing in
research and development, focusing on ethical AI use, and fostering
an environment of continuous learning and adaptation among law
enforcement personnel are key steps toward overcoming these
obstacles.
Data is the core of any AI system, and the quality of the data directly
influences the outcomes produced by the system. Any skew in
data can unintentionally lead to unfair or biased outcomes. Fair and
phrases as offensive. Since these terms are more frequently utilised
by the respective ethnic groups, there is an increased likelihood of
their content being wrongly flagged as offensive and subsequently
removed, due to their overrepresentation in the training data. On the
other hand, groups that are underrepresented in the data may not
benefit from the same level of policing protection.
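One simple way to surface this kind of disparate impact is to compare how often a filter flags content from different groups. The filter, blocklist, groups, and posts below are invented for illustration; real fairness audits use far richer methodology:

```python
# Sketch of a simple fairness check: compare flag rates across groups.
# The naive blocklist filter, groups, and posts are invented; a filter
# trained on skewed data may treat in-group slang as offensive.

def flag_rate(posts, is_flagged):
    return sum(1 for p in posts if is_flagged(p)) / len(posts)

blocklist = {"slangword"}
is_flagged = lambda post: any(w in blocklist for w in post.split())

group_a = ["slangword greeting", "ordinary message", "slangword again"]
group_b = ["ordinary message", "another message", "plain text"]

rate_a = flag_rate(group_a, is_flagged)
rate_b = flag_rate(group_b, is_flagged)
print(rate_a, rate_b)  # a large gap between groups signals disparate impact
```

Tracking such per-group rates over time is a minimal safeguard against the skew described above silently propagating into deployment.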
compromising individual rights. This coexistence can serve as
a global model, ensuring that technology remains a tool for the
improvement of society.
body? The definition of responsibility is vital to ensure that AI tools
in law enforcement remain both effective and just.
Resolving the black box predicament is not solely a technical challenge; it is a profound ethical
imperative. Innovative solutions, like explainable AI (XAI)50, are actively under development to bridge this
gap and render these algorithms more transparent and comprehensible. However, until such solutions
become universally accessible and standardised, the black box issue remains an indispensable focal
point in the ongoing pursuit of an AI-driven policing framework that is accountable, fair, and transparent
to all stakeholders involved.
It should also be mentioned that AI algorithms can be opaque due to their protected status as trade secrets.
Data controllers in these cases avoid sharing details of the inner algorithmic workings to protect trade
secrets and avoid system manipulation.
Returning to the broader landscape, it becomes evident that for
AI to truly benefit law enforcement in the European Union and
maintain public trust, a rigorous commitment to accountability and
transparency is essential51. The development of frameworks to
explain AI’s decision-making processes, together with well-defined
regulatory standards and clarity in assigning responsibility, are
indispensable for establishing this balance.
The EU Artificial Intelligence Act: overview and context

essential. This iterative scrutiny and feedback enables real-time
adjustments, ensuring that AI-driven initiatives in law enforcement
consistently mirror and uphold the EU’s dedication to equal rights,
justice, and human dignity.
This section will delve deeper into the objectives, scope, and
key provisions of the EU AI Act, exploring its implications for law
enforcement agencies.
generate outputs such as content, predictions, recommendations,
or decisions that influence the environments they interact with55.
Moreover, the Act applies to and imposes certain obligations on a
wide range of actors, including providers (i.e. developers), deployers
(i.e. users) and distributors of AI systems (Art. 2(1) of the EU AI Act).
PROHIBITED USES OF AI
In recognising the potential pitfalls and harms associated with
certain AI systems, the EU AI Act outlines certain AI practices that
are strictly prohibited (Art. 5). These prohibitions aim to prevent the
deployment of AI in ways that could cause potential harm, infringe
on individual rights, or undermine the foundational principles of the
EU. Applications that fall within this category include AI systems
7 Bayesian statistics is a method of data analysis that utilises Bayes’ theorem to revise existing
knowledge about model parameters using the information gained from observed data.
that manipulate human behaviour8, social scoring systems9 and AI
systems used to exploit the vulnerabilities of people (due to their
age, disability, social or economic situation).10
of an investigation for the targeted search of a person convicted or
suspected of having committed a criminal offence, the deployer of
an AI system for post-remote biometric identification shall request
an authorisation prior to use or, without undue delay, no later
than 48 hours after use. The authorisation shall be granted by a judicial
authority or an administrative authority whose decision is binding
and subject to judicial review. No such authorisation is needed if
the system is used for the initial identification of a potential suspect
based on objective and verifiable facts directly linked to the offence.
Moreover, the EU AI Act explicitly prohibits the untargeted use of
post-remote biometric identification in law enforcement.
• Specifically targeted individuals: The use is limited to
confirming the identity of specifically targeted individuals. This
implies that real-time RBI should not be used for indiscriminate
surveillance or broad identification purposes.
11 Exceptions are allowed in urgent situations where obtaining prior authorisation is not
feasible, but even in these cases, the use must be restricted to the absolute minimum
necessary. If such authorisation is rejected, the use of real-time biometric identification
systems linked to that authorisation should be stopped with immediate effect and all the
data related to such use should be discarded and deleted.
enforcement. While the Act is designed to ensure that relevant
technologies are used in a way that upholds fundamental rights
and fosters trust among the public, this may also slow down the
adoption process, as law enforcement agencies must navigate
through the additional regulatory requirements, ensuring that their
AI tools are compliant with the new standards.
HIGH-RISK AI SYSTEMS
The EU AI Act identifies certain AI applications in the realm of law
enforcement as ‘high-risk’ due to their significant potential to impact
individual rights, freedoms, and safety. By classifying these systems
as high-risk, the new regulatory framework mandates a set of
stringent requirements to ensure their ethical and responsible use.
THE FILTER MECHANISM FOR THE EVALUATION OF HIGH-RISK
SYSTEMS
The EU AI Act introduces a filter system to address concerns that
the classification of high-risk for certain AI applications might be
overly broad (Art. 6(3)). This system allows providers of AI systems
which could fall in the high-risk category, but which do not pose a
significant risk of harm to the health, safety or fundamental rights of
natural persons, to conduct self-assessments. The filter mechanism
focuses on AI systems tailored for less risky, specific functionalities,
including those for narrow procedural tasks, review or enhancement
of human-completed tasks, identification of decision-making
patterns, and preliminary task performance in critical assessment
preparation12.
12 The European Commission is tasked with developing guidelines for applying these filters,
aiming to clarify and simplify the compliance process for AI system providers. It should be
noted that AI systems used for the profiling of natural persons should always be considered
as high-risk applications.
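The filter logic summarised above can be sketched as a small decision function. This encodes only the report's summary of Art. 6(3), not the legal text itself, and the function category names are invented labels for the narrow functionalities listed; note the footnote's rule that profiling of natural persons always remains high-risk:

```python
# Sketch of the Art. 6(3) filter logic as summarised in this report:
# an AI system limited to narrow functionalities may fall outside the
# high-risk classification, but profiling of natural persons is always
# high-risk. Category names are invented labels, not legal terms.

NARROW_FUNCTIONS = {
    "narrow_procedural_task",
    "review_of_human_completed_task",
    "detect_decision_making_patterns",
    "preliminary_task_for_assessment",
}

def is_high_risk(functions, profiles_natural_persons):
    """Apply the filter; the profiling override always wins."""
    if profiles_natural_persons:
        return True
    # Exempt only if every function falls in a narrow category.
    return not all(f in NARROW_FUNCTIONS for f in functions)

print(is_high_risk({"narrow_procedural_task"}, False))      # filter applies
print(is_high_risk({"identification_of_suspects"}, False))  # stays high-risk
print(is_high_risk({"narrow_procedural_task"}, True))       # profiling override
```

A real self-assessment would of course follow the Commission's forthcoming guidelines rather than a three-line predicate, but the structure of the decision is the same.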
will the transition be managed for legacy systems under the new
regulations?
Additional safeguards in the context of RBI for law enforcement.
Given the sensitive nature of RBI deployments and the implications on privacy and other fundamental
rights, the EU AI Act foresees additional safeguards on the use of such systems. According to the
regulation, law enforcement agencies using such systems shall ensure that no decision that produces
an adverse legal effect on a person may be taken solely based on the output of these post-remote
biometric identification systems (see Art. 26(10) of the EU AI Act). Therefore, the new regulation
mandates an additional layer of verification and confirmation.
Achieving this requires a combination of technological, procedural and legal safeguards, such as human
validation, implementing multi-modal biometric systems, establishing appropriate confidence thresholds
and educating human operators on the capabilities and limitations of biometric identification technology.
Moreover, the EU AI Act requires enhanced human oversight for these systems,
so that no action or decision may be taken unless the output of the RBI system has been separately
verified and confirmed by at least two natural persons with the necessary competence, training and
authority (Art. 14(5)).
The requirement for separate verification by at least two natural persons does not apply to high-risk AI
systems used for the purposes of law enforcement, migration, border control or asylum, in cases where
Union or national law considers the application of this requirement to be disproportionate. The mandate
of this so-called four-eyes principle reflects the peer-review process which features prominently in the
forensic sciences.
Nonetheless, adhering to this principle for rather basic investigative measures, such as criminal
identification, presents potential challenges for law enforcement agencies: operational efficiency,
availability of resources, subjectivity in verification, expertise and training, and timeliness in critical
scenarios, among others.
13 EU JHA agencies (CEPOL, Eurojust, EUAA and EU FRA) from the EU Innovation Hub for
Internal Security and the Centre of Excellence in Terrorism, Resilience, Intelligence and
Organised Crime Research (CENTRIC)
technological aspects, coupled with an increased awareness of its
legal dimensions within the realm of law enforcement.
Balancing the benefits and restrictions
As highlighted throughout the report, the fusion of AI and policing
promises enhanced efficiency, new or improved capabilities,
and resource optimisation. However, it also brings critical issues
to the fore, such as potential biases, potential infringements of
fundamental rights and freedoms, and questions of accountability.
tools but also understand their implications on society, ensuring
that the technology is used responsibly.
Engaging with the public and fostering a culture of transparency
can also help build trust. Open dialogue about the scope and
limitations of AI in law enforcement, its implications on privacy, and
measures taken to safeguard data will foster community trust and
collaboration.
models to be trained and deployed, enhancing the performance
of AI applications in law enforcement.
rates or quicker response times, they are more likely to embrace
the technology.
solutions to support the work of internal security actors in the
EU and its Member States, including justice, border security,
immigration and asylum and law enforcement practitioners.
from more than 100 countries interacting and collaborating with
each other in virtual communities.
AI Glossary
ACCOUNTABILITY: The responsibility and explainability for the actions
and decisions made by Artificial Intelligence systems. In the context
of AI-driven policing, it involves ensuring that the use of AI in law
enforcement adheres to ethical standards and legal regulations.
DEEPFAKES: AI-generated or manipulated image, audio or video content
that resembles existing persons, objects, places or other entities or
events and would falsely appear to a person to be authentic or truthful
(Art. 3 of the EU AI Act).
REGULATORY SANDBOX: A concrete and controlled framework set up by
a competent authority which offers providers or prospective providers
of AI systems the possibility to develop, train, validate and test, where
appropriate in real world conditions, an innovative AI system, pursuant
to a sandbox plan for a limited time under regulatory supervision (Art. 3
of the EU AI Act).
SOCIAL SCORING: The use of AI and data analytics to assess and score
individuals based on their behaviour, activities, or social interactions.
Endnotes
1 Europol, 2021, Serious and Organised Crime Threat Assessment (SOCTA) 2021,
accessible at https://www.europol.europa.eu/publication-events/main-reports/europe-
an-union-serious-and-organised-crime-threat-assessment-socta-2021
2 European Parliament press release, June 2023, MEPs ready to negotiate first-ever rules
for safe and transparent AI, accessible at https://www.europarl.europa.eu/news/en/press-
room/20230609IPR96212/meps-ready-to-negotiate-first-ever-rules-for-safe-and-transpar-
ent-ai
3 European Parliament, 2023, Study to support the technical, legal and financial conceptualisation of a European Security Data Space for Innovation, accessible at https://home-affairs.ec.europa.eu/system/files/2023-02/Data%20spaces%20study_0.pdf
4 Y. J. Tan, S. Ramachandran, M. A. Jabar, et al., 2022, Financial Fraud Detection Based
on Machine Learning: A Systematic Literature Review, Applied Sciences, vol. 12, no. 19, pp.
9637, accessible at https://doi.org/10.3390/app12199637
5 K. Gülen, 2023, The parallel universe of computing: How multiple tasks happen simulta-
neously?, Dataconomy, accessible at https://dataconomy.com/2023/04/18/what-is-paral-
lel-processing/
6 Europol press release, 2020, Dismantling of an encrypted network sends shockwaves
through organised crime groups across Europe, accessible at https://www.europol.europa.
eu/media-press/newsroom/news/dismantling-of-encrypted-network-sends-shockwaves-
through-organised-crime-groups-across-europe
7 Europol press release, 2023, Dismantling encrypted criminal EncroChat communica-
tions leads to over 6 500 arrests and close to EUR 900 million seized, accessible at https://
www.europol.europa.eu/media-press/newsroom/news/dismantling-encrypted-criminal-en-
crochat-communications-leads-to-over-6-500-arrests-and-close-to-eur-900-million-seized
8 A. Babuta, 2017, Big Data and Policing: An Assessment of Law Enforcement Require-
ments, Expectations and Priorities, RUSI, accessible at https://static.rusi.org/201709_rusi_
big_data_and_policing_babuta_web.pdf
9 Ibid.
10 Ibid.
11 W. Hardyns & A. Rummens, 2018, Predictive Policing as a New Tool for Law Enforcement?
Recent Developments and Challenges, Eur J Crim Policy Res 24(2018), pp. 201–218, acces-
sible at https://doi.org/10.1007/s10610-017-9361-2
12 W. L. Perry et al., 2013, Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations, RAND, accessible at https://www.rand.org/pubs/research_reports/RR233.html
13 W. Hardyns & A. Rummens, 2018, Predictive Policing as a New Tool for Law Enforcement? Recent Developments and Challenges, Eur J Crim Policy Res 24(2018), pp. 201–218, accessible at https://doi.org/10.1007/s10610-017-9361-2
14 EUPCN, 2022, Artificial Intelligence and predictive policing: risks and challenges, acces-
sible at https://eucpn.org/sites/default/files/document/files/PP%20%282%29.pdf
15 Dutch Police, 2021, Crime Anticipation System, accessible at https://www.politie.nl/
wet-open-overheid/woo-verzoeken/landelijke-eenheid/woo-verzoeken-per-jaar/2021/
crime-anticipation-system.html
16 S. Oosterloo and G. van Schie, 2022, The Politics and Biases of the “Crime Anticipation System” of the Dutch Police, accessible at https://ceur-ws.org/Vol-2103/paper_6.pdf
17 Europol, 2021, Internet Organised Crime Threat Assessment (IOCTA) 2021, accessible
at https://www.europol.europa.eu/publications-events/main-reports/internet-organ-
ised-crime-threat-assessment-iocta-2021
18 Europol, 2022, EU Terrorism Situation and Trend Report (TE-SAT) 2022, accessible at
https://www.europol.europa.eu/publication-events/main-reports/european-union-terror-
ism-situation-and-trend-report-2022-te-sat
19 Medium, 2023, Using OSINT to Improve Your Competitive Intelligence Strategy, acces-
sible at https://goldenowl.medium.com/using-osint-to-improve-your-competitive-intelli-
gence-strategy-eeb6114f9c71
20 Cambridge Consultants, 2019, Use of AI in online content moderation, accessible
at https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consul-
tants-ai-content-moderation.pdf
21 European Commission, 2023, The impact of the Digital Services Act on digital platforms, accessible at https://digital-strategy.ec.europa.eu/en/policies/dsa-impact-platforms
22 Meta, 2023, New Features and Additional Transparency Measures as the Digital Ser-
vices Act Comes Into Effect, accessible at https://about.fb.com/news/2023/08/new-fea-
tures-and-additional-transparency-measures-as-the-digital-services-act-comes-into-ef-
fect/
23 DeepLearning.AI, 2023, Natural Language Processing, accessible at https://www.
deeplearning.ai/resources/natural-language-processing/
24 P. Sarzaeim et al., 2023, A Systematic Review of Using Machine Learning and Natural
Language Processing in Smart Policing, Computers 2023; 12(12):255, accessible at https://
doi.org/10.3390/computers12120255
25 A. Dixon & D. Birks, 2021, Improving Policing with Natural Language Processing, In
Proceedings of the 1st Workshop on NLP for Positive Impact, pp. 115–124, accessible at
https://aclanthology.org/2021.nlp4posimpact-1.13.pdf
26 Ibid.
27 A. Roy, 2020, Understanding Automatic Text Summarization-1: Extractive Methods,
Towards Data Science, accessible at https://towardsdatascience.com/understanding-auto-
matic-text-summarization-1-extractive-methods-8eb512b21ecc
28 Roxanne project, NLP technologies against online crime, accessible at https://www.
roxanne-euproject.org/news/blog/nlp-technologies-against-online-crime
29 J. M. James et al., 2022, Digital Forensics AI: Evaluating, Standardizing and Optimizing
Digital Forensics Investigations, SpringerLink, accessible at https://link.springer.com/arti-
cle/10.1007/s13218-022-00763-9
30 H. Ravichandran, 2023, How AI Is Disrupting And Transforming The Cybersecu-
rity Landscape, Forbes, accessible at https://www.forbes.com/sites/forbestech-
council/2023/03/15/how-ai-is-disrupting-and-transforming-the-cybersecurity-land-
scape/?sh=5aab864c4683
31 J.S. Hollywood et al., 2018, Using Video Analytics and Sensor Fusion in Law Enforce-
ment, RAND, accessible at https://www.rand.org/pubs/research_reports/RR2619.html
32 Ibid.
33 A. Jain, K. Nandakumar & A. Ross, 2016, 50 Years of Biometric Research: Accomplishments, Challenges, and Opportunities, Pattern Recognition Letters, 79, doi: 10.1016/j.patrec.2015.12.013.
34 P. Grother et al., 2024, Face Recognition Technology Evaluation (FRTE) Part 2: Identification, NIST, accessible at https://pages.nist.gov/frvt/reports/1N/frvt_1N_report.pdf
35 P. Grother et al., 2019, Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects, NIST, accessible at https://nvlpubs.nist.gov/nistpubs/ir/2019/nist.ir.8280.pdf
36 J. L. Mnookin, 2003, Fingerprints: Not a Gold Standard, Issues, XX (1) Fall, 2003, acces-
sible at https://issues.org/mnookin-fingerprints-evidence/
37 European Union Agency for the Operational Management of Large-Scale IT Systems
in the Area of Freedom, Security and Justice (eu-LISA), 2020, The digital transformation
of internal security in the EU, AI and the role of eu-LISA, accessible at https://www.eulisa.
europa.eu/Newsroom/News/Pages/The-digital-transformation-of-internal-security-in-the-
EU-AI-and-the-role-of-eu-LISA.aspx
38 Y. Rawat, et al., 2023, The Role of Artificial Intelligence in Biometrics, 2023 2nd Inter-
national Conference on Edge Computing and Applications, pp. 622-626, doi: 10.1109/ICE-
CAA58104.2023.10212224.
39 A. Mitchell, 2013, Distinguishing Friend from Foe: Law and Policy in the Age of Bat-
tlefield Biometrics, Canadian Yearbook of international Law/Annuaire canadien de droit
international, 50, pp. 289-330, doi: 10.1017/S0069005800010869.
40 J. Thorpe, 2023, FBI amongst key adopters of innovative iris recognition technology for
law enforcement, International Security Journal, accessible at https://internationalsecurity-
journal.com/fbi-iris-recognition-law-enforcement/
41 J. Lunter, 2023, Synthetic data: a real route to eliminating bias in biometrics, Biometric Technology Today 2023(1), accessible at https://www.magonlinelibrary.com/doi/full/10.12968/S0969-4765%2823%2970001-5
42 A. Gomez-Alanis, J.A. Gonzalez-Lopez & A.M. Peinado, 2022, GANBA: Generative
Adversarial Network for Biometric Anti-Spoofing, Applied Sciences, 12(3):1454, accessible
at https://doi.org/10.3390/app12031454
43 Z. Zhang et al., 2023, An Improved GAN-based Depth Estimation Network for Face
Anti-Spoofing, ICCAI ‘23: Proceedings of the 2023 9th International Conference on
Computing and Artificial Intelligence, March 2023, pp. 323–328, accessible at https://doi.
org/10.1145/3594315.3594661
44 Europol, 2023, ChatGPT - the impact of Large Language Models on Law Enforcement,
A Tech Watch Flash Report from the Europol Innovation Lab, accessible at https://www.
europol.europa.eu/publications-events/publications/chatgpt-impact-of-large-language-
models-law-enforcement
45 P. Tomczak, 2018, Machine Learning and the Value of Historical Data, accessible at
https://kx.com/blog/machine-learning-and-the-value-of-historical-data/
46 C. Veal, M. Raper & P. Waters, 2023, The perils of feedback loops in machine learning:
predictive policing, Gilbert + Tobin, accessible at https://www.lexology.com/library/detail.
aspx?g=c8fff116-2112-48dd-841c-f9d1688d722b
47 N. Shahbazi, Y. Lin, A. Asudeh, & H. V. Jagadish, 2023, Representation Bias in Data:
A Survey on Identification and Resolution Techniques, ACM Computing Surveys (55)13,
accessible at https://doi.org/10.1145/3588433
48 EU Fundamental Rights Agency, 2022, Bias in algorithms - Artificial intelligence and
discrimination, doi:10.2811/25847
49 J. Burrell, 2016, How the machine ‘thinks’: Understanding opacity in ma-
chine learning algorithms, Big Data & Society, 3(1), accessible at https://doi.
org/10.1177/2053951715622512
50 S. Ali et al., 2023, Explainable Artificial Intelligence (XAI): What we know and what is
left to attain Trustworthy Artificial Intelligence, Information Fusion (99)2023, accessible at
https://doi.org/10.1016/j.inffus.2023.101805
51 B. Akhgar, et al., 2022, Accountability Principles for Artificial Intelligence (AP4AI) in the
Internal Security Domain AP4AI Framework Blueprint, accessible at https://www.ap4ai.eu/
sites/default/files/2022-03/AP4AI_Framework_Blueprint_22Feb2022.pdf
52 European Commission, 2019, Ethics guidelines for trustworthy AI, accessible at https://
digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
53 European Commission, 2021, Proposal for a Regulation laying down harmonised rules
on artificial intelligence, accessible at https://digital-strategy.ec.europa.eu/en/library/pro-
posal-regulation-laying-down-harmonised-rules-artificial-intelligence
54 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June
2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC)
No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and
(EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial
Intelligence Act)
55 L. Edwards, 2022, The EU AI Act: a summary of its significance and scope, Ada
Lovelace Institute, accessible at https://www.adalovelaceinstitute.org/wp-content/up-
loads/2022/04/Expert-explainer-The-EU-AI-Act-11-April-2022.pdf
56 Europol, 2023, The Second Quantum Revolution: the impact of quantum computing and
quantum technologies on law enforcement, accessible at https://www.europol.europa.eu/
publication-events/main-reports/second-quantum-revolution-impact-of-quantum-comput-
ing-and-quantum-technologies-law-enforcement#downloads
57 MIT Technology Review, 2023, AI-powered 6G networks will reshape digital interac-
tions, accessible at https://www.technologyreview.com/2023/10/26/1082028/ai-pow-
ered-6g-networks-will-reshape-digital-interactions/
About the Europol Innovation Lab
Technology has a major impact on the nature of crime. Criminals quickly integrate
new technologies into their modus operandi, or build brand-new business models
around them. At the same time, emerging technologies create opportunities for
law enforcement to counter these new criminal threats. Thanks to technological
innovation, law enforcement authorities can now access an increased number
of suitable tools to fight crime. When exploring these new tools, respect for
fundamental rights must remain a key consideration.
In October 2019, the Ministers of the Justice and Home Affairs Council called
for the creation of an Innovation Lab within Europol, which would develop a
centralised capability for strategic foresight on disruptive technologies to inform
EU policing strategies.
Strategic foresight and scenario methods offer a way to understand and prepare
for the potential impact of new technologies on law enforcement. The Europol
Innovation Lab’s Observatory function monitors technological developments
that are relevant for law enforcement and reports on the risks, threats and
opportunities of these emerging technologies. To date, the Europol Innovation
Lab has organised three strategic foresight activities with EU Member State law
enforcement agencies and other experts.
www.europol.europa.eu