
International Journal of All Research Education and Scientific Methods (IJARESM),

ISSN: 2455-6211, Volume 13, Issue 3, March-2025, Available online at: www.ijaresm.com

Exploring the Nuptial Bond between Neuroprediction and AI in Criminal Justice: A Theoretical Review

Ekata Deb
LLM-Criminal & Security Laws, Reva University, Bengaluru, 560064

-----------------------------------------------------------****************-------------------------------------------------------------

ABSTRACT

The prognostic abilities of artificial intelligence and neuroscience in forensics and the criminal justice
system stand as a reformatory paradigm for understanding criminal conduct. While artificial intelligence has
brought transformational data-analytic capabilities, neuropredictive approaches enable an intricate
understanding of culpability and criminal propensity. The literature on the complex nature of neuroprediction
and artificial intelligence, their ethical deliberations, and their usability in curbing recidivism is analyzed.
This theoretical review elucidates the complex interplay, nuptial relationship and convergence of these fields
in the quest for justice. The consequences of failing to protect individual rights in the criminal justice
system are surveyed using grounded theory. The acceptability and dependability of AI-generated evidence in
legal proceedings are also reviewed. These topics have yet to be contemplated under one roof to offer an
argumentative view. The author expects to prompt readers and newcomers to embrace more sociolegal and
technological research before incorporating such technologies in the Indian Judiciary. The review focuses on
whether to blame such technology inclusion wholly or rather to prioritize the acquisition of bias-free
pretrained datasets and processing models.

Keywords: Neuroprediction, Artificial Intelligence, Criminal Justice, Digital Forensics, Predictive Policing,
Recidivism Risk Assessment, Ethics.

INTRODUCTION

The combination of Neuroprediction with other AI1 and ML2 tools gives rise to significant ethical considerations
regarding privacy, autonomy, and the possible improper exploitation of delicate neurological information. Their
involvement in criminal investigation and the justice system is as likely to have sociolegal repercussions as
benefits per se. Neuroprediction in criminal justice incorporates the application of neuroscience to predict possible
criminal conduct, while AI utilizes machine learning tools for data analysis and decision-making (Fernando et
al., 2023). Such algorithms are designed to transform the criminal justice system by delivering predictive insights into
human behavior and decision-making processes (Kanwel et al., 2023). As this convergence develops, integrating
technical breakthroughs with ethical concerns and legal protections becomes important for harnessing revolutionary
potential while protecting basic rights and ethical norms in the criminal justice realm (Morse, 2015) (Jones et al., 2014).

The use of AI and ML algorithms for predictive policing and deterministic judgments is increasingly gaining
prominence. Predicting the risk of recidivism in the criminal justice system has been of paramount importance. This is
especially true at the pretrial, bail and sentencing stages, whether on acquittal on a plea of innocence, on conviction,
or even on parole (Gijs van Dijck, 2022). This review highlights three issues. First, bias is prevalent in datasets and
training models (Mark MacCarthy, 2017). Second, in-built processing models must be evaluated to eliminate the bias
added by subsequent HMI3 (Jiaming Zeng et al., 2017). Finally, defaults in deterministic or predictive models can lead
to hallucinating or otherwise imperfect AI systems (Anthony W. Flores et al., 2016). The use of advanced technologies
to comprehend the Recidivism Risk Assessment Scales [GRRS4 v. VRRS5] (Northpointe, 2016), together with the
implementation of fairness models and ethical data preservation, presents a significant challenge in achieving a flawless
AI algorithm. The use of artificial intelligence (AI) in the context of predictive policing has been the subject of
substantial scientific research. One example of algorithmic bias in predictive policing models was highlighted in
this research (Lum and Isaac, 2016): the study revealed possible racial discrepancies in crime predictions, raising
issues about the fairness and accuracy of such algorithms.
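The disparity concern raised above can be made concrete with a short sketch. The illustration below, with entirely synthetic outcomes and predictions (no real risk-assessment tool or dataset is modeled), computes the false-positive and false-negative rates of a hypothetical recidivism classifier separately for two groups; a large gap between the groups is the kind of racial discrepancy reported in audits of deployed tools:

```python
# Illustrative sketch: measuring false-positive/false-negative disparities
# in a hypothetical recidivism classifier across two groups.
# All data here are synthetic; no real risk-assessment tool is modeled.

def error_rates(y_true, y_pred):
    """Return (false_positive_rate, false_negative_rate)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

# Hypothetical outcomes (1 = reoffended) and predictions for two groups.
group_a_true = [0, 0, 0, 1, 1, 0, 0, 1]
group_a_pred = [1, 0, 1, 1, 0, 0, 1, 1]
group_b_true = [0, 0, 0, 1, 1, 0, 0, 1]
group_b_pred = [0, 0, 0, 1, 0, 0, 0, 1]

fpr_a, fnr_a = error_rates(group_a_true, group_a_pred)
fpr_b, fnr_b = error_rates(group_b_true, group_b_pred)
print(f"Group A: FPR={fpr_a:.2f}, FNR={fnr_a:.2f}")
print(f"Group B: FPR={fpr_b:.2f}, FNR={fnr_b:.2f}")
# Identical outcomes but very different FPRs: group A is flagged as
# high-risk far more often despite not reoffending.
```

Note that the two groups have identical true outcomes here by construction; the disparity lives entirely in the predictions, which is precisely why error rates must be audited per group rather than in aggregate.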

1 AI-Artificial Intelligence
2 ML-Machine Learning
3 HMI- Human Machine Interface
4 GRRS- General Recidivism Risk Assessment Scale
5 VRRS- Violent Recidivism Risk Assessment Scale

Evaluating the precision and efficacy of Neuroprediction and AI technologies in forecasting behavior or assisting
investigations may have a substantial influence on law enforcement procedures, sentencing, and case results. An
essential task is to analyze the existence of biases in AI algorithms for use in criminal justice systems. Anticipating
progress in the amalgamation of Neuroprediction and AI will help in planning for possible problems and associated
possibilities, leading to continued research and development in the area. Facilitating cooperation among neuroscientists,
ethicists, policymakers, and legal professionals is essential for developing inclusive strategies that harmonize technical
advancement with ethical deliberation in the judicial system. This paper includes a literature survey from 2013 to
2023 to frame a summary in line with predictive policing, recidivism risk assessment, and incorporated
technologies, as well as their sociolegal and ethical repercussions.

Research Objectives
1. Assessing the effectiveness of current Neuroprediction and AI technologies in enhancing criminal investigations and
influencing judicial decision-making processes.

2. Analyzing the convergence of these initiatives within the criminal justice system, focusing on aspects such as
fairness, bias mitigation, data storage, and processing techniques.

3. Exploring global public perceptions regarding their adoption in predictive policing and deterministic judgments
while analyzing the associated ethical and legal implications.

4. Investigating potential future trajectories and collaborative opportunities for their methodologies and tools within the
context of the Indian Judiciary.

Research Questions
1. What is the optimal prioritization strategy: verifying humanly biased pretrained datasets or evaluating algorithmic
learning/training models?

2. Should processing models in AI technologies undergo scrutiny alongside algorithms and training datasets to
guarantee freedom from biases likely to be introduced by subsequent human–machine interactions?

3. What contributes more to the increase in false positives and false negatives in deterministic/predictive methods:
pretrained datasets or the default settings of the algorithmic training model?

Review Analysis
State of the Art- AI-based Neuroimaging Technology: Neuroprediction involves the use of structural or functional
brain characteristics to forecast therapeutic outcomes, prognoses, and behavior. The use of neurovariables, though a
new technology, does not raise ethical issues up to a point (Morse, 2015). Effective brain-mapping technologies are
likely to overcome a number of challenges, such as that of continually observing and changing neural activity.
Additionally, extending simple open-loop neurostimulation devices with a closed-loop approach makes it possible to
describe the moment-to-moment state of the brain (Herron et al., 2017). Novel experimental frameworks are needed
that leverage clever computational approaches capable of rapidly perceiving, understanding, and modifying vast
volumes of data from behaviorally important brain circuits (Redish and Gordon, 2016). AI/ML in computational
psychiatry and other emerging approaches are examples.

Explainable artificial intelligence, a relatively new set of methodologies, combines sophisticated AI and ML algorithms
with potent explanatory methodologies to produce explainable solutions that have been successful in a variety of
domains (Fellous et al., 2019). Recent studies have shown that basic brain circuit changes and therapeutic interventions
may be guided by XAI6 (Holzinger et al., 2017; Langlotz et al., 2019). XAI for neurostimulation in mental health is
a development of the BMI7 design (Vu et al., 2018). Multivoxel pattern analysis involves the study of multivoxel
patterns in the human brain to distinguish between delicate cognitive activities or subject areas, combining data from
several voxels within a region (Ombao et al., 2017). Noninvasive anatomical and functional neuroimaging technologies
have advanced significantly over the last 10 years, providing substantial quantities of data and statistical software.
High-dimensional dataset modeling and learning approaches are crucial for applying statistical machine learning to
the enormous volumes of neuronal data produced by neuroimaging with increasing accuracy (Alexandre et al., 2014).
In the case of motor decision-making, a BMI intervention can stop a movement up to 200 ms after it has started, both
before and during movement execution. MVPA8 methods have gained popularity in neuroimaging in health and
clinical research (Hampshire and Sharp, 2015). Population-level neural data can be used to decode the veto of
self-initiated movements within 200 ms of their being triggered (Schultze-Kraft et al., 2016). To some extent,
intentions, perceptual states, and healthy

6 XAI-Explainable Artificial Intelligence


7 BMI- Brain Machine Interface
8 MVPA-Multi-Voxel Pattern Analysis

and diseased brains can be distinguished via lie-detection methods (Blitz, 2017). Clinical applications are focused on
neurological disorders, given the broad agreement that response inhibition is an emergent property of a network of
distinct brain regions (Jiang et al., 2019).
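The decoding idea behind MVPA can be illustrated with a minimal sketch. The voxel patterns below are synthetic Gaussian vectors rather than real fMRI data, and the nearest-centroid rule stands in for the more sophisticated classifiers used in practice; only the principle — classifying cognitive states from patterns spread across many voxels — is being shown:

```python
# Illustrative sketch of the idea behind multivoxel pattern analysis (MVPA):
# decoding cognitive states from activation patterns across many voxels.
# The "voxel" data below are synthetic Gaussian vectors, not real fMRI.
import random
import math

random.seed(42)
N_VOXELS = 50

def make_pattern(mean):
    """Simulate one trial: N_VOXELS noisy activations around a state mean."""
    return [random.gauss(mean, 1.0) for _ in range(N_VOXELS)]

# Two hypothetical cognitive states with slightly different mean activation.
train_a = [make_pattern(0.0) for _ in range(20)]
train_b = [make_pattern(0.5) for _ in range(20)]

def centroid(patterns):
    return [sum(vals) / len(vals) for vals in zip(*patterns)]

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

cen_a, cen_b = centroid(train_a), centroid(train_b)

def classify(pattern):
    """Nearest-centroid decoding: assign the pattern to the closer state."""
    return "A" if distance(pattern, cen_a) < distance(pattern, cen_b) else "B"

# Decode held-out trials from each state.
test_trials = [(make_pattern(0.0), "A") for _ in range(10)] + \
              [(make_pattern(0.5), "B") for _ in range(10)]
accuracy = sum(classify(p) == label for p, label in test_trials) / len(test_trials)
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

The per-voxel signal is deliberately weak (a 0.5 shift against unit noise); it is the aggregation across all 50 voxels that makes the states separable, which is exactly the multivoxel point.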

Behavioral traits can be associated with aspects of the human brain, opening up new opportunities for constructing
predictive algorithms and allowing the prediction of an individual's criminal dispositions (Mirabella and
Lebedev, 2017). The validity of prediction models is judged by their ability to generalize; for most learning
algorithms, the standard practice is to estimate the generalization performance. The adoption of Neuroprediction, as
defined above, requires approaches that frame inference from the group level to individual predictions (Tortora et al.,
2020). The progress of neuroimaging in conjunction with AI, particularly the use of ML techniques such as brain
mapping, fMRI9, CNNs10, NLP11 and speech recognition, has resulted in the development of brain-reading gadgets
with cloud-based neuro-biomarker banks. Potential future applications of these technologies may include deception
detection, neuromarketing, and BCI12. Some of these methods are possibly useful in the field of forensic psychiatry
(Meynen, 2019). The prospective use of fMRI has been demonstrated in forecasting rates of recidivism among
individuals with criminal backgrounds (Aharoni et al., 2013). Thus, studies have focused on the use of neural data for
prediction functions within the field of criminal justice.
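The generalization estimate mentioned above is typically obtained by cross-validation. A minimal sketch follows, using a deliberately trivial majority-class predictor and synthetic outcome labels; the folding-and-scoring logic, not the model, is the point:

```python
# A minimal sketch of k-fold cross-validation, the standard way to estimate
# how well a predictive model generalizes beyond the sample it was fit on.
# The "model" is a trivial majority-class predictor on synthetic labels.

def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds."""
    fold_size = n // k
    return [list(range(i * fold_size, (i + 1) * fold_size)) for i in range(k)]

def majority_class(labels):
    return max(set(labels), key=labels.count)

def cross_validate(labels, k=5):
    scores = []
    for test_idx in k_fold_indices(len(labels), k):
        train = [labels[i] for i in range(len(labels)) if i not in test_idx]
        prediction = majority_class(train)          # "train" the trivial model
        test = [labels[i] for i in test_idx]
        scores.append(sum(t == prediction for t in test) / len(test))
    return sum(scores) / len(scores)                # mean held-out accuracy

# Synthetic binary outcomes (e.g., reoffended / did not reoffend).
labels = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0] * 5   # 50 cases, 70% class 0
print(f"estimated generalization accuracy: {cross_validate(labels):.2f}")
```

Because each score is computed on cases the model never saw during fitting, the average is an honest estimate of out-of-sample performance — the quantity that matters when moving from group-level findings to individual predictions.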

Convergence of AI and Neuroprediction in Forensics: Structural and functional neuromarkers are sought for personality
disorders whose main characteristic is persistent antisocial conduct, such as ASPD13 and psychopathy, as these are most
correlated with high rates of recidivism (Umbach et al., 2015). Because of the need to collect biomarkers of the
"criminal" brain and integrate neurobiology, neuroprediction should aid socio-rehabilitation strategies rather than
curb individual rights (Coppola, 2018). By using various techniques, the accuracy of risk evaluations can be improved
and effective therapies uncovered in the field of forensic psychiatry. This method, known as "A.I.
Neuroprediction" (Zico Junius Fernando et al., 2023), involves identifying neurocognitive factors that might predict the
likelihood of reoffending. It is necessary to identify the enduring effects of these tools while recognizing the
contributions of neuroscience and artificial intelligence to the assessment of the risk of violence (Bzdok, D., and
Meyer-Lindenberg, A., 2018).

The combination of Neuroprediction and AI has potential for supporting law enforcement and judicial institutions in
early risk assessment, intervention, and rehabilitation initiatives (Gaudet et al., 2016) (Jackson et al., 2017) (Greely &
Farahany, 2019) (Hayward & Maas, 2020). However, this confluence also presents ethical, legal, and privacy problems:
for example, privacy (Farayola et al., 2023), bias and discrimination (Ntoutsi et al., 2020) (Srinivasan & Chander,
2021) (Belenguer, 2022) (Shams et al., 2023), consent and coercion (Ghandour et al., 2013) (Klein & Ojemann,
2016) (Rebers et al., 2016), and cognitive liberty (Muñoz, 2023) (Shah et al., 2021) (Daly et al., 2019) (Lavazza,
2018) (Ienca & Andorno, 2017) (Sommaggio et al., 2017) (Ienca, 2017). The ethical consequences of anticipating
criminal propensities and the possible exploitation of such insights underscore the necessity for rigorous ethical
frameworks and strict laws (Poldrack et al., 2018) (Eickhoff & Langner, 2019). Moreover, guaranteeing openness,
accountability, and fairness in the employment of these technologies inside the criminal justice system becomes crucial
(Meynen, 2019). The use of AI-powered brain-mapping technology (L. Belenguer, 2022) to predict acts of violence and
subsequent rearrests is a cause for concern and distress. Such methodologies may be used in the future within the fields
of forensic psychiatry and criminal justice; however, diluting the right to privacy (Ligthart SLTJ, 2019) can lead to
potential ethical and legal consequences.

Technologies used in crime detection, investigation and prediction: This section covers traditional AI, computer
vision, data mining and AI decision-making models for the criminal justice system. In recent years, between 2018 and
2023, there has been a large influx of literature reviews across interdisciplinary domains discussing the various
technologies and software instruments used in the Criminal Justice System (Varun Mandalpu et al., 2023). The field of
machine learning is a subset of artificial intelligence, while deep learning and data mining methods are subsets of ML.
Machine learning methods use various statistical models and algorithms to first analyze and subsequently predict from
a set of data. Deep learning, on the other hand, uses neural networks with multiple layers to construct complex and
intricate relationships between inputs and outputs (C. Janiesch et al., 2021) (W. Safat et al., 2021). ML techniques
involve training datasets, generated mainly through supervised and unsupervised learning methods. Traditional AI
and ML technologies, such as support vector machines, decision trees, random forests and logistic regression, have been
heavily exploited for analyzing the facts of a crime and identifying patterns in order to predict similar criminal
activities (S. Kim et al., 2018). These traditional AI tools also achieve very high accuracy in anomaly detection and
crime data analysis with limited datasets (S. Goel et al., 2021). A few notable

9 fMRI-Functional Magnetic Resonance Imaging


10 CNN- Convolutional Neural Network
11 NLP- Natural Language Processing
12 BCI- Brain Computer Interface
13 ASPD- Antisocial Personality Disorders

examples of ML regression techniques include the use of the ARIMAX14 method (E.P. Utomo et al., 2018) in the city of
Yogyakarta, with an RMSE of 6.68; the use of crime data (C. Catlett et al., 2019) via ARIMA15, RF16, RepTree and
ZeroR (D.M. Raza et al., 2021); and RMSE17 values in Chicago crimes (C. Catlett et al., 2014) of 57.8, 29.85 and
16.19 for the crime-dense regions CDR18 1, 2 and 3, respectively. Regression and clustering methods (V. Ingilevich
and S. Ivanov, 2018) include LR19, LOR20 and gradient boosting, applied to crime in Saint Petersburg, Russia, with an
R-square21 of 0.9. The RFR22 (L.K.G.A. Alves et al., 2018) used data from the Department of Informatics of the
Brazilian Public Health System (DATASUS) and achieved up to 97% accuracy, with an adjusted R-square of 80% on
average. Deep learning algorithms such as convolutional and recurrent neural networks (RNNs23) are promising for
crime prediction (Sarker, 2021). Using these algorithms trained on crime data with spatial or temporal components,
predictive policing has been found to be quite accurate in specific cities in the USA (A. Meijer and M. Wessels, 2019).
Predictive models often use pretrained data such as the time, location, and type of crime incident to predict future
criminal activities and identify criminal hotspots (S. Hossain et al., 2020).
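The RMSE figures quoted for these forecasting models can be made concrete with a few lines of code. The sketch below uses invented monthly incident counts and a naive last-value forecaster in place of a fitted ARIMA model; only the metric's computation is being shown:

```python
# Hedged illustration of how forecast quality is scored with RMSE (root mean
# square error), the metric quoted for the ARIMA-style crime models above.
# The monthly incident counts and the naive "last value" forecaster are
# invented for this sketch; they stand in for a fitted time-series model.
import math

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# Synthetic monthly incident counts for one district.
counts = [120, 132, 118, 140, 151, 147, 160, 158, 149, 163, 170, 166]

# Naive one-step-ahead forecast: next month equals the current month.
forecast = counts[:-1]          # predictions for months 2..12
observed = counts[1:]           # what actually happened in months 2..12

print(f"naive-forecast RMSE: {rmse(observed, forecast):.2f}")
# A fitted model (e.g., ARIMA) is judged useful if its RMSE beats
# simple baselines like this one on held-out data.
```

Comparing a model's RMSE against such a naive baseline, rather than reading the number in isolation, is what makes figures like the 6.68 or 57.8 cited above interpretable.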

With computer vision and video analysis used for crime prediction (Neil Shal et al., 2021), technologies analyze video
footage from surveillance cameras at various locations to detect, identify and classify criminal activities such as
theft, assault and robbery. Even when monitoring a city's safety and security, surveillance is conducted by drone and
aerial technologies. Deep learning algorithms (M. Saraiva et al., 2022) are used for analyzing criminal data from
various sources, enhancing the ability to prevent crime responsively and in real time. The methods used in data
mining (T. Chandrakala et al., 2020) stand as a powerful asset underpinning criminal investigative procedures. With
respect to digital forensics, a well-known technology, the NSVNN24 (Umar Islam et al., 2023), is currently being
developed; this approach is assumed to be reliable for anomaly detection in criminal investigation. Additionally,
other deep learning mechanisms, such as the DBN25 and clustering-based methods (Ashraf et al., 2022), provide novel
approaches for anomaly identification in digital forensics. DNNs26 use a feature-level data fusion method (Kang HW,
Kang HB, 2017) that can efficiently fuse multimodal data from several domains within related environmental contexts.
Researchers have also used Google TensorFlow to forecast crime hotspots, evaluating three metrics of the RNN
architecture (Zhuang Y, 2017): precision, accuracy and recall. A comparative study of violent crime patterns
(McClendon L, Meghanathan N, 2015) was carried out using the open-source data mining software WEKA27. Here,
three algorithms, namely, linear regression, additive regression and decision stump, were implemented to determine
the efficiency and efficacy of the ML algorithms. This approach was intended to predict violent crime patterns and
determine criminal hotspots, profiles and trends.
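Of the three WEKA learners just mentioned, the decision stump is the simplest: a single split on one feature with a constant prediction on each side. A hedged sketch follows, on hour-of-day and incident-count data invented purely to make the split visible:

```python
# Illustrative sketch of a decision stump for regression, the simplest of the
# three learners compared in the WEKA study above: one split on one feature,
# with a constant prediction on each side. Data are synthetic (feature = hour
# of day, target = incident count), chosen only to make the split visible.

def fit_stump(xs, ys):
    """Try every observed split point; keep the one minimizing squared error."""
    best = None
    for threshold in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= threshold]
        right = [y for x, y in zip(xs, ys) if x > threshold]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - lmean) ** 2 for y in left) + \
              sum((y - rmean) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, threshold, lmean, rmean)
    _, threshold, lmean, rmean = best
    return (lambda x: lmean if x <= threshold else rmean), threshold

hours = [1, 3, 5, 7, 9, 18, 19, 20, 21, 22]
counts = [2, 3, 2, 4, 3, 11, 12, 10, 13, 11]

predict, split = fit_stump(hours, counts)
print(f"split at hour {split}; predict(2)={predict(2):.1f}, "
      f"predict(20)={predict(20):.1f}")
```

A stump this crude is rarely used alone; its value in comparative studies is as a floor that more expressive models, such as the linear and additive regressions in the same study, are expected to beat.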

Fairness versus Bias: The process models underpinning these technologies are often accused of being biased, with no
profound fairness in predictive or deterministic algorithms. In the justice system, fairness means the rule of law.
When AI-based investigation and justice delivery occur, fairness and freedom from bias are of paramount importance.
AI algorithms must prioritize fairness as their use in forecasting recidivism risk expands across many jurisdictions
worldwide. In one study, the discrimination, bias, fairness and trustworthiness of AI algorithms were measured to
ensure the absence of prejudice (Daniel Varona et al., 2022). However, uncensored discrimination creates unfairness in
AI algorithms for predicting recidivism (Ninareh Mehrabi et al., 2021). Scholars have attributed unfair AI algorithms
to the quality of pretrained datasets, following the logic of GIGO28 or RIRO29. Discrimination in AI/ML algorithms
has been defined (Verma & Rubin, 2018) as "bias in modeling, training, and usage" (Ferrer, 2021). Arguably,
algorithms cannot eliminate discrimination alone because the outcomes are shaped by the initial data received. When
the underlying data are unfair, AI systems can perpetuate widespread inequality (Chen, 2023). Frameworks for
discovering and removing two types of discrimination (Lu Zhang et al., 2016) have been constructed, where indirect
discrimination is caused by direct discrimination. As with a group classifier (direct discrimination based on historical
data), tuning a neutral nonprotected attribute in the system (indirect discrimination)

14 ARIMAX-Autoregressive Integrated Moving Average with Explanatory Variable


15 ARIMA-Autoregressive Integrated Moving Average
16 RF- Random Forest
17 RMSE- Root Mean square Error
18 CDR- Crime Dense Region
19 LR- Linear Regression
20 LOR- Logistic Regression
21 R2- the coefficient of determination
22 RFR- Random Forest Regressor
23 RNN- Recurrent Neural Networks
24 NSVNN- Novel Support Vector Neural Network
25 DBN-Deep Belief Network
26 DNNs-Deep Neural Networks
27 WEKA-Waikato Environment for Knowledge Analysis
28 GIGO- Garbage In, Garbage Out.
29 RIRO- Rubbish In, Rubbish Out.

causes unfairness and inequality. Audits of direct discrimination in black-box algorithms, aimed at mitigating bias
grounded in pretrained datasets or attributes such as discrimination, bias, unfairness and untrustworthiness, have been
conducted (Daniel Varona et al., 2022). Additionally, a novel probabilistic formulation has been introduced for indirect
(unintended and not necessarily unfair) data preprocessing to limit control-group discrimination and distortion in
individual datasets (Flavido du Pin Calmon et al., 2018). Sources of unfairness are not limited to discrimination but
also include bias. The types of bias include data bias, model bias and model evaluation bias, as described in the review
(Michael Mayowa Farayola et al., 2023). In several studies (Richard et al., 2023; Dana Pessach et al., 2022; Eike
Peterson et al., 2023), the use of historical data was found to cause measurement bias. Even having fair data is not
sufficient, as the model itself being biased can trigger unfair predictions without justification (Davinder Kaur et al.,
2022). In one study (Arpita Biswas and Suvam Mukherjee, 2021), there is a use case in which unfairness can
increase because of incorrect evaluation metrics, i.e., biased feedback. The fairness pipeline model, which includes
preprocessing, in-processing and postprocessing steps, has been constructed (Mingyang Wan et al., 2023; Felix
Petersen, 2021). While preprocessing safeguards the ethical growth of the AI model, the in-processing phase focuses on
tuning the algorithm. The postprocessing phase addresses the assessment stage of the AI lifecycle to tackle concerns
relating to prejudice and bias.
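The three-stage pipeline just described can be sketched schematically. Each stage below is reduced to a toy, self-contained operation on hypothetical records and scores — dropping a proxy feature, applying a fixed scoring rule, and thresholding per group; real pre-, in- and post-processing methods are far richer, and every field name and coefficient here is invented:

```python
# A hedged sketch of the preprocessing / in-processing / postprocessing
# fairness pipeline. Every record field, coefficient and threshold below is
# hypothetical; each stage is a toy stand-in for a real fairness method.

def preprocess(records):
    """Pre-processing: drop a feature that may proxy a protected attribute."""
    return [{k: v for k, v in r.items() if k != "postcode"} for r in records]

def inprocess_train(records):
    """In-processing stand-in: return a fixed scoring rule. A real
    fairness-aware learner would fit under a fairness constraint here."""
    return lambda r: 0.1 * r["priors"] + 0.05 * r["age_flag"]

def postprocess(score, group, thresholds):
    """Post-processing: per-group decision thresholds chosen (hypothetically)
    to balance error rates across groups."""
    return score >= thresholds[group]

records = [
    {"priors": 4, "age_flag": 1, "postcode": "X", "group": "A"},
    {"priors": 1, "age_flag": 0, "postcode": "Y", "group": "B"},
    {"priors": 3, "age_flag": 1, "postcode": "X", "group": "B"},
]

cleaned = preprocess(records)
model = inprocess_train(cleaned)
thresholds = {"A": 0.40, "B": 0.30}      # hypothetical calibrated values
for raw, r in zip(records, cleaned):
    decision = postprocess(model(r), raw["group"], thresholds)
    print(raw["group"], round(model(r), 2),
          "high-risk" if decision else "low-risk")
```

The separation matters: a bias missed at one stage (say, a proxy feature surviving preprocessing) can still be caught at a later stage, which is why the review treats the three phases as complementary rather than interchangeable.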

AI delivering Justice: The neurodata and other neural biomarkers used to predict recidivism can clearly be of
interest for additional purposes, such as to health insurers or when evaluating potential employees, which also raises
consent issues (Caulfield and Murdoch, 2017). Artificial intelligence should not be allowed to hallucinate in critical
arenas such as the criminal justice system. Additionally, data integrity is imperative, as a thorough examination of
pretrained data is needed to detect and correct biases at their origin. The admissibility of neurological evidence
gathered via neuroimaging methods such as fMRI has been doubted in legal cases even in the most developed nations,
as in United States v. Jones (2012). Additionally, adherence to algorithmic transparency can never be negated; thus,
closed-source risk assessment tools need to be overridden. The courts have encountered challenges in assessing the
dependability and pertinence of such evidence. Additionally, AI plays an impactful role in sentencing and
decision-making across many nations around the globe. There has been a range of judicial rulings concerning the use
of AI algorithms in sentencing. The case of Wisconsin v. Loomis (2016) in the United States highlighted the need for
openness in the use of AI-generated risk assessments within sentence determinations. Additionally, Carpenter v.
United States (2018) highlighted the constitutional consequences of using people's personal data for predictive
objectives, addressing apprehensions around privacy and data gathering.

The COMPAS30 algorithm (L. Belenguer, 2022), developed by Northpointe, is a tool used in U.S. courts to assess the
likelihood of a defendant committing another offense. It uses risk assessment scales to predict general and violent
recidivism, as well as pretrial offending. The algorithm's practitioner's guide uses behavioral and psychological aspects
to predict reoffending and criminal paths. The General Recidivism Scale predicts the probability of engaging in new
criminal behavior after release, while the Violent Recidivism Scale assesses the probability of committing further
violent crimes after a prior conviction. However, a ProPublica investigation (C. Rudin, 2019) revealed that individuals
of black origin, such as those of African descent, were almost twice as likely to be classified as higher risk by
COMPAS, even when they did not actually reoffend. The COMPAS algorithm is claimed to demonstrate a superior
degree of precision compared with individuals without criminal justice expertise, but it does not in fact reach a higher
level of accuracy.

Existing AI technologies in India: In India, Punjab Police, in collaboration with Staque Technologies, has
implemented an artificial intelligence-powered facial recognition system. The Cuttack Police has used AI-powered
devices to assist investigative officers in adhering to investigative protocols. The Uttar Pradesh police has introduced an
AI-powered facial recognition application named 'Trinetra' to effectively resolve criminal cases. The government of
Andhra Pradesh has introduced 'e-pragati', a database containing electronic Know Your Customer (e-KYC) information
for millions of individuals in the state. In collaboration with IIT Delhi, the Delhi Police has established an artificial
intelligence center to manage criminal activities (Varun VM, 2020). It is important to note that the right to privacy
holds paramount importance, guaranteed under Article 21 of the Indian Constitution, and the banking of
neuro-biomarkers may not be allowed if it entails such a violation per se. Utilizing artificial intelligence in judicial
settings has the potential to impact the results of cases and may also lead to disparities in the imposition of
sentences. Additionally, without any succinct neuro-biobanks, designing AI algorithms for predictive policing,
assessing the risk of recidivism and offering deterministic judgments is likely to be impossible. The use of
neuroprediction and artificial intelligence in the criminal justice system, if incorporated in India, will likely give rise
to ethical considerations about biases and the possibility of prejudice.

30 COMPAS-Correctional Offender Management Profiling for Alternative Sanctions.



CONCLUSION

Summary of Key Findings: The key findings of the review shed light on the optimal prioritization strategy for
addressing biases in AI technologies, particularly focusing on the context of humanly biased pretrained datasets and
algorithmic learning/training models. The incorporation of techniques such as model bias evaluation and in-phase
processing checks is needed to identify biases inside the learning and training algorithms, guaranteeing that they do not
perpetuate or magnify preexisting prejudices. Ongoing assessment remains quintessential, requiring consistent
evaluation and improvement of both the data and the algorithms to minimize any biases that may arise or
remain.

To ascertain the default outcome in the setting of inaccurate predictions, it is necessary to comprehend the origins of
biases and their dissemination inside the AI system. Ensuring that responsibility and correction mechanisms are in
place throughout both the data curation and algorithmic learning phases is also essential for establishing fairness and
accuracy in decision-making powered by artificial intelligence. Moreover, thorough cross-validation techniques,
recalibration, scrupulous data gathering and simultaneous verification are essential across the wide range of brain data
sources. This approach ensures privacy, promotes fairness, confronts prejudices and simultaneously underwrites
human–machine dependability. Undoubtedly, a fair and unbiased trial demands an equitable and flawless algorithm.
Pretrained data previously impacted by human biases might naturally introduce biases into the system. This principle
applies to all logical argumentation: soundness implies validity, but validity does not imply soundness.

While the optimal strategy depends on the specific context, addressing biases in pretrained datasets is considered
foundational due to their direct impact on biased outputs regardless of the model used. Once datasets are verified for
biases, evaluating algorithmic learning/training models becomes crucial to ensure that they do not introduce additional
biases. Furthermore, the review emphasizes the importance of scrutinizing processing models alongside algorithms and
training data to safeguard against biases introduced during human–machine interactions. Additionally, the review
highlights that the increase in false positives and false negatives in deterministic/predictive methods can be influenced
by both pretrained datasets and default settings of training models. Biased datasets are identified as a fundamental issue
leading to biased predictions, while adjusting model settings such as decision thresholds can impact the balance
between false positives and negatives.
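The trade-off between false positives and false negatives under different decision thresholds, noted above, can be illustrated with a toy sweep over invented scores and outcomes:

```python
def confusion_counts(scores, outcomes, threshold):
    """Count false positives (flagged but did not reoffend) and false
    negatives (not flagged but did reoffend) at a given threshold."""
    fp = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, outcomes) if s < threshold and y == 1)
    return fp, fn

# Invented risk scores and true outcomes (1 = reoffended)
scores   = [0.2, 0.35, 0.5, 0.65, 0.8, 0.9]
outcomes = [0,   1,    0,   1,    0,   1]

for t in (0.3, 0.6):
    fp, fn = confusion_counts(scores, outcomes, t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

Lowering the threshold flags more people, trading false negatives for false positives; raising it does the reverse. The threshold itself is therefore a policy choice, not a neutral default.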

These findings underscore the importance of meticulous consideration and calibration of both datasets and model settings to minimize errors and maintain unbiasedness and accuracy, ensuring the delivery of justice and governance by a fair algocracy. The concept of "bias in, bias out" captures a fundamental challenge in AI development, emphasizing the necessity of unbiased and representative data to mitigate the perpetuation of systemic biases. In contexts such as criminal justice, where AI-driven risk assessment tools can exacerbate existing biases, meticulous attention must be given to data collection and processing to foster fairness and accuracy in AI systems.
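The "bias in, bias out" dynamic can be sketched with a toy feedback-loop simulation in the spirit of the predictive-policing critiques discussed in this review; the districts, rates, and budgets below are all invented:

```python
def simulate(recorded, true_rates, patrol_budget=100, rounds=5):
    """Allocate patrols in proportion to recorded incidents, then let new
    records grow with patrol presence times the (equal) true crime rate."""
    for _ in range(rounds):
        total = sum(recorded)
        patrols = [patrol_budget * r / total for r in recorded]
        recorded = [rec + p * t for rec, p, t in zip(recorded, patrols, true_rates)]
    return recorded

# Two districts with identical true crime rates, but a skewed starting record.
final = simulate(recorded=[60, 40], true_rates=[0.5, 0.5])
share = final[0] / sum(final)
print(round(share, 2))  # the initial 60/40 skew persists: 0.6
```

Even with identical true crime rates, the district that starts over-represented in the records stays over-represented indefinitely: the data never correct themselves, which is precisely why the initial dataset must be scrutinized before any model is trained on it.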

Closing Remarks: In conclusion, this review has focused mainly on existing software currently in use across the globe, together with its performance analysis and the criticism it has drawn in the public domain. From bytes to bars, the review describes the use of AI algorithms either to send or keep offenders in jail, or at least to predict their likelihood of committing similar crimes. AI algorithms are thus now under public scrutiny, and their deterministic approach is likely to face public challenge. Such examination of AI algorithms stems from their contested efficacy in predictive policing, crime pattern analysis, and resource allocation.

This highlights the importance of careful calibration to minimize errors and ensure equitable outcomes, as these algorithms use previous crime data to forecast upcoming criminal activity and alert law enforcement. Nevertheless, the presence of biases in historical data poses issues, as discussed above, and may lead to the continuation of excessive policing of certain groups or classes of citizens. In the current scenario, AI uses advanced algorithms to analyze large datasets and detect trends and irregularities in criminal behavior.
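As a hedged sketch of such irregularity detection, a simple z-score rule can flag weeks whose counts deviate sharply from the mean; the weekly counts below are hypothetical, and real systems use far richer models:

```python
import statistics

def flag_anomalies(counts, z_cutoff=2.0):
    """Return indices whose counts deviate from the mean by more than
    z_cutoff sample standard deviations."""
    mean = statistics.mean(counts)
    sd = statistics.stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - mean) / sd > z_cutoff]

weekly_counts = [12, 14, 11, 13, 12, 40, 13, 12]  # invented; week 5 spikes
print(flag_anomalies(weekly_counts))  # → [5]
```

Even this toy rule inherits the data's flaws: if the counts reflect recording practices rather than underlying crime, the "anomaly" it flags may be an artifact of policing itself.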

Nevertheless, the effectiveness of these methods depends on the precision of the data, the strength of the algorithms, and the capacity to interpret the results. AI aids in optimizing resource allocation by forecasting regions that need heightened law enforcement. At the same time, ethical issues, algorithmic transparency, and accountability are of utmost importance. The use of AI in judicial courts needs to be closely examined, since it may lead to inconsistencies in sentencing. To fully realize the promise of AI while upholding fairness and ethical norms, it is crucial to adopt a comprehensive strategy built on collaboration among AI specialists, legal professionals, ethicists, and lawmakers. There is definite difficulty in determining the underlying source of the biases that produce false-positive and false-negative outcomes. The learning and training algorithms may also unintentionally magnify these biases, or prove ineffective at mitigating them when training proceeds under an unsupervised learning model. The pursuit of fairness, equality and equity, this study concludes, requires a comprehensive methodology. Thus, the key takeaway is finding, addressing and removing any form of bias at every stage of an AI pipeline to maintain fairness and accuracy in decision-making processes.


REFERENCES

[1]. A.K. Zakaria, ―AI applications in the criminal justice system: the next logical step or violation of human rights,‖
Journal of Law and Emerging Technologies, vol. 3, no. 2, pp. 233–257, Nov. 2023, doi:
10.54873/jolets.v3i2.124.
[2]. Aharoni, E., Vincent, G. M., Harenski, C. L., Calhoun, V. D., Sinnott‐Armstrong, W., Gazzaniga, M. S., & Kiehl,
K. A. (2013). Neuroprediction of future rearrest. Proceedings of the National Academy of Sciences of the United
States of America, 110(15), 6223–6228. https://doi.org/10.1073/pnas.1219302110
[3]. Abraham, A., Pedregosa, F., Eickenberg, M., Gervais, P., Mueller, A., Kossaifi, J., Gramfort, A., Thirion, B., &
Varoquaux, G. (2014b). Machine learning for neuroimaging with scikit-learn. Frontiers in Neuroinformatics, 8.
https://doi.org/10.3389/fninf.2014.00014
[4]. Ali, S., Abuhmed, T., El–Sappagh, S., Muhammad, K., Alonso-Moral, J. M., Confalonieri, R., Guidotti, R., Del
Ser, J., Díaz-Rodríguez, N., & Herrera, F. (2023). Explainable Artificial Intelligence (XAI): What we know and
what is left to attain Trustworthy Artificial Intelligence. Information Fusion, 99, 101805.
https://doi.org/10.1016/j.inffus.2023.101805
[5]. Anthony W. Flores, Kristin Bechtel, and Christopher T. Lowenkamp, "False Positives, False Negatives, and False
Analyses: A Rejoinder to 'Machine Bias'," Federal Probation Journal, United States Courts, Sep. 2016.
https://www.uscourts.gov/federal-probation-journal/2016/09/false-positives-false-negatives-and-false-analyses-rejoinder
[6]. Arpita Biswas and Suvam Mukherjee. 2021. Ensuring fairness under prior probability shifts. In Proceedings of
the 2021 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, New York,
NY, USA, 414–424. https://doi.org/10.1145/3461702.3462596
[7]. Ashraf, N.; Mehmood, D.; Obaidat, M.A.; Ahmed, G.; Akhunzada, A. Criminal Behavior Identification Using
Machine Learning Techniques Social Media Forensics. Electronics 2022, 11, 3162.
[8]. Belenguer, L. (2022, February 10). AI bias: exploring discriminatory algorithmic decision-making models and
the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI and ethics,
2(4), 771-787. https://doi.org/10.1007/s43681-022-00138-8
[9]. Blitz, M.J. (2017). Lie Detection, Mind Reading, and Brain Reading. In: Searching Minds by Scanning Brains.
Palgrave Studies in Law, Neuroscience, and Human Behavior. Palgrave Macmillan, Cham.
https://doi.org/10.1007/978-3-319-50004-1_3
[10]. Bzdok, D., and Meyer-Lindenberg, A. (2018). Machine learning for precision psychiatry: opportunities and
challenges. Biol. Psychiatry 3, 223–230
[11]. Catlett, C., Malik, T., Goldstein, B., Giuffrida, J., Shao, Y., Panella, A., Eder, D. N., Van Zanten, E., Mitchum, R.
M., Thaler, S., & Foster, I. (2014). Plenario: an open data discovery and exploration platform for urban science.
IEEE Data(Base) Engineering Bulletin, 37(4), 27–34. http://sites.computer.org/debull/A14june/p27.pdf
[12]. Catlett, E. Cesario, D. Talia, and A. Vinci, 2019, ‗‗Spatiotemporal crime predictions in smart cities: A data-driven
approach and experiments,‘‘ Pervas. Mobile Comput., vol. 53, pp. 62–74, Feb. 2019.
[13]. Caulfield, T., & Murdoch, B. (2017). Genes, cells, and biobanks: Yes, there's still a consent problem. PLoS
biology, 15(7), e2002654. https://doi.org/10.1371/journal.pbio.2002654
[14]. Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities &
Social Sciences Communications, 10(1). https://doi.org/10.1057/s41599-023-02079-x
[15]. Cognitive neural prosthetics, Annual Review of Psychology, 61, 169–190,
https://ab.harvard.edu/2018arXiv180109808A (accessed January 01, 2018).
[16]. Coppola F. (2018). Mapping the brain to predict antisocial behavior: new frontiers in neurocriminology,
‗new‘challenges for criminal justice. U.C.L. J. Jurisprud. Spec. 1 106–110.
[17]. D.M. Raza and D. B. Victor, ‗‗Data mining and region prediction based on crime using random forest,‘‘ in Proc.
Int. Conf. Artif. Intell. Smart Syst. (ICAIS), Mar. 2021, pp. 980–987.
[18]. Daly, A., Hagendorff, T., Li, H., Mann, M., Marda, V., Wagner, B., Wang, W W., & Witteborn, S. (2019,
January 1). Artificial Intelligence, Governance and Ethics: Global Perspectives.
https://doi.org/10.2139/ssrn.3414805
[19]. Dana Pessach and Erez Shmueli. 2022. A review on fairness in machine learning. ACM Computing Surveys
(CSUR) 55, 3 (2022), 1–44.
[20]. Daniel Varona and Juan Luis Suárez. 2022. Discrimination, Bias, Fairness, and Trustworthy AI. Applied
Sciences 12, 12 (2022), 5826
[21]. Davinder Kaur, Suleyman Uslu, Kaley J. Rittichier, and Arjan Durresi. 2022. Trustworthy artificial intelligence:
a review. ACM Computing Surveys (CSUR) 55, 2 (2022), 1–38.
[22]. Douglas, T., Pugh, J., Singh, I., Savulescu, J., and Fazel, S. (2017). Risk assessment tools in criminal justice and
forensic psychiatry: the need for better data. Eur. Psychiatry 42, 134–137. doi: 10.1016/j.eurpsy.2016.12.009
[23]. E. P. Utomo, ‗‗Prediction the crime motorcycles of theft using ARIMAXTFM with single input,‘‘ in Proc. 3rd
Int. Conf. Informat. Comput. (ICIC), Oct. 2018, pp. 1–7
[24]. Eickhoff, S. B., & Langner, R. (2019, November 14). Neuroimaging-based prediction of mental traits: Road to
utopia or Orwell? PLoS Biology, 17(11), e3000497. https://doi.org/10.1371/journal.pbio.3000497
[25]. Eike Petersen, Melanie Ganz, Sune Holm, and Aasa Feragen. 2023. On (assessing) the fairness of risk score
models. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. 817–829

[26]. F. Contini, ―Artificial intelligence and the transformation of humans, law and technology interactions in judicial
proceedings,‖ Law, Technology and Humans, vol. 2, no. 1, pp. 4–18, May 2020, doi: 10.5204/lthj.v2i1.1478.
[27]. F. Lagioia, R. Rovatti, and G. Sartor, ―Algorithmic fairness through group parities? The case of COMPAS-
SAPMOC,‖ AI & SOCIETY, vol. 38, no. 2, pp. 459–478, Apr. 2022, doi: 10.1007/s00146-022-01441-y.
[28]. Farayola, M M., Tal, I., Bendechache, M., Saber, T., & Connolly, R. (2023, August 29). Fairness of AI in
Predicting the Risk of Recidivism: Review and Phase Mapping of AI Fairness Techniques.
https://doi.org/10.1145/3600160.3605033
[29]. Felix Petersen, Debarghya Mukherjee, Yuekai Sun, and Mikhail Yurochkin. 2021. Postprocessing for individual
fairness. Advances in Neural Information Processing Systems 34 (2021), 25944–25955
[30]. Fellous, J., Sapiro, G., Rossi, A. F., Mayberg, H. S., & Ferrante, M. (2019). Explainable artificial intelligence for
neuroscience: behavioral neurostimulation. Frontiers in Neuroscience, 13.
https://doi.org/10.3389/fnins.2019.01346
[31]. Ferrer, X. (2021, August 9). Bias and Discrimination in AI: A Cross-Disciplinary Perspective - IEEE
Technology and Society. IEEE Technology and Society. https://technologyandsociety.org/bias-and-
discrimination-in-ai-a-cross-disciplinary-perspective/
[32]. Flavio du Pin Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R
Varshney. 2018. Data preprocessing for discrimination prevention: Information-theoretic optimization and
analysis. IEEE Journal of Selected Topics in Signal Processing 12, 5 (2018), 1106–1119. https://doi.org/10.
1109/JSTSP.2018.2865887
[33]. G.Van Dijck, ―Predicting Recidivism Risk Meets AI Act,‖ European Journal on Criminal Policy and Research,
vol. 28, no. 3, pp. 407–423, Jun. 2022, doi: 10.1007/s10610-022-09516-8.
[34]. Gaudet, Lyn M. and Kerkmans, Jason and Anderson, Nathaniel and Kiehl, Kent, Can Neuroscience Help Predict
Future Antisocial Behavior? (September 29, 2016). Fordham Law Review, Vol. 85, No. 2, 2016, Available at
SSRN: https://ssrn.com/abstract=2862083
[35]. Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. 2017. On fairness and
calibration. Advances in neural information processing systems 30 (2017).
[36]. Ghandour, L., Yasmine, R., & El-Kak, F. (2013, July 1). Giving Consent without Getting Informed: A Cross-
Cultural Issue in Research Ethics. https://doi.org/10.1525/jer.2013.8.3.12
[37]. Greely, H. T., & Farahany, N. A. (2019). Neuroscience and the criminal justice system. Annual Review of
Criminology, 2(1), 451–471. https://doi.org/10.1146/annurev-criminol-011518-024433
[38]. H. R. S. A. Shamsi and S. Safei, ―Artificial intelligence adoption in predictive policing to predict crime
mitigation performance,‖ International Journal of Sustainable Construction Engineering and Technology, vol. 14,
no. 3, Sep. 2023, doi: 10.30880/ijscet.2023.14.03.025.
[39]. Hampshire, A., & Sharp, D. J. (2015). Contrasting network and modular perspectives on inhibitory
control. Trends in cognitive sciences, 19(8), 445–452. https://doi.org/10.1016/j.tics.2015.06.006
[40]. M. Lindquist, W. Thompson, J. Aston, Handbook of Neuroimaging Data Analysis, New York: Chapman &
Hall/CRC, 2017.
[41]. Hassani, X. Huang, E. S. Silva, and M. Ghodsi, ―A review of data mining applications in crime,‖ Statistical
Analysis and Data Mining, vol. 9, no. 3, pp. 139–154, Apr. 2016, doi: 10.1002/sam.11312.
[42]. Hayward, K., & Maas, M. M. (2020). Artificial intelligence and crime: A primer for criminologists. Crime,
Media, Culture, 17(2), 209–233. https://doi.org/10.1177/1741659020917434
[43]. Herron, J. A., Thompson, M. C., Brown, T., Chizeck, H., Ojemann, J. G., & Ko, A. L. (2017). Cortical Brain–
Computer Interface for Closed-Loop Deep Brain Stimulation. IEEE Transactions on Neural Systems and
Rehabilitation Engineering, 25(11), 2180–2187. https://doi.org/10.1109/tnsre.2017.2705661
[44]. Holzinger A., Malle B., Kieseberg P., Roth P. M., Müller H., Reihs R., et al. (2017b). Toward the Augmented
Pathologist: Challenges of Explainable-AI in Digital Pathology. arXiv [Preprints] Available
at: https://ui.adsabs.harvard.edu/abs/2017arXiv171206657H (accessed December 01, 2017)
[45]. Ienca, M. (2017, August 1). Preserving the Right to Cognitive Liberty.
https://www.scientificamerican.com/article/preserving-the-right-to-cognitive-liberty/
[46]. Ienca, M., & Andorno, R. (2017, December 25). Toward new human rights in the age of neuroscience and
neurotechnology. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5447561/
[47]. Jackson, B. A., Banks, D., Woods, D., & Dawson, J. C. (2017, January 10). Future-Proofing Justice: Building a
research agenda to address the effects of technological change on the protection of constitutional rights. RAND.
https://www.rand.org/pubs/research_reports/RR1748.html
[48]. Janiesch, P. Zschech, and K. Heinrich, ‗‗Machine learning and deep learning,‘‘ Electron. Mark., vol. 31, no. 3,
pp. 685–695, Apr. 2021.
[49]. Jiaming Zeng, Berk Ustun, and Cynthia Rudin. 2017. Interpretable classification models for recidivism
prediction. Journal of the Royal Statistical Society: Series A (Statistics in Society) 180, 3 (2017), 689–722.
https://doi.org/10.1111/rssa.12227
[50]. Jiang, J., Shang, X., Wang, X., Chen, H., Li, W., Wang, Y., & Xu, J. (2021). Nitrous oxide‐related neurological
disorders: Clinical, laboratory, neuroimaging, and electrophysiological findings. Brain and Behavior, 11(12).
https://doi.org/10.1002/brb3.2402


[51]. Johanson, M., Vaurio, O., Tiihonen, J., & Lähteenvuo, M. (2020). A Systematic Literature Review of
Neuroimaging of Psychopathic Traits. Frontiers in Psychiatry, 10. https://doi.org/10.3389/fpsyt.2019.01027
[52]. Jones, O D., Bonnie, R J., Casey, B J., Davis, A., Faigman, D L., Hoffman, M B., Montague, R., Morse, S J.,
Raichle, M E., Richeson, J A., Scott, E S., Steinberg, L., Taylor-Thompson, K., Wagner, A D., & Yaffe, G. (2014,
June 1). Law and neuroscience: recommendations submitted to the President's Bioethics Commission. Journal of
law and the biosciences, 1(2), 224-236. https://doi.org/10.1093/jlb/lsu012
[53]. Kang HW, Kang HB (2017) Prediction of crime occurrence from multimodal data using deep learning. PLoS
One 12(4):e0176244. https://doi.org/10.1371/journal.pone.0176244
[54]. Kanwel, S., Khan, M. I., & Usman, M. (2023). From Bytes to Bars: The Transformative Influence of Artificial
Intelligence on Criminal Justice. Qlantic Journal of Social Sciences, 4(4), 84-
89. https://doi.org/10.55737/qjss.059046443
[55]. Klein, E., & Ojemann, J G. (2016, June 1). Informed consent in implantable BCI research: identification of
research risks and recommendations for development of best practices. Journal of neural engineering, 13(4),
043001-043001.
[56]. L. Tortora, G. Meynen, J. W. J. Bijlsma, E. Tronci, and S. Ferracuti, ―Neuroprediction and A.I. in Forensic
Psychiatry and Criminal justice: A Neurolaw perspective,‖ Frontiers in Psychology, vol. 11, Mar. 2020, doi:
10.3389/fpsyg.2020.00220.
[57]. L.G. A. Alves, H. V. Ribeiro, and F. A. Rodrigues, ‗‗Crime prediction through urban metrics and statistical
learning,‘‘ Phys. A, Stat. Mech. Appl., vol. 505, pp. 435–443, Sep. 2018.
[58]. Langlotz C. P., Allen B., Erickson B. J., Kalpathy-Cramer J., Bigelow K., Cook T. S., et al. (2019). A Roadmap
for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The
Academy Workshop. Radiology 291 781–791. 10.1148/radiol.2019190613
[59]. Lavazza, A. (2018, February 19). Freedom of Thought and Mental Integrity: The Moral Requirements for Any
Neural Prosthesis. Frontiers in neuroscience, 12. https://doi.org/10.3389/fnins.2018.00082
[60]. Ligthart SLTJ, ‗Coercive Neuroimaging, Criminal Law, and Privacy: A European Perspective‘ (2019) 6 Journal
of Law and the Biosciences.
[61]. Loll, A. Automated Fingerprint Identification Systems (AFIS). In Encyclopedia of Forensic Sciences, 2nd ed.;
Academic Press: Cambridge, MA, USA, 2013; pp. 86–91.
[62]. Lu Zhang, Yongkai Wu, and Xintao Wu. 2016. A causal framework for discovering and removing direct and
indirect discrimination. arXiv preprint arXiv:1611.07509 (2016)
[63]. Lum, K., & Isaac, W. (2016). To predict and serve? Significance, 13(5), 14–19. https://doi.org/10.1111/j.1740-
9713.2016.00960.x
[64]. M. Saraiva, I. Matijosaitiene, S. Mishra, and A. Amante, ‗‗Crime prediction and monitoring in Porto, Portugal,
using machine learning, spatial and text analytics,‘‘ ISPRS Int. J. Geo-Inf., vol. 11, no. 7, p. 400, Jul. 2022.
[65]. Mark MacCarthy. 2017. Standards of fairness for disparate impact assessment of big data algorithms. Cumb. L.
Rev. 48 (2017), 67
[66]. McClendon L, Meghanathan N (2015) Using machine learning algorithms to analyze crime data. Mach Lear
Appl Int J 2(1):1–12. https://doi.org/10.5121/mlaij.2015.2101
[67]. Meijer and M. Wessels, ‗‗Predictive policing: Review of benefits and drawbacks,‘‘ Int. J. Public Admin., vol. 42,
no. 12, pp. 1031–1039, Sep. 2019.
[68]. Meynen, G. (2019). Forensic psychiatry and neurolaw: Description, developments, and debates. International
Journal of Law and Psychiatry, 65, 101345. https://doi.org/10.1016/j.ijlp.2018.04.005
[69]. Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. 2015.
Certifying and removing disparate impact. In proceedings of the 21th ACM SIGKDD international conference
on knowledge discovery and data mining. 259–268
[70]. Mingyang Wan, Daochen Zha, Ninghao Liu, and Na Zou. 2023. In-processing modeling techniques for machine
learning fairness: A survey. ACM Transactions on Knowledge Discovery from Data 17, 3 (2023), 1–27.
[71]. Mirabella, G., & Lebedev, M. A. (2017). Interfacing to the brain's motor decisions. Journal of
Neurophysiology, 117(3), 1305–1319. https://doi.org/10.1152/jn.00051.2016
[72]. Morse, S. J. (n.d.). Neuroprediction: new technology, old problems. Penn Carey Law: Legal Scholarship
Repository. https://scholarship.law.upenn.edu/faculty_scholarship/1619/
[73]. Mugdha Dwivedi, ―The Tomorrow of Criminal Law: Investigating the Application of Predictive Analytics and
AI in the Field of Criminal Justice‖ 11 International Journal of Creative Research Thoughts a499-a501 (2023).
[74]. Muñoz, J M. (2023, March 17). Achieving cognitive liberty.
https://www.science.org/doi/10.1126/science.adf8306
[75]. Neil Shah, Nandish Bhagat, Manan Shah, ―Crime forecasting: a machine learning and computer vision approach
to crime prediction and prevention‖, Visual Computing for Industry, Biomedicine and Art, vol. 4:9, 2021
[76]. Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. A survey on
bias and fairness in machine learning. ACM Computing Surveys (CSUR) 54, 6 (2021), 1–35.
[77]. Northpointe Inc., Dieterich, W., Ph. D., Mendoza, C., M. S., & Brennan, T., Ph. D. (2016). COMPAS Risk
Scales: Demonstrating accuracy equity and predictive parity performance of the COMPAS risk scales in
Broward County. https://go.volarisgroup.com/rs/430-MBX-
989/images/ProPublica_Commentary_Final_070616.pdf

[78]. Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M., Ruggieri, S., Turini, F.,
Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernández,
M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., . . . Staab, S. (2020, February 3). Bias in data‐driven
artificial intelligence systems—An introductory survey. https://doi.org/10.1002/widm.1356
[79]. Ombao H., Lindquist M., Thompson W., Aston J. (2017). Handbook of Neuroimaging Data Analysis. New
York: Chapman and Hall/CRC
[80]. Poldrack, R A., Monahan, J., Imrey, P B., Reyna, V F., Raichle, M E., Faigman, D L., & Buckholtz, J W. (2018,
February 1). Predicting Violent Behavior: What Can Neuroscience Add?. Trends in cognitive sciences, 22(2),
111-123. https://doi.org/10.1016/j.tics.2017.11.003
[81]. Rebers, S., Aaronson, N K., Leeuwen, F E V., & Schmidt, M K. (2016, February 6). Exceptions to the rule of
informed consent for research with an intervention. BMC medical ethics, 17(1).
https://doi.org/10.1186/s12910-016-0092-6
[82]. Redish, A. D., Gordon, J. A., et al., (2016). Breakdowns and Failure Modes: An Engineer‘s View. In
Strüngmann Forum Reports (Vol. 20). MIT Press.
https://archives.esforum.de/publications/sfr20/chaps/SFR20_02%20Redish%20and%20Gordon.pdf
[83]. Richard A Berk, Arun Kumar Kuchibhotla, and Eric Tchetgen Tchetgen. 2023. Fair Risk Algorithms. Annual
Review of Statistics and Its Application 10 (2023), 165–187.
[84]. Rudin, ―Stop explaining black box machine learning models for high stakes decisions and use interpretable
models instead,‖ Nature Machine Intelligence, vol. 1, no. 5, pp. 206–215, May 2019, doi: 10.1038/s42256-019-
0048-x.
[85]. S. Goel, R. Shroff, J. Skeem, and C. Slobogin, ‗‗The accuracy, equity, and jurisprudence of criminal risk
assessment,‘‘ in Research Handbook on Big Data Law. Cheltenham, U.K.: Edward Elgar Publishing, 2021, pp.
9–28.
[86]. S. Hossain, A. Abtahee, I. Kashem, M. M. Hoque, and I. H. Sarker, ‗‗Crime prediction using spatiotemporal
data,‘‘ in Computing Science, Communication and Security. Gujarat, India: Springer, 2020, pp. 277–289.
[87]. S. Kim, P. Joshi, P. S. Kalsi, and P. Taheri, ‗‗Crime analysis through machine learning,‘‘ in Proc. IEEE 9th Annu.
Inf. Technol., Electron. Mobile Commun. Conf. (IEMCON), Nov. 2018, pp. 415–420.
[88]. Schultze-Kraft, M., Birman, D., Rusconi, M., Allefeld, C., Görgen, K., Dähne, S., Blankertz, B., & Haynes, J. D.
(2016). The point of no return in vetoing self-initiated movements. Proceedings of the National Academy of
Sciences of the United States of America, 113(4), 1080–1085. https://doi.org/10.1073/pnas.1513569112
[89]. Shah, U. S., Dave, I., Malde, J., Mehta, J., & Kodeboyina, S. (2021, April 2). Maintaining Privacy in Medical
Imaging with Federated Learning, Deep Learning, Differential Privacy, and Encrypted Computation.
https://doi.org/10.1109/i2ct51068.2021.9417997
[90]. Shams, R A., Zowghi, D., & Bano, M. (2023, January 1). Challenges and Solutions in AI for All.
https://doi.org/10.48550/arxiv.2307.10600
[91]. Sidra Kanwel, Muhammad Imran Khan, et al., "From Bytes to Bars: The Transformative Influence of
Artificial Intelligence on Criminal Justice" 4 Journal of Legal Studies and Research, The Law Brigade
(Publishing) Group 84–89 (2023).
[92]. Sommaggio, P., Mazzocca, M., Gerola, A., & Ferro, F. (2017, November 1). Cognitive liberty. A first step
toward a human neuro-rights declaration. BioLaw Journal - Rivista di BioDiritto, 11(3), 27-45.
https://doi.org/10.15168/2284-4503-255
[93]. Soto, J. M. D., & Borbón, D. (2022). Neurorights vs. neuroprediction and lie detection: The imperative limits
to criminal law. Frontiers in Psychology, 13. https://doi.org/10.3389/fpsyg.2022.1030439
[94]. Srinivasan, R., & Chander, A. (2021, July 26). Biases in AI systems. https://doi.org/10.1145/3464903
[95]. T. Chandrakala, S. Nirmala Sugirtha Rajini, K. Dharmarajan, K. Selvam, Development of Crime and Fraud
Prediction using Data Mining Approaches. International Journal of Advanced Research in Engineering and
Technology, 11(12), 2020, pp. 1450-1470.
http://www.iaeme.com/IJARET/issues.asp?JType=IJARET&VType=11&IType=12
[96]. Taylor, ―Justice by Algorithm: The limits of AI in criminal Sentencing,‖ Criminal Justice Ethics, vol. 42, no. 3,
pp. 193–213, Sep. 2023, doi: 10.1080/0731129x.2023.2275967.
[97]. U. Islam et al., ―Investigating the effectiveness of novel support vector neural network for anomaly detection in
digital forensics data,‖ Sensors, vol. 23, no. 12, p. 5626, Jun. 2023, doi: 10.3390/s23125626.
[98]. Umbach, R., Berryessa, C. M., & Raine, A. (2015). Brain imaging research on psychopathy: Implications for
punishment, prediction, and treatment in youth and adults. Journal of Criminal justice, 43(4), 295–306.
https://doi.org/10.1016/j.jcrimjus.2015.04.003
[99]. V. Ingilevich and S. Ivanov, ‗‗Crime rate prediction in the urban environment using social factors,‘‘ Proc.
Comput. Sci., vol. 136, pp. 472–478, Jan. 2018.
[100]. V. Mandalapu, L. Elluri, P. Vyas, and N. Roy, ―Crime Prediction using Machine Learning and Deep Learning: A
Systematic review and Future Directions,‖ IEEE Access, vol. 11, pp. 60153–60170, Jan. 2023, doi:
10.1109/access.2023.3286344.
[101]. Varun VM, ―Role of Artificial Intelligence in Improving the Criminal justice System in India‖ 6 JOURNAL of
LEGAL STUDIES and RESEARCH, the Law Brigade (Publishing) Group 63-69 (2023).


[102]. Vu M. T., Adali T., Ba D., Buzsaki G., Carlson D., Heller K., et al. (2018). A shared vision for machine learning
in neuroscience. J. Neurosci. 38 1601–1607. 10.1523/JNEUROSCI.0508-17.2018
[103]. W. Safat, S. Asghar, and S. A. Gillani, ‗‗Empirical analysis for crime prediction and forecasting using machine
learning and deep learning techniques,‘‘ IEEE Access, vol. 9, pp. 70080–70094, 2021.
[104]. Z. J. Fernando, R. Rosmanila, L. Ratna, A. Cholidin, and B. P. Nunna, 2023, ―The role of Neuroprediction and
Artificial intelligence in the Future of Criminal Procedure Support Science: A New Era in Neuroscience and
Criminal justice,‖ Yuridika, vol. 38, no. 3, pp. 593–620, Sep. 2023, doi: 10.20473/ydk.v38i3.46104.
[105]. Zhuang Y, Almeida M, Morabito M, Ding W (2017) Crime hot spot forecasting: a recurrent model with spatial
and temporal information. Paper presented at the IEEE international conference on big knowledge. IEEE, Hefei
9-10 August 2017. https://doi.org/10.1109/ICBK.2017.
