
Artificial Intelligence, Law and Justice

Session 1

Introductory Session

Dr. Krishna Ravi Srinivas


Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Topics and Themes
• Introduction to AI in Law and Justice
• AI in Law and Justice in India
• AI in Law and Justice in select Jurisdictions
• AI, Algorithmic Decision Making & Governance
• AI Ethics, Responsible AI and Explainable AI
• AI and Intellectual Property Rights
• AI and applications in Law and Practice
• Legal Tech, AI and Future of Law
• Governance of AI and Its Relevance for Law and Justice
Uses of AI in Our Lives

• Email filtering – spam vs. non-spam
• Online search queries
• Chatbots
• Voice assistants such as Alexa
What is Artificial Intelligence (AI)
• Defining AI
• We need a definition of AI to understand its functions, limitations and applications
• But the technology is developing fast
• This makes AI difficult to define and its trajectory hard to predict
Defining AI

Artificial intelligence refers to the development of computer systems


that can perform tasks that typically require human intelligence
Defining AI

International Telecommunication Union’s Definition

AI refers to the ability of a computer or a computer-enabled


robotic system to process information and produce outcomes in
a manner similar to the thought process of humans in learning,
decision-making, and problem solving.
Objective of Developing AI

In a way, the goal of AI is to develop systems capable of tackling complex problems in ways similar to human logic and reasoning.
Generative AI

AI that can create new content: audio, video, audio-visuals, code, images, text, simulations, and other similar outputs.
Generative AI

This heralds a new dimension: AI creating what humans create, simulating human creativity in various forms.
Three Components of AI

1) Machine Learning
2) Natural Language Processing
3) Deep Learning
Machine Learning (ML)

A subset of AI that involves training algorithms to learn from data and make predictions or decisions.
Natural Language Processing (NLP)
A subset of AI that deals with the interaction between computers and humans in natural language.
Deep Learning
According to IBM Deep Learning is a subset of machine learning that uses
multilayered neural networks, called deep neural networks, to simulate the
complex decision-making power of the human brain.
https://www.ibm.com/think/topics/deep-learning
Algorithms

• An algorithm is a set of instructions for performing tasks including


calculations.

• This is similar to step-by-step calculation and decision making.
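As a minimal sketch (a hypothetical example in Python, not taken from the slides), the steps of a simple algorithm can be written out explicitly, here computing an average mark and checking it against a cut-off:

# A simple algorithm: step-by-step instructions for a calculation and a decision.
# Hypothetical example: compute the average mark and decide pass/fail.

def average_mark(marks):
    total = 0
    for mark in marks:          # step 1: add up all the marks
        total += mark
    return total / len(marks)   # step 2: divide by the number of marks

def has_qualified(marks, cutoff=40):
    return average_mark(marks) >= cutoff   # step 3: compare with the cut-off

print(has_qualified([55, 62, 38]))  # True (average 51.67 >= 40)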


Human and Computer Learning

• A child can easily tell a cat from a mouse, a mouse from a dog, a dog from a bird, a bird from a toy. A child need not see hundreds of dogs, cats, and mice to learn this

• Children learn from environment and others


Human and Computer Learning
• Children are taught and then they learn on their own.

• Can computers do this? Can they learn on their own?


Human and Computer Learning

• But a computer or AI system cannot learn this way.

• It needs a great deal of data, such as images, to learn to recognize a cat and to distinguish it from a dog.
Human and Computer Learning

• They need to be trained. For that, data and data sets are necessary.
• The data sets can include pictures and images.
• That is why Machine Learning needs huge quantities of data to learn and analyze.
Algorithm and Rules

• Rule-based decision making is possible with algorithms. The rules can be simple.
➢ Only a major can get a driving license.
➢ Only a child above 5 years can be admitted to 1st standard.
➢ To qualify, a minimum mark is needed.
Algorithm and Rules
• Rules can be expressed as decision trees; depending upon the condition and the answer, next steps can be added
• Multiple conditions can be part of a decision tree
Law and Decision Trees
• A decision tree is similar to applying legal rules which have conditions.
• Based on facts and conditions, a decision can be arrived at.
Decision Tree on Vote
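The original slide carries a decision-tree figure on voting. As a minimal sketch (the conditions and wording are hypothetical, for illustration only), such a tree can be expressed as nested conditions in Python:

# Hypothetical voting-eligibility decision tree (illustrative only).
def can_vote(age, is_citizen, is_registered):
    if not is_citizen:            # condition 1: citizenship
        return "Not eligible: not a citizen"
    if age < 18:                  # condition 2: minimum age
        return "Not eligible: below 18"
    if not is_registered:         # condition 3: enrolment on the electoral rolls
        return "Eligible but must register first"
    return "Eligible to vote"

print(can_vote(age=20, is_citizen=True, is_registered=True))   # Eligible to vote
print(can_vote(age=16, is_citizen=True, is_registered=False))  # Not eligible: below 18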
Algorithm and Decision Making

• We can use Algorithms to help in Rule Based Decision Making.


• Rules can be reworked as Algorithms
Complex Conditions and Loops

• In a complex tree there will be many conditional nodes and sub-nodes.


• Many trees can be combined and output from one tree can be input for another
AI, Algorithm and Decision Making

• Using Algorithms AI systems can arrive at Decisions.


• Algorithmic Decision Making (ADM) is an important use of AI in Law and Justice
Algorithmic Decision Making
• There are pros and cons in using ADM in Law and Justice.
• Will ADM partially or fully replace Human element in Decision Making
Examples

ADM is being used to decide on grant of parole, bail and sentencing.

An AI system can decide on these with data and algorithms and can give reasons for its decisions.
Positive & Negative Aspects
• Quick Decision
• Unbiased, no human emotion
• Reason and Data based
• Reduce burden on Judicial System
Positive & Negative Aspects

• The algorithm can be faulty
• Biases can be inherent in the system's learning
• Lack of contextual understanding
• Missing human empathy and consideration
Summing Up

In this Session 1 we learnt


• What is AI
• What are Components of AI
• Algorithm and Decision Making
• Use of Algorithm in Law and Justice
In the Next Session

In the next session we will learn more on


• Machine Learning
• Deep Learning
• Use of Machine Learning in Law
• Fundamental Ideas in Law and Justice
Artificial Intelligence, Law and Justice

Session 2

RULE OF LAW
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Centre of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Rule of Law

• The Rule of law is a fundamental legal principle.


• In simple terms it means that nobody is above the law
• The Supreme Court has declared it a basic feature of the Constitution of India
• The core idea is that the nation is governed by law and not by the arbitrary decision of any authority
Rule of Law

• The Supreme Court has upheld and emphasized the


rule of law through landmark judgments.
• In Kesavananda Bharati v. State of Kerala (1973),
Rule of Law was established as a basic feature of the
Constitution.
• This case is also known as the case that resulted in the affirmation of the basic structure doctrine
• Rule of Law is a dynamic concept that has evolved over centuries
Origins of Rule of Law Concept

• Its conceptual origins can be traced to Aristotle


• Later many thinkers, including John Locke and Machiavelli, developed it
• Subsequently many others, including A. V. Dicey, F. A. Hayek and John Rawls, enriched it
• Today it is a well recognized constitutional law principle
Prof. Dicey and Rule of Law

• Prof. Dicey is a key theoretician of this principle


• According to him three essential principles are,
supremacy of law, equality before law, and predominance
of legal spirit
• He discussed this in 'The Law of the Constitution' (1885)
• Subsequently many others have elaborated and enriched
this concept
Prof. Dicey and Rule of Law

• Supremacy of Law: No person can be subjected to punishment, deprived of property or forced to suffer in body unless (s)he has violated the law(s) and the same is proven in a duly established court of law.
• Equality before Law: No person is above the law or totally
exempt from the laws of the land
• These two constitute the core of any constitutional
democracy where Rule of Law is the guiding principle
The Spirit of Law

• The courts act as independent enforcers and interpreters of


the rule of law.
• They are autonomous, free from external influences and are
not above law.
• Independence of Judiciary and Judicial Powers emanate
from Constitution
• Thus the Judiciary is an institution that is not subservient to the Government
Constitution of India & Rule of Law
• Preamble: The Preamble underscores the principles of equality, justice and liberty
• Article 14 guarantees equality before law and equal
protection under law
• This is almost identical to Dicey’s ideas on Equality and
Rule of Law
• The Right to Life and personal liberty under Article 21
cannot be negated or limited or curtailed except through a
process established by law
• The Supreme Court has expanded the scope and meaning of
Right to Life under Article 21
Key Judgements on Rule of Law
• In addition to the Kesavananda Bharati judgment, many judgments have affirmed and interpreted Rule of Law
• Indira Gandhi v. Raj Narain: In this very significant case, the Supreme Court held that the doctrine of "Rule of Law" under Article 14 forms the 'Basic Structure' of the Constitution. In other words, nobody can assert that (s)he is above the law, nor can any law give that right
• ADM Jabalpur v. Shivkant Shukla: This is also known as the 'Habeas Corpus' case.
• The majority view upheld the absolute right of the State, but the dissent by Justice H. R. Khanna emphasized that even in the absence of Article 21, the State has no absolute power to deprive any person of freedom
Key Judgements on Rule of Law

• Other key cases in which the Supreme Court has affirmed,


elaborated upon and expanded the understanding of Rule of
Law include Maneka Gandhi Vs. Union of India
• In this case the Court held that any law restricting personal liberty must be reasonable, fair, and just. It also held that such a law should adhere to the principles of equality, freedom, and the right to a fair procedure enshrined respectively in Articles 14, 19, and 21.
Limits to Rule of Law

• The Constitution of India grants discretionary power to the President and Governors regarding commutation, suspension and pardon for convicts (Articles 72 and 161)
• Immunities to the President and Governors
• Immunities for Diplomats under International Law and
Practice
• Discretionary Power including power to arrest under
different laws
Rule of Law, Society and State

• Roger Brownsword has conceptualized Rule of Law as a contract between law makers, law interpreters and law enforcers on the one hand, and citizens on the other.
• In this view the actions of the former will always be as per law, and the latter will abide by decisions made in accordance with law and legal rules
• Thus no one is above the law, and the law is binding on the former as well as the latter
Rule of Law-Institutions and
Principles
• According to World Justice Project “The rule of law is a
durable system of laws, institutions, norms, and community
commitment that delivers four universal principles:
accountability, just law, open government, and accessible and
impartial justice.”
• https://worldjusticeproject.org/about-us/overview/what-rule-
law
Digital Technologies and Rule of
Law
• Digital Technologies can impact Rule of Law positively as
well as negatively.
• https://press.un.org/en/2023/gal3694.doc.htm
• They can enhance efficiency of legal system, deliver
effective and quicker justice.
• But they can also adversely affect Rule of Law as they can
negatively impact human rights, reduce accountability, and
enhance bias and discrimination
• In this course we will focus on the role of AI in both aspects
Next Session : AI and Data

• Key concepts and practices relating to AI and Data


• As Data is an important resource for AI and governance of data has
implications for AI
• Discussion on how issues in data can impact positively and
negatively development and application of AI
Thank You
Artificial Intelligence, Law and Justice

Session 3

Data and AI
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Centre of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap-Rule of Law

• Development of the Rule of Law and its key features.


• Contextualization of Rule of Law for India and Indian
Constitution.
• Some important cases on Rule of Law
• Recent analysis on Rule of Law
• Role of Digital Technologies in Rule of Law
Why Data Matters in/for AI

• Data can be considered the resource with which AI 'trains' machines
• AI algorithms need access to data to perform tasks
• For machine learning and other purposes, huge quantities of data are needed in AI
Datafication, Society and AI

• Day in and day out we consume, generate and share data, knowingly or unknowingly
• Earlier societies too generated, used and processed data
• With progress in technology the capacity to deal with data increased
• But digital technologies have enabled the generation, organization and use of data at a scale that was not possible before
Datafication, Society and AI
• Datafication can be linked with the rise of what is known as 'Big Data'
• Datafication can be understood as transforming something into data
• According to Mejias and Couldry, "datafication combines two processes: the transformation of human life into data through processes of quantification, and
• the generation of different kinds of value from data.
• Despite its clunkiness, the term datafication is necessary because it signals a historically new method of quantifying elements of life that until now were not quantified to this extent." Mejias, U. A. & Couldry, N. (2019). Datafication. Internet Policy Review, 8(4). DOI: 10.14763/2019.4.1428
Datafication and AI

• The relationship between AI and data is twofold
• AI needs huge quantities of data, and hence datafication enables the development and deployment of AI
• But AI also facilitates datafication by creating fresh data and outputs, making it an efficient and widespread process
• So AI is inseparable from datafication
Metadata

• Metadata is data about data


• A digital image often has information about the location, date and time, and the phone/device used to capture it
• Unstructured data has little value, however huge it may be
• But structured data or information can be used in ML
• It can be evaluated, analysed and compared by algorithms
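As a minimal sketch (the file name is hypothetical and the Pillow library is assumed to be available), image metadata such as date, time and device can be read programmatically:

# Reading metadata (EXIF) from a digital image - a hypothetical illustration.
from PIL import Image, ExifTags   # Pillow library, assumed installed

img = Image.open("photo.jpg")     # hypothetical file name
exif = img.getexif()              # EXIF metadata: data about the image data

for tag_id, value in exif.items():
    tag_name = ExifTags.TAGS.get(tag_id, tag_id)   # e.g. DateTime, Make, Model
    print(tag_name, ":", value)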
Data Quality, Algorithms

• However, algorithms need data that is accurate, reliable and fit for the purpose
• Hence the quality of data, including its authenticity and accuracy, makes a difference
• If data is of poor quality or unreliable, then it can result in a 'GIGO' outcome
• GIGO is Garbage In, Garbage Out
Data Quality, Algorithms

• There can be issues with data – errors, irrelevance, inaccuracy and inadequacy
• Data may carry biased information and may not be as representative as needed
• For example, if data from clinical trials is not of the right type because the trial did not include women or the rural population, or their representation was inadequate,
• then an algorithm that uses that data for training might produce faulty or wrong inferences
Bias and Discrimination

• If the data is inherently biased and discriminatory then


the algorithms trained on this data might give outputs
that reflect the same bias and discrimination
• This is a huge issue in use of data and algorithms in
criminal justice system
• If the data shows a skewed representation of a certain
category of persons in data pertaining to crimes and
punishment the algorithm imbibing this bias can repeat it
in outcomes or recommendations
• This is also related to ‘Data Invisibility’
FAIR Data

• The issues could also be due to lack of digitization or


inadequate digitization
• So when using and accessing digital data it is better to ask
‘FAIR’
• Is it Findable
• Is it Accessible
• Is it Interoperable, and,
• Is it Reusable
• Developing adequate quality controls, check lists and
criteria for use of data is desirable
Data Governance and Ownership

• As data gets digitized, sharing, storing and using it becomes easier and cheaper, and storing it in multiple media and locations becomes feasible
• But data ownership and usage can be restricted by intellectual property and other rights
• Although data may be in digital form, it can be treated like physical property for ownership and other claims
• More importantly, privacy is also a factor and a matter of concern
Data Governance, Law and AI

https://barc.com/data-governance/
Data Governance, Law and AI

• In a broader sense, Data Governance means governing data as a resource through policies, strategies, rights, responsibilities and institutions.
• Data Protection Laws are a key component of this
• The Digital Personal Data Protection Act in India and the GDPR in Europe are examples
• There are data governance principles that apply depending upon the context, use, type and value of data
Next Session
• AI in Law and Justice in India, focusing primarily on initiatives of Governments, including those of the Courts

Thank You
Artificial Intelligence, Law and Justice

Session 4

AI, Judicial System in India -Part-I

Dr. Krishna Ravi Srinivas


Adjunct Professor of Law &
Director, Centre of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap – Data and AI

• In the last session we discussed the importance of data for AI, and datafication
• Highlighted various issues in using data for algorithms and in AI
• Discussed biases in data, how they affect AI and algorithm-based applications, and data invisibility
• Finally we touched upon data governance and its meaning and application in two different contexts
AI in Indian Judicial System

• The use of AI in Indian Judicial System is in the initial


stages
• But we should not conflate this with use of AI in legal
system in India
• Use of AI in Indian Judicial System is part of
modernization and Digitization of Justice System
• AI has a major role in this but there is more to it than
adoption of AI
Examples of Adoption of AI

https://static.pib.gov.in/WriteReadData/specificdocs/documents/2025/feb/doc2025225508901.pdf
At the Supreme Court

• The Supreme Court uses AI and ML tools for transcribing oral arguments, particularly in Constitution Bench cases.
• AI is being used to translate judgments into 16 languages.
• So far 3114 translations have been done
• About 200 lawyers have been given access to prototypes of AI/ML tools developed by IIT-Madras
• The Registry, in association with IIT-M, has developed and deployed AI and ML based tools, and these are to be integrated with the electronic filing software of the Registry to help in the identification of defects
At the Supreme Court
• The AI based tool, Supreme Court Portal for Assistance in Court Efficiency (SUPACE), has been developed. The aim is to develop a module to understand the factual matrix of cases, combined with an intelligent search of precedents as well as identification of cases.
• This is at an experimental stage and under testing. Fuller implementation will follow once the requisite hardware and computing capacity are in place.
• Different tools using AI and ML are being developed and tested in association with IIT-M
• These may be integrated with the Integrated Case Management & Information System (ICMIS)
Decision Making and AI
• In a reply given to a series of questions, the Hon'ble Minister of State stated:
• "As per the information provided by the Supreme Court of India, no AI and ML based tools are being used by the Supreme Court of India in the decision-making processes, as of now."
https://sansad.in/getFile/annex/267/AU2356_MLQkO0.pdf?source=pqars (20 March 2025)
• This is understandable as Judges have been cautioning
against such uses
• The other factor is that the deployment of AI is not yet at a level where it can play a major role in decision making or in AI-written judgments
Cautions and Concerns

• While the efficient use of AI has been acknowledged, Judges of the Supreme Court have cautioned and expressed concerns about replacing the role of Judges with AI, for example former CJI Chandrachud and incoming CJI Gavai
• This is also a fundamental concern about the use of AI for all judicial purposes
• So where and how do we draw the line?
Hallucinations and Fake Citations
• AI systems are known to 'hallucinate' and come up with fake/non-existent data, including cases and judgments.
• The Karnataka High Court has ordered a probe against a trial court judge for passing orders relying on non-existent apex court judgments.
• Justice Pratibha Singh refused to accept ChatGPT-generated responses in a case and cautioned against such use and their reliability
https://www.scconline.com/blog/post/2023/08/28/delhi-hc-artificial-intelligence-cannot-substitute-human-intelligence-in-adjudicatory-process/
• An order of a Tax Tribunal was withdrawn as it appeared to cite non-existent judgments
https://www.livemint.com/money/personal-finance/chatgpt-artificial-intelligence-ai-itat-bengaluru-bench-tax-buckeye-trust-case-errors-11740544685677.html
Hallucinations and Fake Citations

• These raise serious questions about the use of AI based tools for writing orders, and for other purposes where accuracy and trustworthiness are important
• Should a lawyer or firm disclose if (s)he/it has used an AI tool, and if so, what should be the level of disclosure?
• Who is liable in case of hallucinations and fake citations?
• Using ChatGPT to understand vs. using it as an authoritative source
AI, E-courts and Digitization
• The use of AI is not a stand-alone initiative; it is part of a larger project to modernize and digitize.
• The Government of India has allocated a total of ₹7210 Crore for the e-Courts Phase III project. Of this, ₹53.57 Crore is allotted for the use and integration of AI and Blockchain technologies in High Courts in India.
• A key objective is to use AI for translation of judgments from English
• High Courts have translated about 5000 judgments from English
AI applications in E-Courts

• While the scale and range of applications vary, the key applications are:
• Automated Case Management
• AI in Legal Research and Documentation
• AI Assisted Filing and Use in Court Procedures
• User Assistance and Chatbots
• As data regarding the number of applications, their use for various purposes and rates of adoption across courts is not known, it is difficult to assess their adoption and utility
AI and Judicial System in India

• The current level and scope of application is interesting but preliminary.
• AI use is part of the larger project and there is no 'AI Mission'
• Prima facie it seems that the Supreme Court and High Courts are the major users
• Is AI the tool that can make a huge difference in modernizing the Judicial System in India?
• In the next session we will dwell further on AI and the Judicial System in India
Thank You
Artificial Intelligence, Law and Justice

Session 5

AI,Judicial System in India – Part-II

Dr. Krishna Ravi Srinivas


Adjunct Professor of Law &
Director, Centre of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap – AI in Judicial System
• In the last session we discussed the use of AI in India's Judicial System, particularly in the Supreme Court
• Highlighted various concerns in using AI tools for writing judgments or passing orders, including hallucination
• Learnt about various applications based on AI in e-Courts
• Contextualized AI in the Judicial System as part of modernization and digitization
Limited Use and Rationale

• The limited use may be surprising given India’s capability in


AI as well as in law and justice
• Most of the uses and applications are aimed at making the system more efficient and responsive rather than directly helping Judges in decision making or writing judgments
• Use of AI in Indian Judicial System is limited to select
aspects and major themes in Civil and Criminal Law are not
covered
• In particular there is no application that is relatable to
matters in criminal law such as Bail, and Parole
Limited Use and Rationale

• Although it is technically feasible, Supreme Court is not


envisaging use of AI or ML-based prediction systems regarding
court proceedings
• This cautious, and perhaps what some would consider conservative, approach is rooted in an apparent consensus on using AI tools with a nuanced understanding of the risks
• For example, CJI Gavai has stated: "Relying on AI for legal research comes with significant risks, as there have been instances where platforms like ChatGPT have generated fake case citations and fabricated legal facts." https://www.medianama.com/2025/03/223-justice-gavai-flags-ai-risks-when-chatgpt-gets-legal-facts-wrong/
Limited Use and Rationale

• The credibility and acceptance of the Judicial System is based on the accuracy and veracity of what is stated in judgments and orders.
• Unreliable AI tools, when used, can result in judgments and orders that may read as prima facie correct but are riddled with non-existent cases and facts.
• Further, as AI tools are black boxes, the questions of accountability, responsibility and liability cannot be wished away
• The problem will be all the more acute when such tools are used in the Criminal Justice System
Innovation, Risk and Accountability

• As we will see in the subsequent sessions, not all uses of AI systems deployed in judicial or justice-related matters have been successful or non-controversial
• While there is great scope to innovate and deploy, many issues plague the adoption of AI in critical and sensitive sectors like the Judiciary
• The Judicial System is an autonomous one and operates on the basis of core principles of the Rule of Law
• But careless wider adoption of AI systems could result in risks and a crisis of accountability for the Judicial System, which would impact not just the Judicial System and the Judiciary but also the Rule of Law
Innovation, Risk and Accountability
• Hence the approach of beginning with limited use of AI systems, but in critical areas, to improve efficiency and access makes sense
• As India is yet to develop and apply a governance framework for AI, it is better to wait, watch and then respond than to rush in where angels fear to tread
• Similarly, the Data Governance Framework is not yet fully in place
• Hence, while the current pace of deploying AI is steady and cautious, it has more merits than demerits
Capacity Building and Other Factors

• While there has been good progress in the digitization of the Judicial System, it has been uneven
• Using AI has helped to a great extent in addressing issues in translating from English to other languages, but most of the judgments written and pronounced in other languages remain untranslated or not digitized
• Judges and judicial officers need to be made aware of and trained in using AI in the Judicial System
• Many of the stakeholders of the law and justice system are yet to be familiarized with AI and its deployment
Capacity Building and Other Factors

• There are initiatives like 24-hour online courts, but they do not seem to be fully AI based
• https://www.newindianexpress.com/states/kerala/2024/Nov/21/first-in-india-24-hour-online-court-opens-in-keralas-kollam
• Digitization and full integration of AI tools in them will take time, as there are technical and other issues
• Use of AI in other applications like online mediation and arbitration is uneven
Alternative Perspectives

• Critics have pointed out some fundamental issues with the use and deployment of AI in the Judicial System
• For example, Siddharth Peter de Souza, an academic at the University of Warwick, has called for a rights-based approach and termed the current approach a techno-managerial one
• https://www.thehinducentre.com/publications/policy-watch/ai-and-the-indian-judiciary-the-need-for-a-rights-based-approach/article68885319.ece
• He calls for looking beyond the 'productivity matrix' and understanding the impacts of AI on 'people's worlds and lives.'
Alternative Perspectives

• Similarly, Urvashi Aneja and Dona Mathew argue in their paper: "We also need upstream interventions to foster a culture of responsible innovation. These may be anchored around five values – purposeful openness, community-led design, capacity strengthening, responsible investment, and iterative accountability"

• https://assets-global.website-files.com/60b22d40d184991372d8134d/646315ae7153859ff45652c0_DFL%20FINAL%20web.pdf
Alternative Perspectives

• The Vidhi Centre in its paper outlined a strategy with medium-term and long-term steps to be taken in integrating AI into the Judicial System.
• It considered the potential benefits and risks and cautioned against the risks posed by unmindful application of AI that could alter the role of the Judiciary
• https://vidhilegalpolicy.in/research/responsible-ai-for-the-indian-justice-system-a-strategy-paper/
AI and Judicial System
• Our discussion on the use of AI in the Judicial System indicates that AI is here to stay and will be part and parcel of the Judicial System
• India has taken a steady but cautious approach in deploying AI
• There are merits as well as demerits in this approach, but it is a reasonable one given the risks of using AI and the unresolved issues in deploying it
• What will Responsible AI mean in the context of the Judicial System?
• Is AI a panacea for all problems with the legal and Judicial System?
Is AI a Panacea ?

• While applying AI is an alluring solution, it cannot be a panacea,
• because there are structural constraints and issues related to values
• Sweeping solutions using AI may be technically feasible and attractive
• But in the long run they will do more harm than good
• Multiple solutions are needed to address issues ranging from the ever-increasing number of cases to inadequate infrastructure
• AI is and can be part of them, but it is not a magic wand that can solve all problems
Next Session

• In the next session we will discuss the use of AI in legal sector in India
• Highlight the range and scope of AI applications
• Discuss on how AI based innovations are impacting the legal sector
• Contextualize that in light of global developments
Artificial Intelligence, Law and Justice

Session 6

Use of AI in Law in India– Part-I

Dr. Krishna Ravi Srinivas


Adjunct Professor of Law &
Director, Centre of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap – AI in Judicial System

• In the last session we highlighted why, despite the potential for using AI, there is a cautious approach
• Discussed unresolved issues in using AI in the Judicial System, particularly for important tasks like writing judgments or passing orders
• Also touched upon three critical analyses on the use of AI in the Justice System
• Finally we flagged other issues, including capacity building, and cautioned against considering AI a panacea
Applications of AI
• AI can be used in many ways in the practice of law
• But there is no point in using AI for the sake of using AI
• We have already discussed AI's tendency to hallucinate and give fake citations
• Yet the advantages of using AI are many
Applications and A Disclaimer
• But like any sophisticated technology, adopting AI also entails a steep learning curve and expertise to make the best use of it
• The examples given in this course are for illustrative and academic purposes only
• They do not indicate any recommendation or endorsement
• Nor does the mention of examples mean that they have been assessed and validated by NALSAR or by me
Typical Applications of AI

• AI in Legal Research
• Conducting Due Diligence
• Contract Analysis and Management
• Using Predictive Analysis for litigation purposes
• E-Discovery
Typical Applications

• Automated Research: This is a time- and labor-saving use. Legal research is more complex than it is assumed to be.
• AI-powered tools can analyze enormous quantities of legal data, classify it, identify relevant cases and summarize them. This helps in getting leads and in sharper understanding
• Due Diligence: This again is a critical task, but it demands mental attention and a deeper understanding of law and business needs. Contracts are often complex, running to many pages with multiple parties involved
• AI tools can examine and point out potential issues and risks, raise queries and provide recommendations. What a team can do in many hours, an AI tool can do in much less time and with the desired outputs
Typical Applications of AI

• Entity Recognition and Extraction: AI-powered tools can handle data, including texts, and identify and extract relevant and exact entities such as names, dates, and relationships from large documents which are often too long to read and comprehend.
• AI tools can map the status of entities, chains of transactions, and the corresponding linkages and relationships among entities
• While humans do this, there is scope for errors, misunderstanding and wrong assumptions
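As a minimal sketch (assuming the spaCy library and its small English model are installed; the text is hypothetical), named-entity extraction of this kind can be run in a few lines:

# Hypothetical illustration of entity extraction with spaCy (assumed installed).
import spacy

nlp = spacy.load("en_core_web_sm")   # small English model, assumed downloaded

text = ("On 12 March 2021, Acme Infra Ltd entered into a lease agreement "
        "with Bharat Logistics Pvt Ltd for a warehouse in Chennai.")

doc = nlp(text)
for ent in doc.ents:
    # e.g. DATE: 12 March 2021, ORG: Acme Infra Ltd, GPE: Chennai
    print(ent.label_, ":", ent.text)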
Contracts, Agreements and
Management
• Commercial contracts are complex; they often have long-term commitments, clauses that can result in penalties if a task is not done or a commitment is not met, and consortium arrangements in the case of big projects.
• AI tools can deal with them by reviewing them, flagging ambiguities, potential risks and liabilities, and mapping the terms against obligations, incomes and the distribution of benefits and resources
• Usually in big law firms there are teams to do this and to manage contracts from the initial stages to the final agreement
Predictive Analysis

• Humans do predict, out of interest and curiosity, based on their knowledge and analysis
• But AI can do this in a much more efficient way.
• Predictive analysis involves analysing previous cases, locating and identifying relevant patterns, mapping the key points/arguments and cases, and predicting the possible outcome
• AI tools can do a risk analysis of a litigation to identify its strengths and weaknesses
• In these uses AI works with what it has learnt. But can AI replicate human thinking and analytical capacity 100%?
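As a minimal sketch (the features, data and library choice are hypothetical assumptions, not an actual litigation-prediction product), predictive analysis of case outcomes can be framed as a classification problem, for example with scikit-learn:

# Hypothetical illustration: predicting case outcomes from past cases (toy data).
from sklearn.linear_model import LogisticRegression

# Each past case: [favourable precedents, claim amount in lakhs, delay in years]
past_cases = [
    [5, 10, 1], [1, 80, 6], [4, 25, 2], [0, 60, 7], [6, 15, 1], [2, 90, 5],
]
outcomes = [1, 0, 1, 0, 1, 0]   # 1 = claimant won, 0 = claimant lost

model = LogisticRegression()
model.fit(past_cases, outcomes)          # learn patterns from previous cases

new_case = [[3, 40, 3]]
print(model.predict(new_case))           # predicted outcome (0 or 1)
print(model.predict_proba(new_case))     # estimated probabilities for each outcome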
AI in Real World
• The Covid pandemic was a transformation point, as it made law firms and others realize that digitization is here to stay and that the legal system was on the cusp of transformation.
• It spurred many lawyers and firms to move towards understanding and using digital technologies, as courts became virtual courts and digitization transformed court practices.
• The advent of ChatGPT and the rapid growth of AI applications in different sectors accelerated the adoption of AI
• Many large legal firms had already digitized documents and were using digital technologies, while digital databases and the availability of huge literature, including judgments, made a big difference
AI in Real World
• Cyril Amarchand Mangaldas (CAM), a major law firm, is using AI extensively.
• "As part of its AI-first strategy, the firm has adopted Harvey on a pilot basis and Lucio to enhance legal capabilities, alongside Co-pilot and ChatGPT+ to optimise all its business operations"
• https://law.asia/ai-adoption-indian-law-firms/
• It has also launched Prarambh, a legal tech incubator.
AI in Real World

• There are different AI tools that are available for lawyers


and citizens
• "A prime example is Lawyer Desk, a legaltech startup, which started with a platform for advocates and practising lawyers but later launched a platform, Prajalok, to provide citizens with legal information, guidance, and resources, including case tracking"
https://inc42.com/features/how-are-legaltech-startups-making-their-case-in-india/
AI in Real World

• There is an AI-powered legal chatbot, NyayGuru, which is available for free
• This is available from https://nyayguru.com/
Legal Tech Innovation and AI

• While large firms and others are involved in developing AI innovations, the elephant in the room is the legal tech start-ups.
• But can they really make a difference and spur many innovative products and services based on AI?
• If the scope for AI in law is enormous, why are there not many innovative products and services in this sector?
• Will the advent of AI-based legal tech transform the legal profession and law as a career in the decades to come?
Next Session

• Going forward, we will discuss more applications based on AI in India
• A glance at legal tech start-ups and their role
• Challenges in deploying AI in the legal field
• We will also contextualize this in light of global developments in deploying AI in law and practice
Artificial Intelligence, Law and Justice

Session 7

Use of AI in Law in India– Part-II

Dr. Krishna Ravi Srinivas


Adjunct Professor of Law &
Director, Centre of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap – Use of AI in Law in India

• In the last session we learnt about important applications


of AI in law
• How AI can make a real difference in practice of Law and
why there is a move towards greater utilization of AI in
legal sector
• Also touched upon how a large legal firm is using AI
besides giving some examples
• Finally we flagged other issues like legal tech innovation
and whether AI can transform legal profession and law as
a career
AI and non traditional applications
• AI can be used for online mediation and arbitration
• Such uses combine traditional benefits with advantages
brought by AI
• This in turn can give a boost to mediation and arbitration
• There are innovations that are happening in this
• For example WebNyay https://www.webnyay.in/
• Another potential area is consumer law and consumer
dispute resolution
• “Key developments include the introduction of the AI-
enabled National Consumer Helpline, the e-Maap Portal,
and the Jago Grahak Jago mobile application, all designed to
expedite the resolution of consumer complaints and
empower citizens to make informed choices.”
• https://dig.watch/updates/india-launches-ai-driven-
consumer-protection-initiatives
AI, Innovation and fragmentation

• There is a proliferation of start-ups developing AI based innovations
• But there is no big legal tech firm focusing on killer apps or large-scale innovation
• In legal research there are new tools, mostly from traditional providers of databases and literature
• Thus innovation is scattered across applications and firms, with no firm offering a comprehensive suite of services and products
AI, Innovation and fragmentation
• Big law firms develop proprietary AI tools, databases and other innovations for internal use. It is also true that many firms use tools developed elsewhere, such as ROSS, an AI-powered legal research service, but India-based alternatives are in the offing
• https://www.thehindubusinessline.com/news/lexlegisai-launches-indias-first-ai-driven-legal-platform-with-large-language-model/article68555508.ece
• Interestingly, developing legal innovations using AI for broader public use or for lawyers is not part of the e-Courts modernization or any other public sector project
• https://economictimes.indiatimes.com/news/india/ai-can-assist-but-cannot-replace-human-judgment-says-former-cji-chandrachud/articleshow/118607172.cms?from=mdr
• There is a divide in terms of innovation focus, accessibility and use
Legal Tech Start Ups
• While no precise data is available, a study done in 2022 highlighted the potential and scope
• With 650+ startups, India ranks 2nd in terms of the number of legal tech startups in the world. The USA ranks 1st with over 2500 startups.
• Legal tech in India mainly encapsulates four product categories - Legal Service Delivery, Process Efficiency, Access to Legal Recourse and Do-it-Yourself (DIY) tools - that serve three customer segments: citizens, legal service providers and the judiciary.
• While Artificial Intelligence is believed to hold promise for many legal tech models, the vernacular nature of documentation currently poses a challenge.
Legal Tech Start Ups
• "The opening up of a large domestic market to tech interventions, increasing investments and acquisitions of startups, and validation of new technology-led models like Online Dispute Resolution (ODR) are making the sector buoyant.
• The next wave of legal tech startup growth could potentially come from ODR, Succession Management, Litigation Finance, Court Management, Due Diligence Management and Legal Transcription and Translation"
• Beyond the Bench: Promise of Indian Legal Tech Startups, CIIE, IIM-A, 2022
Legal Tech Start Ups
• While the number has grown, investments are on a limited scale and most legal tech firms are in the early stages of innovation.
• There are news stories about legal tech firms attracting investments, but whether there will be a legal tech unicorn is a big question
• On the other hand, big players like IBM are also in the legal tech domain, although as part of their global operations
• Thus legal tech in India is yet to attract massive investments from venture capital firms or similar investors
https://inc42.com/features/how-are-legaltech-startups-making-their-case-in-india/
Will there be an AI lawyer (and an AI Judge)?
• Some roles are likely to become redundant or be realigned
• Some skills will be more valuable, with new topics such as legal analytics, and lawyers with skills in AI tools development becoming more valuable
• Hybrid roles with enhanced skills and responsibilities – a lawyer who can work or collaborate with AI tools or manage a team using AI tools, rather than just practising as a lawyer
Will there be an AI lawyer (and an AI Judge)?
• AI is slowly and steadily transforming the legal profession
• This is happening in many ways
• For example, the use of AI tools can automate routine tasks, with savings in time and energy boosting efficiency
• Legal tech innovations need interdisciplinary collaborations
• Legal research with AI can augment thinking and analytical capabilities
• AI based tools can streamline and manage tasks in large law firms, thereby bringing changes in workflows, roles and organizational structures
Ethics, Responsibility and
Accountability
• With the advent of AI and its large-scale adoption, ethical norms, professional responsibility and accountability will have to be rethought and revised
• Policies on the ethical development and deployment of AI products and services have to be developed and applied
• Emerging expectations from society and stakeholders on the legal profession will have to be taken note of and addressed
• Similarly, new concerns on data governance, privacy, and risks and liability from AI will arise and have to be dealt with
AI agents and Law

• AI agents are the next level of innovation in legal services, but AI agents are emerging in other sectors as well
• "AI agents in legal services are intelligent, autonomous systems that perform legal tasks with minimal human intervention. These agents can understand legal language, analyse documents, track regulatory changes, conduct legal research, and provide strategic recommendations – all based on predefined goals and continuous learning"
• https://www.xenonstack.com/blog/agentic-ai-legal-firm
AI agents and Law
AI agents in Legal Services
• Although there does not seem to be any development of AI agents in legal services in India, their development cannot be ruled out, as AI agents are being developed and deployed in other sectors
• But AI agents raise many concerns about ethics, responsibility and accountability, particularly when they operate as autonomous entities
• Do AI agents foretell the next wave of autonomy or liability? https://www.thehindu.com/sci-tech/technology/do-ai-agents-foretell-the-next-wave-of-autonomy-or-liability/article68598314.ece
Summing Up

• Till now we have explored how AI is altering the legal landscape in India and the many issues that are likely to emerge
• We flagged challenges like potential displacement, demand for new skills, the fragmented nature of innovations and the absence of any major initiative for AI innovations as a public good
• We ended with a discussion on AI agents in the legal sector and why they can make a huge difference
Next Session

• In the next session we will discuss more applications


based on AI in India
• Provide an overview of the legal tech start ups and their
role
• Discuss the issues and challenges in deploying AI in legal
field
• Contextualize that in light of global developments in
deploying AI in law and practice
Artificial Intelligence, Law and Justice

Session 8

Algorithms, AI and Decision Making

Dr. Krishna Ravi Srinivas


Adjunct Professor of Law &
Director, Centre of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
⚫In the last session we discussed how AI is being
applied in legal sector in India and the role of legal
tech in furthering innovation
⚫We highlighted the various unaddressed issues in
their use and how large scale adoption can impact
legal services and sector
⚫Further we closed with a discussion on AI agents in
legal services, the risks and what roles they can play.
Definitions
⚫Algorithm: An algorithm is a series of instructions
for performing calculations or tasks. In AI, it helps a
computer learn and perform tasks
⚫Algorithmic Decision Making (ADM): ADM refers
to using outputs produced by algorithms to make
decisions
⚫Historical Context: Algorithms have a long history
and have been applied differently over time. The
innovation in information theory and technology in
the 20th century redefined basic ideas.
History
⚫“Algorithms have been around since the beginning
of time and existed well before a special word had
been coined to describe them. Algorithms are simply
a set of step by step instructions, to be carried out
quite mechanically, so as to achieve some desired
result […]. The Babylonians used them for deciding
points of law, Latin teachers used them to get the
grammar right, and they have been used in all
cultures for predicting the future, for deciding
medical treatment, or for preparing food. Everybody
today uses algorithms of one sort or another, often
unconsciously, when following a recipe, using a
knitting pattern or operating household gadgets”
⚫Matteo Pasquinelli From Algorism to Algorithm
Algorithms
⚫Algorithms take a set of inputs, such as age,
residence, marital status, or income, and process
them through a series of steps to produce outputs or
decisions.
⚫These algorithms are used in various sectors,
including healthcare, public benefits, infrastructure
planning, and budget allocation
AI Algorithms
1. Learning from Training Data: AI algorithms are designed to
learn from training data, which can be either labelled or
unlabelled. This information is used to enhance the
algorithm's capabilities and perform tasks
2. Continuous Learning: Some AI algorithms can continuously
learn and refine their process by incorporating new data,
while others need a programmer's intervention to optimize
their performance .
3. Task Execution: AI algorithms use the information from
training data to carry out their tasks effectively
Data and Algorithms
⚫Data Quality Issues: data used to train AI systems can
be inaccurate, incomplete, or biased, which can lead to
significant consequences if used for critical tasks like
analyzing skin images or prioritizing patient care
⚫Bias in Data: data might be infused with bias, often
stemming from erratic and biased realities, such as
clinical trials excluding women and people of color
⚫Consequences of Biased Data: using biased data can
result in flawed decisions by AI algorithms, which can
have severe consequences in critical applications
⚫Importance of Representative Data: It is important to
ensure that AI algorithms are trained using representative
data to avoid biases and ensure equitable outcomes for all
⚫Mutual Reinforcement of Issues: problems in AI
systems can arise from data, algorithms, or a combination
of both, mutually reinforcing each other
Data and Representation
⚫Digital Divides: digital divides in many Global South
countries have led to "data invisibility," impacting historically
marginalized groups such as women, castes, tribal
communities, religious and linguistic minorities, and migrant
labor .
⚫Biases in AI Algorithms: there are potential biases in AI
algorithms due to these invisible data, emphasizing the need for
algorithmic transparency and accountability .
⚫Algorithmic Transparency Audits: did the AI system
undergo transparency audits, and how can such systems be
made less biased and more useful?
⚫Socio-Technical Issue: algorithmic transparency is not just a
technical issue but an examination of socio-technical systems
that can significantly impact society .
⚫Impact on Marginalized Groups: It is important to address
data invisibility to ensure that AI algorithms do not perpetuate
biases against marginalized groups
Direct and Proxy Data
⚫Proxy data is often used when direct data is unavailable
or insufficient. However, using proxy data requires
caution as it can introduce unintended biases .
⚫Examples of proxy data include using location as a
proxy for income level or status. This can lead to biased
decision-making even if the bias is not directly evident .
⚫AI systems may make predictions based on proxy data
that resemble restricted categories of data, such as race,
even if race is not explicitly included as a parameter .
⚫It is crucial to ensure that proxy data is used exclusively
for legitimate purposes to avoid unintended biases and
ensure fairness
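A minimal sketch (all figures are invented purely for illustration) of how a proxy variable such as a pincode can stand in for a restricted attribute and reintroduce bias even when that attribute is excluded:

# Hypothetical illustration: a pincode used as a proxy can encode a restricted attribute.
# All figures below are invented for illustration only.
applicants = [
    # (pincode, belongs_to_marginalized_group, approved_in_past_data)
    ("600001", True,  False),
    ("600001", True,  False),
    ("600002", False, True),
    ("600002", False, True),
    ("600001", True,  False),
    ("600002", False, True),
]

# A model that never sees group membership but uses pincode will still
# reproduce the historical pattern: outcomes track pincode, which tracks the group.
rates_by_pincode = {}
for pincode, _, approved in applicants:
    rates_by_pincode.setdefault(pincode, []).append(approved)

for pincode, outcomes in rates_by_pincode.items():
    print(pincode, sum(outcomes) / len(outcomes))   # 600001 -> 0.0, 600002 -> 1.0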
EVAAS – Contesting an Algorithm
⚫Between 2011 and 2015, Houston teachers' work performance
was evaluated using a data-driven algorithm called EVAAS . The
program enabled the board of education to automate decisions
regarding bonuses, penalties, and terminations based on the
algorithm's evaluations .
⚫The source code for EVAAS is a trade secret owned by SAS, a
third-party vendor, preventing teachers from contesting the
decisions or understanding how the algorithm reached its
conclusions. In 2017, a US federal judge ruled that the use of the
secret algorithm violated teachers' constitutional rights, requiring
transparency in the evaluation process.
⚫SAS declined to reveal the internal workings of the EVAAS
algorithm, leading the Houston school system to stop using it .
⚫The court decision emphasized the need for teachers and the
Houston Federation of Teachers to independently check and
contest the evaluation results produced by the algorithm
https://files.eric.ed.gov/fulltext/EJ1234497.pdf The Education Value-
Added Assessment System (EVAAS) on Trial: A Precedent-Setting
Lawsuit with Implications for Policy and Practice
Explainable AI and Algorithms
⚫The US NIST has issued guidance on AI explainability,
which might be part of impact assessment systems .
⚫The NIST draft guidelines suggest four principles for
explainability for audience-sensitive, purpose-driven,
automated decision-making systems (ADSs) assessment tools .
⚫These principles include providing accompanying evidence or
reasons for all outputs, offering understandable explanations,
reflecting the system's process for generating the output, and
operating under specific conditions .
⚫These principles shape the types of explanations needed to
ensure confidence in algorithmic decision-making systems,
such as explanations for user benefit, social acceptance,
regulatory and compliance purposes, system development, and
owner benefit .
⚫The source of this information is the NIST document "Four
Principles of Explainable Artificial Intelligence,"
Rights
⚫Contesting Algorithm Logic: a defendant faces challenges in
contesting the logic of an algorithm when they do not have
access to the source code, training data, or required datasets.
⚫Information for Defendants: what information should be
provided to the defendant to contest the logic of an algorithm?
⚫Inputs and Outputs: is it sufficient for defendants to have
access simply to the inputs and outputs generated by the
algorithm?
⚫Margin of Error: should the defendant receive information
on the margin of error of the algorithm(s) used?
Rights
⚫How can courts enforce due process of law if the
algorithm deploys machine learning and no one, not
even the developer, understands the ML "analysis"
completely?
⚫How will courts assess the accuracy of
algorithms, particularly when they forecast future
human behavior?
⚫"What legal and social responsibilities should we
give to algorithms shielded behind statistically data-
derived 'impartiality'?
⚫Who is liable when AI gets it wrong?"
Civil Justice, Algorithm
⚫AI has been deployed in various areas of the civil justice
system, including family, housing, debt, employment,
and consumer litigation.
⚫Civil courts are increasingly collecting data about
administration, pleadings, litigant behavior, and
decisions, offering opportunities for automating certain
judicial functions.
⚫AI is used to pre-draft judgment templates for judges,
make predictions or sentencing recommendations for
bail, sentencing, and financial calculations.
⚫AI can assess the outcome of cases based on the past
activities of prosecutors and judges, providing
information to judges that factors in a wide amount of
case law.
⚫AI tools can significantly reduce research time in the
preparation of decisions.
Predictions
1. An AI algorithm developed by researchers from
Université Catholique of Leuven, the University of
Sheffield, and the University of Pennsylvania is used to
predict judicial rulings of the European Court of Human
Rights.
2. The algorithm, led by Dr. Nikolaos Aletras, has an
accuracy rate of 79.70% in predicting these rulings.
3. AI is not intended to replace judges or lawyers but to
assist in identifying patterns in case outcomes.
4. It helps highlight cases that are most likely to be
violations of the European Convention on Human Rights
Algorithm as Authority
⚫When algorithms are embedded in decision
making they become de facto 'authorities' as
part of AI black boxes
⚫But as we will discuss in subsequent sessions use
of algorithms can impact human rights, rule of law
and adversely affect access to services
⚫Integrating them into AI systems raises questions
about ethics, accountability and responsibility.
⚫AI tools using algorithms are of concern when they
are used in law and justice
⚫Some of these will be elaborated in the subsequent
sessions.
THANK YOU!
Artificial Intelligence, Law and Justice

Session 9

AI and The Rule of Law - Part-I

Dr. Krishna Ravi Srinivas


Adjunct Professor of Law &
Director, Centre of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
• Key points were discussed in the previous
class about Algorithms and Algorithmic
Decision Making (ADM).
• We also discussed a case study where
algorithm based decision was challenged
• We highlighted issues in ADM and its use
and why some uses are controversial
Overview of AI and Rule of Law
• AI and ML technologies are used in various phases
of legal processes, such as pre-adjudicative and
adjudication phases .
• There are concerns about the displacement of
human judgment by AI and the implications for
fairness, transparency, and equity
• The compatibility of ML technologies with the rule
of law is questionable, particularly regarding
transparency, predictability, bias, and procedural
fairness
• The socio-political context of technology adoption
can exacerbate disparities in power and resources
Overview of AI and Rule of Law
● Pre-Adjudicative Phase
○ ML and AI used to select targets for tax and
regulatory investigations
● Adjudication Phase
○ ML and AI guide determinations of individual
violence risk during pretrial bail
● Human Judgment vs. Code-Driven Counterparts
○ Predictions of displacement of human judgment by
AI
○ Resistance due to concerns about fairness,
transparency, and equity
● Normative Concerns
○ Criticisms often overlap with rule of law concerns
Impact of AI on Rule of Law
● Dynamic Nature of Rule of Law
○ Adapts to changing societal needs and values
○ Influenced by technological advancements
● Technological Influence
○ AI, machine learning, blockchain, and quantum
computing
○ Devices linked to the internet collecting vast data
● Opportunities from AI
○ Enhances consistency, transparency, and
compliance in tax administration
○ Improves decision-making and service quality
○ Promotes transparency and civic engagement
● Challenges from AI
○ 'Black box' nature of autonomous systems
Machine Learning and Rule of Law
● ML is used in law enforcement and adjudication, with different considerations for each context.
● ML algorithms improve performance through training experience and derive rules from data.
● They produce results with less bias and lower variance than traditional regression tools.
● The technological shift has implications for rule-of-law values and interactions with social and economic arrangements.
Common Features of ML Tools
● ML tools can interact with different forms of
rule of law, including formal, substantive, and
procedural forms in various ways,
depending on their implementation
● ML tools rely on training data to gauge
variable relationships and develop models
for predictive or descriptive applications
● Supervised ML sorts data into predefined categories, while unsupervised ML develops classifications based on the inherent structure of the data (a short illustrative sketch follows this slide)
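To make the supervised/unsupervised distinction concrete, here is a minimal, purely illustrative Python sketch. It uses the scikit-learn library and invented toy numbers (not drawn from any real legal dataset): the supervised model learns from pre-labelled examples, while the unsupervised model groups the same examples without any labels.

```python
# Toy illustration only: invented two-feature "cases", not real data.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X_train = [[1, 0], [2, 1], [8, 7], [9, 8]]   # feature vectors for four cases
y_train = [0, 0, 1, 1]                       # predefined labels (e.g., 0 = low risk, 1 = high risk)

# Supervised ML: the model is trained on labelled data and sorts new cases
# into the predefined categories.
classifier = LogisticRegression().fit(X_train, y_train)
print("Supervised prediction for a new case:", classifier.predict([[7, 6]]))

# Unsupervised ML: no labels are given; the algorithm infers groupings
# from the inherent structure of the data.
clusterer = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_train)
print("Unsupervised cluster assignments:", clusterer.labels_)
```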
ML and Formal Rule of Law
● Interaction with Formal Rule of Law
○ ML tools can be beneficial or harmful depending on implementation
● Fuller's Definition of Rule of Law
○ Requires a published code for future disputes
○ Code must be understandable by ordinary citizens or trained lawyers
● Legislative Obscurantism
○ Fuller allows some obscurity if a trained professional can understand it
○ Question of whether computer scientists can aid in understanding
● Objections to ML Systems
○ ML systems can be obscure to those affected by their classifications
○ Reward functions of predictive tools are not readily available for examination
● Challenges in Understanding ML Tools
ML in Private Sector
● Common Examples of ML Tools
○ Recommendation systems used by Netflix and Amazon
○ Google's PageRank algorithm
○ Facebook feed for social media users
● Categories of Real-World Problems Solved by ML
○ Identification of clusters or associations within a population
○ Identification of outliers within a population
○ Development of association rules
○ Prediction problems of classification and regression
State Adoption of ML
● Governments use ML tools for investigation, targeting, and as substitutes for human judges.
● ML tools are also used in defense technologies, diagnostic tools, and work facilitation.
● ML tools are used by various government bodies to analyze data and identify legal violations.
● Governments use ML for allocating resources and predicting crime locations.
● ML tools are used to predict pretrial violence and guide bail determinations.
COMPAS Algorithm
● COMPAS Algorithm Usage
○ Used in many American jurisdictions
○ Generates a risk score from one to ten for defendants
○ Guides judges' bail determinations
● Controversies and Criticisms
○ Accusations of racial bias
○ A higher proportion of factually innocent Black defendants detained than factually innocent white defendants (a simplified numerical illustration follows this slide)
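A purely hypothetical numerical illustration of the kind of disparity alleged in the COMPAS controversy (the figures below are invented; they are not the actual COMPAS or ProPublica numbers): among people who did not in fact reoffend, one group can face a much higher false positive rate, i.e., a much higher chance of being wrongly flagged as high risk, even if the tool looks "accurate" overall.

```python
# Invented counts for illustration only (not real COMPAS data).
# "False positive" here = did not reoffend but was scored as high risk.
groups = {
    "Group A": {"flagged_high_risk": 300, "not_flagged": 400},
    "Group B": {"flagged_high_risk": 150, "not_flagged": 550},
}

for name, counts in groups.items():
    non_reoffenders = counts["flagged_high_risk"] + counts["not_flagged"]
    false_positive_rate = counts["flagged_high_risk"] / non_reoffenders
    print(f"{name}: false positive rate among non-reoffenders = {false_positive_rate:.0%}")

# Output: Group A about 43%, Group B about 21% -- the same tool places a very
# different error burden on the two groups of factually innocent people.
```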
MiDAS System
● Introduction of the MiDAS System
○ Introduced by Michigan governor Rick Snyder in 2013
○ Aimed to detect fraudulent unemployment benefit applications
● High Denial and False Positive Rates
○ 93% denial rate
○ Falsely accused 40,000 residents of fraud
● Lack of Mechanism for Challenging Denials
○ No way for individuals to contest denied benefits
● Inadequate Staffing and Support
● Limited User Interface
● Binding Predictions
● Rule-of-Law Concerns
Future of State Adoption
● Dynamic Contexts of ML Adoption
○ Domestic political environment with firms expanding
ML capacities
○ International competition among nations for
geostrategic ends
● Strategic Choices by National Governments
○ Deployment of ML instruments based on strategic
decisions
● Influence of Geopolitical and Domestic Factors
○ Interplay between geopolitical environment and
domestic interest groups
● Future Projections of ML Adoption
○ Potential for new adoptions displacing human judgment
in adjudicative processes
Next Session
● We will take this discussion forward by
discussing more on the Rule of Law and AI
● We will look at some theoretical perspectives
on this issue
Artificial Intelligence, Law and Justice
Session 10
AI and The Rule of Law - Part-II
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
● In the first of the three sessions on AI and the Rule of Law we began by introducing the key topics in this theme and highlighting some of the concerns about the impact of AI on the Rule of Law
● We discussed how states and private sector
have adopted ML tools for different purposes
● By looking at three cases of AI in USA and
Europe we illustrated why and how concerns
about adoption of AI are also concerns about
human rights and entitlements
Empirical Contingencies
● Complex Relationship Between ML-Driven
Adjudication and Rule of Law
○ Not as straightforward as it appears
○ Empirical contingencies play a significant role
● Rule of Law and Background Conditions
○ Influenced by social and economic conditions
○ Governance choices, including ML tools, alter these
conditions
● Material and Intellectual Equality
○ Formal and procedural rule of law presupposes equality
○ Uneven effects in the absence of such equality
● Disparities in Legal Benefits
○ Those with resources leverage the law better
● Promoting Equal Distribution of Legal Resources
Centrality of Human-Managed Courts
● Assumptions on Human-Managed Courts
○ Raz emphasizes the need for judicial independence
○ Fuller requires a robustly independent judiciary
○ Waldron focuses on human-driven adjudicative
processes
● Challenges to Traditional Views
○ Taekema questions the necessity of courts for law guidance
○ Technological advances offer new ways to achieve rule of
law values
● Unbundling Social and Human Goods
○ Normative ambitions can be separated from traditional
institutional forms
○ Conceptual and institutional elements are less tightly
connected
Future Directions
● Principles of AI Development
○ Transparency in AI creation
○ Inclusivity ensuring diverse
participation
○ Accessibility for all individuals
● Core Interests and Needs
○ AI developed with people's
interests at its core
○ Compliance with the rule of law
Rethinking Rule of Law in an AI Age?
● Challenges to the Rule of Law
○ 'Black box' problem in AI decision-making
○ Systemic biases in automated decisions
○ Lack of transparency and accountability
● Human Involvement in AI
○ Necessity for human oversight in automated decisions
○ Obligation for reason and explainability
○ Varied human involvement based on AI application
● Benefits and Risks of AI
○ Unprecedented benefits of AI applications
○ Risk of exacerbating the digital divide
● Measures to Uphold the Rule of Law
Due Process and AI
● AI as a Broad Field
○ Encompasses various technologies and approaches
○ Aims to create systems performing tasks requiring
human intelligence
● High Complexity AI Applications
○ Examples include Neural Networks and Deep Learning
○ Provide high predictive accuracy
○ Considered black-box models due to output
understanding difficulty
● Concerns with Black Box Models
○ Lack of transparency
○ Difficulty in understanding automated decisions
○ Erodes trust and undermines accountability
● Problem of Interpretability
Importance of AI Literacy
● Traditional Literacy Skills
○ Focus on reading and writing
● AI's Impact on Literacy
○ Generative AI creates and processes text,
audio, images, and video
○ Users must critically engage with AI content
● Understanding AI Mechanics
○ Importance of familiarity with algorithms
○ Recognizing limitations and biases in AI
results
● Ethical Dimension of AI Literacy
○ Encourages responsible and informed use
● Ng et al.'s Definition of AI Literacy
● Promoting Digital Literacy
Rule of Law : Old Concerns, New Issues
● Historical Evolution of Rule of Law
○ Distinct models in the UK, USA, France, and Germany
○ Two archetypes: Aristotle's rule of reason and Montesquieu's rule
of institutional restraint
● Impact of Information Technology Revolution
○ Opportunities for informed decision-making, service delivery,
transparency, and civic engagement
○ Challenges include 'black box' nature of AI systems and widening
digital divide
● 'Black Box' Problem in AI
○ Opaque decision-making processes erode trust and accountability
○ Lack of human comprehensibility poses challenges to the rule of
law
○ Human involvement and explainability are crucial
● Digital Divide and Inequality
● Addressing Challenges
Rule of Law– Variations in a Theme
● Concept of Rule of Law
○ Multiple interpretations
○ Mechanism for curtailing arbitrary state power
○ Attributes necessary for a just society
● Perspective on Rule of Law
○ Mechanism for determining rules
○ Conditions for individuals to reach their potential
○ Achieving personal goals and ideals
● Impact of Technology
○ AI as an inhibiting factor
○ Susceptibility to manipulation and categorization
● Notion of Power
● Technology is more than a mere tool
Artificial Intelligence, Law and Justice
Session 11
AI and The Rule of Law - Part-III
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
● In the last session, the second of the three, we discussed the emerging relationship between technology and the idea and practice of the rule of law
● We stressed the need for AI literacy and argued that defining AI precisely is difficult and that AI has evolved since the 1950s
● Further we also touched upon Rule of Law in
the context of power of Technology
Council of Europe's Perspective
● AI Systems as Socio-Technical Systems
○ Impact is context-dependent
○ Influenced by initial design, data input,
environment, and human values
● Enhancing Rule of Law and Democracy
○ Making public authorities more efficient
○ Freeing up time for long-term issues
○ Identifying public needs
○ Contributing to policy development
Importance of Independent Judiciary
● Role of Judiciary in Rule of Law
○ Ensures fair trial and access to justice for all
○ Maintains principle of equality of arms
● Equality of Arms Principle
○ Each party in a legal dispute has equal
opportunity to present their case
● European Convention on Human Rights
○ Article 6 guarantees right to a fair trial
○ Entitles everyone to a fair and public hearing
within a reasonable time
○ Hearing by an independent and impartial
tribunal established by law
AI in Judicial Decision-Making
● Judiciary Independence and AI
○ AI systems may generate biased recommendations
○ Judges need at least a minimal understanding of AI processes
○ Ensure human oversight and accountability
● Efficiency of AI in Legal Analysis
○ AI can analyze vast datasets more efficiently than
humans
○ Legal decision-making traditionally reserved for
skilled lawyers
○ Algorithmic thought-process must be scrutinized
○ Concerns about bias and transparency
● Distinction in AI Use in Judicial Matters
○ Civil, commercial, and administrative matters
Predictive Policing and Profiling
● High-Risk Classification of AI Systems
○ AI systems used to profile individuals and areas
○ Determines likelihood of re-offending or crime
occurrence
○ Classified as 'high-risk' under the EU's AI Act
● Threats to Rule of Law Principles
○ Equality before the law
○ Presumption of innocence
○ Non-discrimination
● Objections and Calls for Prohibition
○ Fair Trials, EDRi, and 43 others issued a statement
Case for Harmonised Regulation
● Need for Harmonised Regulation
○ Different choices in AI use
○ Importance of upholding the rule of law
● Lack of Transparency and Accountability
○ Intrinsic issue with AI technology
○ Black box reasoning hinders understanding of decisions
● Violation of Rule of Law Principles
○ Access to effective judicial remedy compromised
○ Infringement of fundamental rights without
accountability
● Impact on Fundamental Rights
○ Right to not be discriminated against affected
○ Right to an effective remedy diminished
Challenges in Accountability
● Equality Before the Law
○ No one should be above the law
○ Public and government officials must be accountable
● AI Decision-Making Challenges
○ Difficulty in determining accountability for AI outcomes
○ Opaque or 'black-box' reasoning complicates responsibility
● Potential Accountability Options
○ Responsibility of the person inputting data
○ Accountability of AI system designers and manufacturers
● Implications for Technological Innovation
○ Possible chilling effect on innovation
○ Developers may avoid designing AI systems due to
accountability burden
Ensuring Accountability and Transparency
● Importance of Accountability and
Transparency
○ Essential for the rule of law
○ Maintains mutual trust between authorities
and citizens
● Challenges in Practical Implementation
○ Lack of practical guidance for execution
● Impact of Erosion of Trust
○ Serious implications in countries with low trust
in government
Legal Framework
● Need for Strong Legal and Ethical
Frameworks
○ Ensures AI adheres to rule of law principles
○ Accommodates fast-paced AI development
● Continuous Monitoring and Impact
Assessments
○ Ensures AI systems comply with rule of law
○ Prevents negative impacts on rule of law
● Regulation and Guidelines Development
○ National and multilateral efforts
○ Council of Europe's focus on ethical framework
● International Policy Coordination
● Balancing Innovation and Regulation
Rule of law in the AI era: addressing
accountability, and the digital divide
● Concept of the rule of law in the context of AI
advancements is evolving.
● There is a need for transparent AI decision-making
processes and human involvement to ensure
accountability.
● Two main challenges: the 'black box' problem and the risk
of exacerbating the digital divide.
● There should be measures to prevent and narrow the
digital divide, suggesting that both governments and
private entities should implement such measures.
● The importance of human involvement in automated
decision-making processes to uphold the rule of law.
● Addressing these challenges requires upholding the rule of
law through human involvement and possibly enforcing
an obligation for reason and explainability.
Rule of law in the AI era: Where
Do We Go from Here
● How AI can advance the Rule of Law is not clear because our experience with the current generation of AI is too short
● But as AI evolves and gets more and more integrated into day-to-day life, we will need safeguards at least to preserve some aspects of the Rule of Law, including the right to equality
● While there is some sensitivity to AI and Rule of Law in
Council of Europe’s AI Treaty this is missing in most
discussions on AI governance
● The changing nexus between technology and Rule of Law
should be understood in the broader context of politics of
technology than thinking technology as a mere tool and AI
as yet another and emerging technology
References
● The discussions in the three sessions were largely based on the following:
● Aziz Z. Huq, "Artificial Intelligence and the Rule of Law", Public Law and Legal Theory Working Paper Series, No. 764 (2021), University of Chicago Law School, Chicago Unbound.
● Kouroutakis, A., "Rule of law in the AI era: addressing accountability and the digital divide", Discov Artif Intell 4, 115 (2024). https://doi.org/10.1007/s44163-024-00191-8
● Stanley Greenstein, "Preserving the rule of law in the era of artificial intelligence (AI)", Artificial Intelligence and Law (2022) 30:291–323.
● Emily Binchy, "Advancement or Impediment? AI and the Rule of Law", The Institute of International and European Affairs, Dublin, 2022.
● Bianca-Ioana Marcu, "The World's First Binding Treaty on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law", Future of Privacy Forum, 2024.
● As this is an emerging theme we have only introduced the debates and perspectives on it. References to additional literature can be made available on request.
Next Session
● In the next session we will discuss the idea of
Algorithmic Justice taking forward our
discussion on Algorithms, AI and Algorithmic
Decision Making
Artificial Intelligence, Law and Justice
Session 12
Algorithmic Justice-Part-I
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
⚫In the last session, the third of the three, we discussed how AI's impact on the Rule of Law raises new concerns, and the attempts to address them
⚫We stressed the need for addressing issues in Accountability, as well as the case for regulation of AI with the Rule of Law as a consideration
⚫We pointed out that addressing these challenges requires upholding the rule of law through human involvement and possibly enforcing an obligation for reason and explainability
Algorithmic Justice – The idea
⚫This is an emerging and new concept, and it is linked with larger issues of fairness, justice and equality in the context of AI, algorithmic decision making and technologies like Facial Recognition Technology (FRT).
⚫It can be read as a ruthless critique, but it goes beyond that. For example, the Berkman Klein Center for Internet and Society at Harvard University states
• “Our work on algorithms and justice (a) explores ways in which
government institutions incorporate artificial intelligence, algorithms, and
machine learning technologies into their decision making; and (b) in
collaboration with the Global Governance track, examines ways in which
development and deployment of these technologies by both public and
private actors impacts the rights of individuals and efforts to achieve social
justice. Our aim is to help companies that create such tools, state actors
that procure and deploy them, and citizens they impact to understand how
those tools work. We seek to ensure that algorithmic applications are
developed and used with an eye toward improving fairness and efficacy
without sacrificing values of accountability and transparency.”
https://cyber.harvard.edu/projects/algorithms-and-justice
Origins and Definition
⚫The idea of Algorithmic Justice (AJ) was popularized by Dr. Joy Buolamwini, whose experiences with FRTs showed that the technology's applications were biased: her face was recognized only when she wore a white mask. Based on her experience, research and others' experiences with AI (including FRTs) she wrote the book 'Unmasking AI' and founded the Algorithmic Justice League (AJL). Her work was featured in the documentary 'Coded Bias'. https://sanford.duke.edu/story/dr-joy-buolamwini-algorithmic-bias-and-ai-justice/
⚫Defining AJ. Algorithmic Justice: "The application of principles of social justice and applied ethics to the design, deployment, regulation, and ongoing use of algorithmic systems so that the potential for harm is reduced" (Head, Fister, and MacMillan, 2024).
⚫Algorithmic Reparations: Drawing from intersectionality and critical race theory, Davis, Williams, and Yang argue that we must "name, unmask, and undo allocative and representational harms as they materialize in sociotechnical form." They suggest algorithmic reparations as "a foundation for building, evaluating, and when necessary, omitting and eradicating" such systems. https://library.highline.edu/c.php?g=1401364&p=10368769
AJ – Wider Application
⚫Although AJ may appear to be applicable only in some sectors, in reality, it has
wider scope, particularly when algorithms, ML and AI systems are deployed
across sectors including health and science.
⚫But the context and stakeholders will vary and so are the ideas of addressing AJ.
⚫For example, a report on AJ in Precision Medicine suggested:
“Train model developers and users on health equity and health justice
frameworks, the profound impacts of health and health care disparities on
individuals and communities, decision-making trade-offs and their impacts”
⚫It is essential to determine how algorithms can be used to uncover biases in the
databases, algorithms, and related systems; and how algorithms can preclude
such biases in establishing future databases” Towards Algorithmic Justice in
Precision Medicine, A Report, UCSF, 2024
Technical Solutions, Individuals and AJ
⚫“We argue, further, that algorithmic ‘quality’ also demands
adherence to basic constitutional principles and legal
requirements, at least for tools intended to inform public
authority decisions. In these application domains, constitutional
principles, administrative law doctrine and human rights norms
are a crucial part of the relevant socio-technical context and must
therefore inform and constrain the exercise of decision-making
authority by criminal justice officials."
⚫How do 'technical' design-choices made when building algorithmic decision-making tools for criminal justice authorities create constitutional dangers? by Karen Yeung and Adam Harkens. In other words, seemingly technical choices and solutions can have larger implications for constitutional rights. However, the fundamental problem is more acute than this.
Technical Solutions, Individuals and AJ
⚫While Constitutions recognize us as individuals, algorithms do not.
⚫“Algorithmic representations give no special attention to
biological, psychological, and narrative properties, and, as
such, they fail to capture central aspects of our ordinary
representation of human identity. Indeed, we have argued,
algorithms make predictions about us by relying on
properties that do not directly relate to nor reflect the
individuals we understand and represent ourselves as
being” Artificial intelligence and identity: the rise of the
statistical individual -Jens Christian Bjerring, Jacob
Busch
Old Biases, New Modes?
⚫Human biases are well known and have long been dealt with through mechanisms that curb the arbitrary exercise of power based on biases and prejudices. But with algorithms and Machine Learning, "Human bias gets re-packaged in complex layers of mathematical code and computations, and gains the facade of objectivity, rendering it particularly difficult (not less so) to spot and challenge – especially in the case of complex, black-box algorithms"
⚫Opacity and technical complexity add to the information asymmetries. Further, a model that performs well on training data and testing data could make many mistakes in dealing with real life circumstances when they are different from what it was trained on. Moreover, over-reliance on technical black box models reduces the chances for meaningful human involvement. AI algorithmic oversight: new frontiers in regulation, Madalina Busuioc
Criminal Justice – Tools and Errors
⚫Prof. Karen Yeung points out that in the Criminal Justice System one is presumed innocent until proved guilty
⚫Falsely convicting the innocent is an error (Type I error)
⚫This is more serious than letting the guilty go free (Type II error)
⚫Public and government officials must be accountable
⚫Right to Due Process
⚫The system therefore accepts an increased risk of Type II errors
⚫In order to minimize Type I errors
⚫But often developers of ADM tools do not take this into account (a toy numerical illustration follows this slide)
⚫Implications
⚫The tools cannot be substitutes for due process based decision making
⚫Their impact depends upon the context of use and the decisions given
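A minimal sketch, with invented risk scores, of the trade-off described above: raising the threshold at which a tool flags someone reduces Type I errors (low-risk people wrongly flagged) but increases Type II errors (high-risk people missed). Where the threshold sits is a design choice, which is why developers' default settings matter for due process.

```python
# Invented scores for illustration only.
low_risk_scores  = [0.1, 0.2, 0.3, 0.4, 0.6]   # people who are in fact low risk
high_risk_scores = [0.5, 0.6, 0.7, 0.8, 0.9]   # people who are in fact high risk

def error_rates(threshold):
    type_1 = sum(s >= threshold for s in low_risk_scores) / len(low_risk_scores)    # wrongly flagged
    type_2 = sum(s < threshold for s in high_risk_scores) / len(high_risk_scores)   # wrongly cleared
    return type_1, type_2

for threshold in (0.4, 0.6, 0.8):
    t1, t2 = error_rates(threshold)
    print(f"threshold {threshold}: Type I = {t1:.0%}, Type II = {t2:.0%}")

# A higher threshold shrinks Type I errors but lets more Type II errors through;
# choosing the 'right' trade-off is a normative choice, not a purely technical one.
```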
Algorithmic Decision Making and Context
⚫Thoughtless application of ADM can result in
⚫It being used in contexts where it is not the right
solution
⚫Rights being violated directly or indirectly
⚫Recommender Vs. Judge
⚫Recommending to me what I should watch or buy
⚫Is not the same as Judging me by ADM
⚫Should ADM have a place in Criminal Justice System
⚫‘Efficiency’ Vs. rights and transparency
Fallacies and Grounded Truth
⚫The "algorithms don't harm people" fallacy
⚫(Guns don't harm people, people harm people...)
⚫ Even if a tool does not, in and of itself, interfere with or engage human rights, it
does not follow that we can ignore other kinds of harm: like
• Troubling tendency of ‘digital enchantment’ in contemporary policy
discussions,
to ignore that ADM systems are powerful technologies which are capable of
producing dangerous decisions (even if tempered by a team of benevolent
developers, or the ‘perfect’ exercise of discretion by a human decision-maker),
because of the capacity to operate automatically, at scale, and to trigger action
that is remote in both time and space from the location at which the action is
triggered.
E.g. ‘government by database’ much more threatening to human rights and
democracy than a policeman with a pen and notebook”
Algorithmic Accountability
⚫Scholars like Prof. Karen Yeung have argued that ADM applications in the Criminal Justice System have serious consequences.
⚫How to counter that, and what are the possible interventions?
⚫She suggests solutions through Human Rights Law,
⚫Administrative Law (including use of the Judicial Review Principle), and Anti-discrimination law
Algorithmic Accountability
Can lawyers working with developers of ADM ‘fix’ some of
these
⚫Should there be institutional mechanisms that build
safeguards when ADM is used in Criminal Justice
System, Employment, Public Welfare
⚫Is it possible to use ADM in a positive way and for social
good
⚫Given the widespread use of ADM in public sector and
by private sector who will assess them and monitor their
impacts?
In the next class we will discuss these.
Artificial Intelligence, Law and Justice
Session 13
Algorithmic Justice-Part-II
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
⚫In the last session the idea of Algorithmic Justice was introduced
⚫Drawing upon the work of Prof. Karen Yeung and others, the problems in using AI in the Criminal Justice System were discussed
⚫We also discussed how such uses can be problematic, as they can have implications for human rights, and ADM systems may not be compatible with the tenets of Criminal Justice principles
Opacity and Decision Making Algorithms
⚫Simon Chesterman has identified three challenges posed by opacity
in ADM systems.
⚫“First, it may encourage — or fail to discourage — inferior decisions by
removing the potential for oversight and accountability. Secondly, it
may allow impermissible decisions, notably those that explicitly or
implicitly rely on protected categories such as gender or race in making
a determination. Thirdly, it may render illegitimate decisions in which
the process by which an answer is reached is as important as the
answer itself “. He also points out that naturally opaque systems may
need novel forms of explanation or some decisions cannot be explained
and some decisions should not left in the hands of machines”
⚫THROUGH A GLASS, DARKLY:
⚫ARTIFICIAL INTELLIGENCE AND THE PROBLEM OF OPACITY
Algorithmic Law
⚫How do we understand algorithms, particularly when they take decisions? The editors of the European Journal of Risk Regulation point out that algorithms often work as a functional equivalent of a legal rule, similar to a structure of command and consequence. Further, we need to study the legal rules as well as the mechanisms embedded in them as part of the new emerging discipline of algorithmic law.
⚫They further make some important observations. After reviewing the literature on the regulation of algorithms they ask: "Today, we should also evaluate the contemporary experience of algorithmic decision making and set standards for computer engineers to eventually pass a test in programming code that could resemble the evidence-based, justifiable, reasonable and prudential activities of a legal decision-maker. In terms of the performance that we would expect of AI in an imitation game, can machines provide useful predictions for decision-making systems in the legal domain?". They propose something similar to a Turing Test for it.
⚫They also point out that there is a counterview that only human beings can do decision making as they have empathy, and that is a core human feature essential for lawyering and for Judges.
⚫Artificial Intelligence Risks and Algorithmic Regulation, Pedro Rubim Borges Fortes, Pablo Marcello Baquero and David Restrepo Amariles, European Journal of Risk Regulation (2022), 13, 357–372
AI as Judge ?
⚫In the literature there are arguments that AI can function as a Judge and
so are counter arguments. There are also ideas like SMART law –
“scientific, mathematical, algorithmic law” which is shaped by risks and
technology
⚫But should we go for choices that are technically feasible even if they are not morally acceptable to some or many? It is one thing to use machines to aid in legal decision making; it is another to allow them to make the decisions. So they argue:
⚫“On the other hand, some activities are considered to be essentially
human, such that our society would value the presence of a “human in the
loop” as the decision maker. Robots and AI may carry out the services and
activities that we no longer want to perform. In this sense, we should
carefully examine whether we would prefer to be judged by human
intelligence or by AI.” They rightly point out
⚫“Algorithmic law and regulation challenges everyone to rethink power,
democracy, regulation and institutional design”
How (not) to use AI
⚫Many AI-based solutions in sectors like health enable AI to diagnose, suggest solutions and make recommendations. For example, an AI system can identify the need for a cataract surgery and may recommend it. But it is left to humans to decide, although an expert can agree with the finding of the AI system.
⚫In some sectors AI's expertise can be as good as that of a human expert.
⚫In law and justice too, AI can give a reasoned solution or finding, or can justify its 'order'.
⚫But can that per se justify that we should 'employ' AI as a judge?
⚫Or should we still take the view that while there can be an 'AI advocate', an 'AI Judge' is not permissible?
⚫It can be argued that some of the problems in opacity can be
addressed through technical solutions and systems can be made
more ‘explainable’
Solutions and Approaches
⚫Can systems be designed in such a way that users and designers can collaborate and develop them based on the lived realities of persons? According to Siddharth de Souza, the absence of careful legal design can result in more exclusion and marginalization (S. de Souza, "The Spread of Legal Tech Solutionism and the Need for Legal Design", European Journal of Risk Regulation)
⚫Algorithmic audits can be another solution.
⚫Ethics committees to evaluate algorithms can be a good way to address ethical issues.
⚫Bringing Responsible AI and Explainable AI concepts into practice will also be useful.
⚫But we need clarity on what we want to address or solve and what trade-offs we are willing to agree to.
Good Intentions and Reality
⚫It is presumed that AI-based systems can offer some services such as legal guidance, if not advice, and that they can be used to provide information to citizens in multiple ways. But according to Blank and Osofsky the reality is different. Based on an extensive study they point out: "However, the use of automated legal guidance also comes with important costs. Critically, we show how, precisely because of its perceived strengths, especially when compared to the often messy, or ambiguous, formal law, automated legal guidance can obscure what the binding legal provisions actually are". They also point out that while such guidance may be given by tools, legal counsel arguing on behalf of governments may not give the same advice or may take a different position. Thus those who can afford it can get better legal advice. So while AI systems can provide legal assistance, that only partially addresses the issue of getting the right legal advice.
⚫Joshua D. Blank and Leigh Osofsky, Automated Agencies, Cambridge University Press, 2025
Final word?
⚫“As AI’s increasing use in the judiciary makes codified
justice more appealing in other contexts, its downsides
are likely to be reproduced, too. Problems analogous to
the ones discussed above will likely arise. And the basic
menu of responses, with all their limitations, is also likely
to recur. Finding a path forward will require attention not
only to technology and law, but also to technology’s
impact on conceptions of justice, in both its human and
artificially intelligent forms"
⚫Developing Artificially Intelligent Justice, Richard M. Re & Alicia Solow-Niederman, 22 STAN. TECH. L. REV. 242 (2019)
But a new Wild West?
⚫We need a comprehensive approach or a mechanism to
understand in depth and views of stakeholders.
“When deployed within the justice system, AI technologies have serious
implications for a person’s human rights and civil liberties. At what point could
someone be imprisoned on the basis of technology that cannot be explained?
Informed scrutiny is therefore essential to ensure that any new tools deployed
in this sphere are safe, necessary, proportionate, and effective. This scrutiny is
not happening. Instead, we uncovered a landscape, a new Wild West, in which
new technologies are developing at a pace that public awareness, government
and legislation have not kept up with.”
▪ Technology rules? The advent of new technologies in the justice system House
of Lords Committee 2022
To Sum Up
⚫In these two sessions we have barely scratched the surface. This is an important emerging theme, with multiple views ranging from pessimism to optimism. For every 'why?', there can be a 'why not?'. The literature is vast. So this discussion is more a partial introduction than a grand exposition of the topic. Literature is cited in the presentation, but more can be added.
Next Session(s)
⚫The next four sessions will be on AI and Copyright
Artificial Intelligence, Law and Justice
Session 14
Artificial Intelligence and Copyright-Part-I
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
A Quick Overview
⚫Debates on AI and Copyright are raising fundamental
questions because AI is also a ‘creative’ technology
⚫Traditional roles and understandings of ‘Author’,
‘Authorship’ are being questioned
⚫So is the applicability of copyright for training purposes
in AI
⚫Old issues like ‘fair use’ are being revisited and
challenged
⚫In these 4 sessions we quickly cover many important issues and identify trends, dilemmas and proposed solutions
⚫As some themes are to be discussed in different contexts for different purposes, some repetition is inevitable
Four Factors Dominating the Discussion
⚫Control
⚫Importance of managing and directing processes
⚫Compensation
⚫Fair and adequate remuneration
⚫Transparency
⚫Clarity and openness in operations
⚫Legal Certainty
⚫Ensuring legal clarity and stability
Legislators Trying to Recalibrate the Balance of Interests
⚫Strengthening the Position of Rights Holders
⚫Efforts to protect the rights of individuals and entities
⚫Strengthening the Position of AI Companies
⚫Singapore's approach to support AI companies
⚫Indecision
⚫Lack of clear direction in some regions
⚫Conditional Control
⚫European Union's regulatory measures
⚫Country-Specific Approaches
⚫China's stance on AI regulation
⚫United Kingdom's policies
⚫United States' regulatory framework
Overview of AI in Artistic Creation
⚫AI as a New Artistic Tool
⚫Enables creation of sounds, pictures, and texts
⚫Protected by artistic freedom under Art. 13 EUCFR and Art. 10(1)
ECHR
⚫Risk of Copyright Claims
⚫AI use often seen as a threat to human creativity
⚫Technological progress in reproduction and distribution
challenges culture industry’s value chain
⚫AI Leading to New Copyright War
⚫Concerted action by rights holders
⚫Artists rarely featured in the conflict
⚫Crucial Question for Artists
⚫Whether AI’s input taints its output
⚫21st century’s paintbrush seen as unlawful by many lawyers and
content industry
Artistic Use of AI
⚫Artists' Use of AI
⚫Generate images and sounds
⚫Manually modify or place in self-designed contexts
⚫Not Limited to Prompts
⚫Fine-tune using specific material
⚫Part of artistic practice protected under Art. 13 EUCFR and Art.
10(1) ECHR
⚫AI as a Tool, Not an Alternative
⚫New tool for artists
⚫Same acceptance problems as art created using new technical
means
⚫Historical Comparison
⚫Photography faced similar acceptance issues
Copyright Issues with AI
⚫AI's Dependency on Copyrighted Material
⚫AI needs to be fed material to create new content
⚫Often uses copyrighted material for training
⚫Reproduction of Copyrighted Images
⚫AI must 'see' artworks to produce similar styles
⚫Involves reproduction of copyrighted images
⚫Potential Copyright Infringement
⚫Generated output may infringe on copyrights
⚫Liability under copyright law is complex
⚫Distinction Between Input and Output
Phase
⚫Different legal considerations for input and output
Input and Output Phases
⚫Input Phase of Generative AI Models
⚫Digitised image and text material used
⚫Material collected by web crawlers or selected by artists
⚫Most input material protected by copyright
⚫Training Process and Copyright Infringement
⚫Reproductions made during training constitute copyright
infringement
⚫Exception for text and data mining under CDSM Directive
⚫Right holders can reserve such uses
⚫Liability for Copyright Infringement
⚫US company liable for copyrighted content in learning process
⚫Output Phase and Further Infringement
⚫Exceptions and Limitations
Legal Implications and Pastiche
⚫AI Learning and Copyright Concerns
⚫AI learning is considered copyright infringement if text and data mining
exceptions do not apply
⚫AI output is only safe from copyright claims if input phase is covered by
exceptions
⚫Artistic Use of AI
⚫Artists using AI as a creative tool must have input phase protected
⚫Selection of materials for fine-tuning AI is part of artistic practice
⚫Technical character of fine-tuning does not change its artistic nature
⚫Legal Precedents and Artistic Freedom
⚫Sampling in music is a known practice covered by artistic freedom
⚫Reproductions in this context are protected by CJEU and German
Federal Constitutional Court
⚫Artistic freedom protects both the process and the final artwork
AI Copyright War Dynamics
⚫Copyright as Fundamental Right
⚫Motivates new creations by rewarding old ones
⚫Silicon Valley's Attitude
⚫Eric Schmidt's controversial statement at Stanford
⚫Blatant disregard for IP rights
⚫Rights Holders vs. Creators
⚫Economic interests often disguised as defense of the arts
⚫Creators' fundamental rights overshadowed by corporate
interests
⚫Rights holders focus on existing works, artists create new ones
⚫In a recent case a federal judge in Thomson Reuters v. Ross
Intelligence rejected a fair use defense in an AI training context.
But there is no clarity as there are many cases pending in many
courts on this or similar matters
AI Copyright War and Political Economy
⚫The Political Economy of AI cannot be ignored in any discussion on AI and copyright. According to a recent report, "In Web 1.0, we worried about lower direct rewards for artists dulling incentives. In Web 2.0, we found out that if there are indirect rewards (such as YouTube fame), this can compensate. Now we have a world where indirect rewards are also challenged because human creators will ultimately become more anonymous, at least insofar as they contribute to the synthetic output of generative AI." Lutes, Brent A. ed., Identifying the Economic Implications of Artificial Intelligence for Copyright Policy: Context and Direction for Economic Research, U.S. Copyright Office, 2025, p. 61
⚫According to Joshua Gans, "it is demonstrated that an ex-post 'fair use' type mechanism can lead to higher expected social welfare than traditional copyright regimes". Copyright Policy Options for Generative Artificial Intelligence https://www.nber.org/papers/w32106
⚫Irrespective of criticisms of copyright as an incentive and an exclusive
right, it is obvious that AI is going to create new issues for creative artists
including writers. On the other hand many of them who have assigned
their copyrights to the publishers or to others, may not gain much when
the copyright holder licenses copyrighted materials for training and other
purposes
AI Copyright War and Political Economy
⚫To give a simplified picture, we have the Big Tech companies (Meta/Facebook, Microsoft, Amazon and Google, among others) active in AI, and big publishers, music labels and media empires, engaged in copyright wars; but the creators, artists and others who produced the works are often not in a position to engage in protracted litigation even when they know that their creative works have been used for training or otherwise (mis)appropriated.
⚫On the other hand much of the battle is also on the lines of ‘ fair use’,
‘incentives for creators’ , ‘need for a flexible approach towards copyright to
incentivise AI’ , and, ‘ creating new schemes for rewarding and redefining limits
of copyright protection’
⚫But if we go beyond the rhetorical claims and assertions we can easily understand that the economic stakes are high.
⚫Given the insatiable appetite of AI for data and the potential for endless
generation of synthetic data, the copyright war is also about who will control
and direct the development of AI?
⚫Political Economy aspects and gains from valorisation through copyright have
always been the elephants in the room whether we like it or not
AI Copyright War and Political Economy
⚫From another vantage point it is argued “Law constructs and reinforces
power in the GVCs of AI by providing the foundation for a power-
enabling form of contractual governmentality and by cementing IP fences
that perpetuate the uneven distribution of value across the chain.
Meanwhile, law amplifies the geopolitical framing that wants
transnational AI development to be shaped based on national interests.”
⚫Petros Terzis, Law and the political economy of AI
production, International Journal of Law and Information
Technology, Vol31, Issue 4, Winter 2023, Pages 302–
330, https://doi.org/10.1093/ijlit/eaae001
⚫There are national variations of ‘fair use’ and national policies on AI that
will play a key role in deciding some of the contentious issues.
⚫Striking a balance between innovation promotion and upholding
interests of copyright holders/ authors/creators may be an ideal but how
that will be negotiated and agreed upon is not clear now.
Next Session
• In the next session we will discuss the legal aspects of Machine Learning and Copyright
Artificial Intelligence, Law and Justice
Session 15
Artificial Intelligence and Copyright-Part-II
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
• In the last session we had an overview of issues in AI and copyright
• We highlighted some concerns of the creators and artists and situated them in the broader political economy of copyright, IP and Innovation
• We stressed the need to decode the rhetoric behind some of the terms and claims to understand what matters in the debate
Copyright implications for
data scraping and mining
⚫Copyright Law and Data Processes
⚫Direct impact on data scraping, mining,
and learning
⚫Corpora may include copyrighted works
⚫Infringement Risks
⚫Digital copies can infringe economic
right of reproduction
⚫Changes in material can be considered
as adaptation
⚫Exceptions and Limitations
⚫Research or text and data mining
exceptions
⚫Not all activities of researchers and firms are covered (a minimal sketch of a mining pipeline follows this slide)
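To make the infringement-risk point above concrete, here is a minimal, self-contained Python sketch of a text and data mining step. The two short strings are stand-ins for works that a real pipeline would first download (i.e., copy) from the web or a database: the analytical output is new, but producing it requires holding reproductions of the underlying works.

```python
from collections import Counter

# Stand-ins for downloaded works; a real TDM pipeline would first make
# digital copies of the source texts before this step.
corpus = [
    "the quick brown fox jumps over the lazy dog",
    "the slow brown bear walks past the quick fox",
]

# "Mining": derive statistics and patterns from the copied texts.
term_frequencies = Counter(word for text in corpus for word in text.split())
print(term_frequencies.most_common(5))
```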
Legal grey areas and
rapid technological development
⚫Layered Protection Complexity
⚫Confusing for users and regulators
⚫Machine Learning in a Legal Grey Zone
⚫Relies on established data processing and analysis
lifecycle
⚫Rapid Integration of Powerful Models
⚫Accelerating pace of integration in services
⚫Impact of Generative AI
⚫Increased visibility of consumer-facing applications
⚫Led to lawsuits and proposed interventions
Dominant scenarios in
generative AI
⚫Generative Artificial Intelligence (AI)
⚫Becoming more visible in the current policy
context
⚫User-Facing Applications
⚫Large language models like OpenAI's ChatGPT
⚫Generative image applications like Midjourney
Continuously Deployed vs.
Off-the-Shelf Models
⚫Continuously Deployed Models
⚫Rely on current data updates
⚫Off-the-Shelf Models
⚫Fine-tuned or aligned for specific
purposes
⚫Data collection is essentially complete
European Legal Analysis of
Reproduction Rights
⚫Right of Reproduction under ISD
⚫Contained in Art. 2 of the Information Society Directive (ISD)
⚫Temporary exception in Art. 5(1) ISD
⚫Enables technological development
⚫Tension between ISD and CDSM Directives
⚫Copyright in the Digital Single Market (CDSM)
⚫Directive, brought into law in 2019
⚫Arts. 2/5(1) ISD vs. Arts. 3 and 4 of CDSM
⚫Text and data mining exceptions
⚫Research Use under Art. 3 CDSM
⚫Subject to lawful access
⚫Contracts involved
⚫Opt-out under Art. 4 CDSM
⚫For non-scientific purposes
⚫But there are issues in lack of harmonization
between ISD and CDSM
Scientific research uses
⚫Unclear Copyright in Machine Learning
⚫Uncertainty affects research uses, as seen in
⚫Lawful access terms control research possibilities and costs
⚫Licensing Arrangements for Research
⚫Providers set terms for valuable data sets
⚫Right holders may license material to AI firms
⚫Threats to withdraw archives from public interest research
⚫Live Online Services and Public Interest
⚫Unclear line between competitive control and public interest
⚫The line between the EU's TDM (Text and Data Mining) exceptions and the SGDR has not been successfully drawn
⚫What is the Sui Generis Database Right (SGDR) and how does it
relate to other rights in Databases?
⚫For legal purposes, a ‘database’ means “a collection of independent
works, data or other materials arranged in a systematic or
methodical way and individually accessible by electronic or other
means https://www.openaire.eu/sui-generis-database-right-sgdr
Impact on individual creators
⚫Withdrawal from Machine Learning Contexts
⚫Possible under Art. 4 CDSM opt-out
⚫May reduce diversity and quality of AI models
⚫Licensing and Revenue Sharing
⚫Individual creator's share likely minimal
⚫Foundation models have billions of parameters
⚫Trained on trillions of tokens
Policy Choices for AI and Copyright
⚫ Obligations to disclose training data
⚫ Importance of transparency in development and use
⚫ Legal and ethical considerations
⚫ Moral rights of authors and creators
⚫ Collective licenses for machine learning
⚫ Shared access to datasets
⚫ Collaboration among organizations
⚫ Collective licensing is a simple way to manage the reuse of small extracts of published copyright material. It provides an efficient, cost-effective means of satisfying the two sides of the equation: those who wish to use extracts of content, and those who hold the rights to it.
The value for licence holders
For organisations that wish to reuse copyright content without breaking the law, a collective licence—sometimes referred to as a blanket licence—provides access to a rich variety of material in return for an appropriate fee. Schools, universities, central and local government and businesses all use blanket licences to secure permission to use extracts of content by copying, scanning and other means.
https://www.pls.org.uk/collective-licensing/what-is-collective-licensing/
⚫ Promote Open Source
⚫ Benefits of open source software
⚫ Community contributions and improvements
Collective Licenses for Machine Learning
⚫Potential Benefits
⚫Prevents innovation hold-ups
⚫Feasibility Issues
⚫Difficulty in assembling sufficient rights
⚫Becomes difficult when it involves
different types of contents such as text,
sound, audio-visual, data from multiple
sources and using small portions of them
Levy approach for equitable remuneration
⚫Levy Approach Proposal
⚫Suggested by Prof. Martin Senftleben
⚫Focus on equitable remuneration to authors
⚫"Generative AI and Author Remuneration", International Review of Intellectual Property and Competition Law 54 (2023), pp. 1535-1560
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4478370
⚫EU’s Rental Directive as a Model
⚫Levy paid by providers of generative AI systems
⚫Funds directed to social and cultural funds of collective management
organizations
⚫Purpose: Fostering and supporting human literary and artistic work
⚫Challenges of Levy Approach
⚫Bureaucratically challenging
⚫Issues around levy efficiency remain unresolved
⚫Litigation on who pays and who receives
Open Source Advantage
⚫Transparency in Open Source AI
⚫Potential to outperform closed AI systems
⚫Wide use in operating systems and security
protocols
⚫Common Training Sources
⚫Repositories governed by open licenses
⚫Examples include Wikipedia and GitHub
⚫Varieties of Licenses for different purposes
Lifecycle of Machine Learning and
Legal Implications
⚫Legal, Technological, and Contractual Opacity
⚫Leads to undesirable allocation of licenses and
obligations
⚫Risks of Training and Deploying Unlicensed Models
⚫Currently risky in the EU
⚫Will remain risky for the foreseeable future
⚫Movement Towards Fully Licensed AI Copyright
Environment
⚫Regardless of available exceptions
⚫Key Question: Obtaining Suitable Licenses
⚫Where to obtain a license
⚫Under what conditions
⚫Global Deployment of AI/ML models vs. National
Variations
Public Benefit and
Copyright Works in AI Training
⚫ Public Benefit of Using Copyright Works
⚫ Uncertainty about the societal impact
⚫ Potential for a fully licensed AI environment
⚫ Current Copyright Solutions
⚫ Controlled by major right holders and large AI firms
⚫ Alternative Approach
⚫ Machine learning as a general purpose technology
“Knowing if an emerging technology is general purpose is of significant strategic
importance for managers and policymakers. Such general purpose technologies
(GPTs) are rare and hold potential for large scale economic impact because they
push the production possibility frontier out several times (Bresnahan and
Trajtenberg 1995). Examples of GPTs include the steam engine, electricity,
computers, and the internet (Lipsey, Carlaw, and Bekar 2005).” Avi Goldfar, Bledi
Taska, Florenta Teodoridis
https://www.nber.org/system/files/working_papers/w29767/w29767.pdf
⚫ Challenges for Copyright Law
⚫ Novelty of technology and its ramifications – its more than copying or
reproduction
⚫ Balancing market entry, open source innovation, and creator remuneration
⚫ Necessity to address these tensions
Next Session
Artificial Intelligence, Law and Justice
Session 16
Artificial Intelligence and Copyright-Part-III
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
⚫We discussed the issues in Machine Learning and Copyright, including issues in Text and Data Mining (TDM)
⚫We discussed various policy options, including considering ML as a General Purpose Technology
Use of digital tools by authors
⚫Traditional Digital Tools for Authors
⚫Spelling & grammar and thesaurus features in Microsoft Word
⚫Educational chat bots for learning and knowledge acquisition
⚫Generative AI Models
⚫ChatGPT and other NLP-based tools
⚫Generate high-quality works with a prompt
⚫Extract and contextualize information from big data
⚫Improve based on user feedback
⚫Implications of Generative AI
⚫Changes in human-technological interaction
⚫Impact on GDPR, privacy, and IP rights
⚫Need to remunerate human authors when Generative AI uses their works
Parameters in GenAI models
Legal Regimes: Blurred Boundaries
⚫Role of Language and Legal Regime in AI
⚫Language as both input and output in Generative AI
⚫Example: Google's translate feature and virtual
assistants
⚫Content Moderation and Data Enrichment
⚫Accuracy of content in moderation practices
⚫User input data enriches NLP-driven tools
⚫Copyright Protection and AI Output
⚫Distinction between reading and copying blurs
⚫Challenges of Licensing and Synthetic Data
• Synthetic Data is a new beast which we have to
understand and deal with
Definition and generation of synthetic data
⚫Definition of Synthetic Data
⚫Artificially generated using real data as input (a toy sketch follows this slide)
⚫Has the same statistical properties as real data
⚫Imitation or something new?
⚫Generation of Synthetic Data
⚫Requires sufficient real data to understand style and patterns
⚫Possible to generate works in the technique of a given author
⚫But can synthetic data come anywhere near the original work in terms of the author's insights and brilliance in thinking?
⚫Ease of Generation
⚫Easier to generate for creative works due to consistent style
⚫Authors' styles evolve but retain individual elements
⚫Usage in AI Training
⚫Synthetic outputs used to train AI systems
⚫Expected to surpass organically generated data soon
⚫Difficult to differentiate from real data
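A minimal sketch, using numpy and made-up numbers, of the idea that synthetic data is generated from real data while preserving its statistical properties: simple statistics are learned from a fabricated "real" dataset and new records are sampled from them. Real synthetic-data generators are far more sophisticated, but the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "real" data: 1,000 observed values of some attribute.
real = rng.normal(loc=50.0, scale=10.0, size=1000)

# Learn the statistical properties of the real data...
mu, sigma = real.mean(), real.std()

# ...and generate synthetic records that share those properties
# without reproducing any individual real record.
synthetic = rng.normal(loc=mu, scale=sigma, size=1000)

print(f"real:      mean={real.mean():.1f}, std={real.std():.1f}")
print(f"synthetic: mean={synthetic.mean():.1f}, std={synthetic.std():.1f}")
```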
Value and transition to synthetic data
⚫Analytical Findings from Synthetic Data
⚫Synthetic data provides similar insights as
original data
⚫Factors Facilitating Transition
⚫Rapid uptake of computational abilities
⚫Increasing storage possibilities
⚫Advanced and improved algorithms
⚫Examples of Synthetic Data Usage
⚫Gupta et al.'s 'SynthText in the Wild' for scene
text detectors
⚫Dosovitskiy et al.'s synthetic images of chairs
for generative AI models
Blurring boundaries between
real and synthetic data
⚫Blurring Boundaries Between Real and
Synthetic Data
⚫Synthetic data increasingly blurs the line between true
and false, real and imaginary
⚫Deep fakes often contain substantial synthetic
elements
⚫But difficult to identify and differentiate
⚫Real-Time High-Quality GenAI Outputs
⚫Live video is superimposed on a targeted monocular
video clip
⚫Quick succession of images creates high-quality deep
fake videos
⚫Ease of creation means deep fakes can be used to develop more deep fakes
Example of synthetic data in AI training
⚫Synthetic Data Diversity
⚫Includes datasets, images, and audio-visual works
⚫AlphaGo by Google’s DeepMind
⚫Trained on real-world inputs like game rules and
strategic moves
⚫Used repeat simulation to learn and improvise
⚫Identified a strategic blind spot overlooked by
humans
⚫Defeated the best human Go players effortlessly
Legal implications of synthetic data
⚫Role of Synthetic Data in Law
⚫Addresses limitations in data access and analytics
⚫Involves both personal and non-personal information
⚫Data Protection and Privacy
⚫Complies with personal data protection laws
⚫Preserves statistical properties of original data
⚫GDPR Compliance
⚫Offers protection over personal data
⚫Facilitates compliance with GDPR
⚫Technical and Legal Advantages
⚫Retains value without identifying data subjects
⚫Anonymization Tool
Copyright challenges with synthetic data
⚫Synthetic Data and GDPR Compliance
⚫Facilitates compliance with GDPR
⚫Introduces new challenges for copyright and related rights
⚫Impact on Human Authors
⚫Distances the 'romanticised' human author at the core of copyright
⚫Case Study: ChatGPT
⚫Uses large datasets and high computational power
⚫Produces complex works mimicking human-like works
⚫Authors Guild Lawsuit
⚫Questions if LLMs' use of copyright-protected works is
'systematic theft on a mass scale'
⚫OpenAI allegedly used substantial copyright-protected
content without licensing fees
⚫Copyright Concerns
GenAI, synthetic data &
the need to remunerate the human author
⚫Generative AI and TDM Process
⚫Machine reading of large data volumes to discover
patterns
⚫Generates new knowledge and extracts insights
⚫Legal Debates and Lawsuits
⚫Permission of authors for training ML systems debated
⚫Many lawsuits in the USA and one in the UK
⚫Defendants include OpenAI, Stability AI, Meta, and
GitHub
⚫Training and Copyright Infringement
⚫Human Author Remuneration
Input and authorship questions
⚫Value Chain of GenAI
⚫Driven by copyright considerations
⚫Includes content and datasets used for training models
⚫Authorship of AI-Generated Content
⚫Central to the GenAI debate
⚫Example: Stephen Thaler's 'Creativity Machine' and 'A Recent
Entrance to Paradise'
⚫US Copyright Office refused registration due to lack of human author
⚫Legal Decisions
⚫US District Court upheld the requirement for human involvement in
copyright protection
⚫Militsyna's Five-Step Test
⚫Proposes a method to address AI-based output with human creative
contribution
⚫Future Complexities
Access to content and licensing
⚫Types of Content Access
⚫Open source content
⚫Content behind paywalls
⚫Restrictions on Freely Available Content
⚫Creative Commons (CC) licences
⚫Special licence may be required
⚫Licensing for TDM
⚫Specific licence needed for mining content
⚫Firms must fairly license authors' works
⚫EU Regulations
⚫2001 InfoSoc Directive
⚫2019 CDSM Directive
⚫But no global harmonization
Extraction and copying of content
⚫Mining and Extraction of Content
⚫Distinct from content availability
⚫Access does not imply the right to engage in TDM
⚫Relevant Exceptions and Limitations (E&Ls)
⚫Debate on unlicensed TDM coverage
⚫Article 5 of the 2001 InfoSoc Directive
⚫Notable Exceptions
⚫Temporary copies exception (Article 5(1), InfoSoc
Directive)
⚫Scientific research exception (Article 5(3)(a),
InfoSoc Directive)
New and Contentious Issues in
generative AI and TDM debate?
⚫Generative AI's Influence on Creativity
⚫Generative AI systems like GPT-n and BARD generate human-like
outputs
⚫Quantity is unmatched as no author can be so prolific
⚫These systems analyze human creations through TDM
⚫Need for Remuneration of Human Authors
⚫Philosophical foundations of copyright support compensating human
authors
⚫Generative AI's outputs are based on original human works
⚫Copyright as a Privilege
⚫Peukert's view: copyright permits third parties to use works
⚫This view strengthens authors' bargaining positions
⚫Cycles/Loops of Generative AI
⚫AI systems generate outputs that train further AI systems
⚫Impact on Authors' Bargaining Position: not clear
TDM & lawful access to content
⚫Key Issue in Legal Discourse on GenAI
⚫Relationship between TDM and access to content
⚫Cases in the USA allege direct infringement by ChatGPT
⚫EU's Licensing-Driven Approach
⚫Access to lawful content is a pre-requisite for TDM exceptions
⚫Creates additional transaction costs for high-tech industries
⚫Alternative Remedy: Levy on Generative AI Systems
⚫Designed along the lines of Article 8(2) of the Rental and Lending
Rights Directive
⚫Ensures equitable remuneration for human literary and artistic
work
⚫General Levy Proposal
⚫Supported by Frosio, Senftleben, Geiger & Iaia
⚫Distinction between inputs (TDM purposes) and outputs (GenAI
tools)
Copyright and TDM in AI – Exceptions
⚫Objective of Copyright and Potential of AI
⚫New creations emanate from older ones in multiple ways
and through multiple means
⚫Enhance creativity by allowing new generations to use
preexisting works
⚫Role of Exceptions and Limitations (E&Ls)
⚫Balance interests of users and right holders
⚫Recognized as user rights by Canadian Supreme Court in 2004
⚫European Court of Justice (ECJ) echoed similar views in 2019
⚫Design of a Broader E&L Framework
⚫Should it be confined only to Text and Data Mining (TDM)?
Next Class
⚫Generative AI, IP and Training Data and some observations
Artificial Intelligence, Law and Justice
Session 17

Artificial Intelligence and Copyright -Part-IV

Dr. Krishna Ravi Srinivas


Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
⚫We discussed issues including synthetic data
⚫We also highlighted issues on Text and Data
Mining and AI
⚫We listed how this is being addressed and
solutions that are suggested
Impact of Generative AI on Creation
⚫Generative AI's Impact on Creation
⚫Significant shift in creation processes
⚫Blurring of boundaries between human and
machine-driven creation
⚫Authorship Questions
⚫Existence of authorship in AI-generated works
⚫Attribution of authorship in such works
Proposed Four-Step Test
⚫Four-Step Test for Authorship
⚫Step 1: Identify persons involved
in the creation process
⚫Step 2: Determine the type of AI
system used
⚫Step 3: Analyze subjective
judgment exercised by persons
involved
⚫Step 4: Assess control over the
execution of the work
Evolution of AI in Creative Processes
⚫Traditional Computer Programs
⚫Microsoft Paint and Word assist in creating works
⚫Rely on user’s creative input
⚫Advancements in AI
⚫Improved data accessibility and algorithms
⚫Enhanced processing capabilities
⚫AI in Everyday Tasks
⚫Used in autonomous driving
⚫Recommendations in streaming services
⚫AI in Creative Fields
⚫Image production
⚫Transformation of Creative Process
Legal Challenges in Copyright
⚫High Autonomy of AI Programs
⚫Raises questions about human
intervention in authorship
⚫Determining Authorship
⚫Essential for preventing
unauthorized use
⚫Legal action can be taken if
authorship is established
⚫Publication of AI-Generated Works
⚫Applies to both artists and everyday
users
Human Interaction with AI Systems
⚫Human Interaction in AI Systems
⚫AI systems require human interaction to create works
⚫Humans make creative choices in the preparation phase
⚫Preparation Phase Choices
⚫Genre, style, technique, format
⚫Choice of model and input data
⚫Execution Stage
⚫AI systems perform complex actions
⚫Outputs cannot be fully anticipated or explained by
users
⚫Finalization Stage
⚫Humans rework and process draft versions
⚫Further creative choices implemented
Implementation of the Four-Step Test
⚫Identify Humans Involved
⚫Differentiating between developers and third
parties (users)
⚫Determine AI System Used
⚫Classify systems into partially, highly, and fully
autonomous
⚫Consider Subjective Judgment
⚫Free and creative choices by the user
⚫Control Over Execution
⚫Ultimate step in the evaluation process
Legal Certainty and Future Adaptations
⚫Bridging the Gap Between Traditional Authorship and
AI
⚫Promotes legal certainty
⚫Developers as Authors
⚫When working with partly or highly autonomous systems
⚫Third Party as Authors
⚫When using AI systems created by developers
⚫Must work with partly autonomous systems or edit the outputs
of highly autonomous systems in an original way
⚫AI-Generated Work
⚫Initial AI-generated work not considered human
authorship
⚫Edited work may be considered human authorship
⚫Fully Autonomous Systems
⚫Case-by-Case Examination
Fair Outcomes and Case-by-Case Examination
⚫Consistency with EU Copyright System
⚫Results align with the EU copyright framework
⚫Ensures fair outcomes for creators
⚫General-Purpose AI Systems
⚫Works created without significant effort cannot be
attributed to users
⚫Further editing to imbue originality is required for
attribution
⚫Attribution to Developers
⚫Works created using AI systems by developers can be
attributed to them
⚫Case of Mario Klingemann
⚫Creates artworks through AI system development and
utilization
⚫Reasonable to attribute resulting works to him as the
author
Ongoing Legal Refinement
⚫Difficulty in Ascertaining AI
Authorship
⚫Complexity in identifying the
creator of AI-generated works
⚫Future Evolution of AI
Technologies
⚫Continuous advancements in AI
⚫Need for Ongoing Legal
Refinement
⚫Adapting laws to keep up with AI
developments
How Generative AI Models are Trained
⚫ Generative AI Models and Deep Learning
⚫ Generative AI models like OpenAI's GPT series use deep learning.
⚫ Deep learning involves neural networks with multiple layers of nodes.
⚫ Functioning of Deep Learning
⚫ Information is processed at different levels of complexity through deep
layers.
⚫ Early layers identify simple patterns; subsequent layers recognize complex
patterns.
⚫ Learning from Large Data Sets
⚫ AI models learn from large quantities of data.
⚫ Patterns and rules identified during training are used to create new content.
⚫ Data Retention and Memorization
⚫ AI models encode patterns as numerical parameters rather than storing entire
datasets (a toy sketch follows below).
⚫ Generative AI models can sometimes recreate identical or near-identical
copies of training data.
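The toy sketch below (an illustration under invented assumptions, not how GPT-class systems are actually built) shows the two points above in miniature: a tiny two-layer network adjusts its numerical parameters to fit a handful of training examples, and what the trained model retains afterwards is only those few dozen numbers, not the examples themselves.

```python
# Toy sketch of layered learning: a tiny two-layer neural network trained on
# the XOR pattern. Training adjusts numerical parameters (weights); the
# finished model keeps only those numbers, not the training data.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input  -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):                # simple gradient-descent training loop
    h = np.tanh(X @ W1 + b1)            # early layer: simple intermediate patterns
    p = sigmoid(h @ W2 + b2)            # later layer: combines them into a prediction
    grad_out = p - y                    # error at the output
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)  # error passed back to the hidden layer
    W2 -= 0.1 * h.T @ grad_out          # nudge the numerical parameters...
    b2 -= 0.1 * grad_out.sum(axis=0)
    W1 -= 0.1 * X.T @ grad_h            # ...layer by layer
    b1 -= 0.1 * grad_h.sum(axis=0)

preds = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
print("predictions after training:", preds.round(2).ravel())
print("parameters retained:", W1.size + b1.size + W2.size + b2.size)  # 33 numbers
```

Generative models of the GPT family work on the same principle but with billions of parameters rather than a few dozen, which is part of why questions about what they have "memorized" are hard to answer.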
Example of GPT-3 training data
⚫Massive Quantities of Training Data
⚫Generative AI models require extensive
data for training
⚫Example: GPT-3 by OpenAI
⚫Sources of Training Data
⚫Common Crawl dataset
⚫WebText2 dataset
⚫Two datasets of books
⚫English-language Wikipedia
⚫Fine-Tuning with Curated Datasets
⚫Further training on smaller, curated
datasets
⚫Refines model capabilities for specific
domains
AI Developers' Increasing Secrecy
⚫Shift in Transparency
⚫AI developers have become more secretive about
training data
⚫Detailed explanations have been replaced by single
sentence descriptions
⚫Examples of Reduced Transparency
⚫OpenAI disclosed main sources for GPT-3
⚫GPT-4's data described vaguely as a mix of public and
licensed data
⚫Motivations for Secrecy
⚫AI developers have not provided detailed reasons
⚫OpenAI cited competitive landscape and safety
concerns
⚫Sharing details could facilitate replication of models
⚫Detailed information could aid malicious actors
Arguments Against Transparency
⚫Merits and Objections of AI Training Data Secrecy
⚫Preventing rivals from replicating technology without
investing resources
⚫Absence of transparency could facilitate anti-
competitive practices
⚫Concerns Over Dangerous AI Technology
⚫Justifies measures to hinder market entry of competitors
⚫Restricting access to training data may not prevent
dangerous AI tools
⚫Potential Harm from Lack of Transparency
⚫Difficult for regulators to identify harmful or
discriminatory behavior
⚫AI companies have vested interest in preventing data
release
Full Access Approach
⚫Benefits of Training Data Transparency
⚫Multiple advantages for AI development
⚫Arguments against transparency are weak
⚫Achieving Transparency
⚫Various approaches provide different levels of
information
⚫Full Access Approach
⚫Highest degree of transparency
⚫Datasets made fully publicly accessible
⚫Third parties can verify training data
⚫Possible if AI developers retain data post-training
⚫Some developers already provide full dataset access
online
Gated Access Approach
⚫Logistical Challenges of Full Access
⚫Hosting a large repository of training data is difficult
⚫Managing millions of individual works is complex
⚫Copyright Issues
⚫Rightsholders may object to their works being freely
available
⚫Licensing deals do not cover unrestricted access
⚫Personal Data Concerns
⚫Datasets may contain sensitive personal information
⚫Gated Access as a Solution
⚫Users can request access to restricted parts of datasets
⚫Access is granted upon verification of credentials
⚫Logistical Issues with Gated Access
Rightsholders' Perspective On Transparency
⚫Transparency for Rightsholders
⚫Helps establish if a work appears in a dataset
⚫Requires clear explanation of data sources
⚫Training Data Summaries
⚫Should identify individual works
⚫Explain sources like existing datasets or internet
domains
⚫Assessing Likelihood of Use
⚫Rightsholders need to assess if their works were
used
⚫Challenges in Copyright Infringement Claims
⚫Unauthorized use identification is not
straightforward
⚫Varies significantly between jurisdictions
Copyright Issues With AI Training Data
⚫Copyright Protection of Training Content
⚫Most generative AI models use content protected by copyright law
⚫Copyright arises automatically for works meeting minimal
requirements
⚫Protection lasts at least the life of the author plus 50 years
⚫Alternatives to Copyrighted Works
⚫Public domain works are outdated, mostly from before the 1950s
⚫Creative Commons (CC) licensed content is limited
⚫CC licenses may impose 'Share Alike' obligations on derivative
works
⚫Potential of Synthetic Data
⚫AI-generated data could replace real-world data
⚫Experts are divided on the feasibility and timeline of synthetic data
Memorization & Potential
Copyright Infringement
⚫Generative AI Models and Copyright
⚫Models often do not infringe copyright by their mere
existence
⚫Memorization can complicate this view
⚫Memorization and Reproduction
⚫Models can reproduce verbatim or near-verbatim portions
of training data (a naive detection sketch follows below)
⚫Potential for courts to find models 'contain' works
⚫Infringement through Output
⚫Direct reproduction of training data could be infringement
⚫Non-verbatim output might still infringe if it contains
recognizable protected elements
⚫Other Exclusive Rights
⚫Output might infringe rights to authorize translations,
adaptations, or public communication
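As a loose illustration of how "verbatim or near-verbatim" reproduction might be spotted, the naive sketch below checks whether a model's output shares long word sequences with a known training text. The sample sentences and the eight-word threshold are arbitrary assumptions; real memorization audits are considerably more sophisticated.

```python
# Naive sketch: detect long verbatim overlaps between a training text and a
# model's output by comparing word n-grams. Purely illustrative.
def ngrams(text: str, n: int) -> set[str]:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlapping_passages(training_text: str, output_text: str, n: int = 8) -> set[str]:
    """Return every n-word sequence that appears verbatim in both texts."""
    return ngrams(training_text, n) & ngrams(output_text, n)

# Hypothetical examples (the training snippet is a public-domain sentence).
training_text = ("It is a truth universally acknowledged that a single man in "
                 "possession of a good fortune must be in want of a wife.")
model_output = ("The model wrote: it is a truth universally acknowledged that "
                "a single man in possession of wealth seeks a partner.")

matches = overlapping_passages(training_text, model_output, n=8)
print(matches or "no long verbatim overlap found")
```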
Reproduction Right &
Copyright Exceptions
⚫Reproduction of Training Data
⚫Training data must be reproduced at least once during
the training process
⚫Considered a prima facie infringement of the right of
reproduction
⚫Debate on Copyright Protection
⚫Some argue temporary electronic reproduction and
'non-expressive' uses should not be protected
⚫If accepted, most AI training copying would fall outside
copyright protection
⚫Current Assumptions and Legal Stance
⚫Assumption that reproductions during AI training are
copyright-relevant
⚫Permission from rights holder required unless an
exception applies
Challenges in clearing
rights for training data
⚫Difficulty in Clearing Rights for AI Training
Data
⚫Extremely large number of works involved
⚫High transaction costs for identifying and
negotiating with rightsholders
⚫Expense of Paying Licence Fees
⚫Prohibitive costs associated with each work
⚫AI Developers' Approach
⚫Using content without identifying or seeking
permission
⚫Basis for ongoing training data litigation
Varied Impact of Training Data Transparency
⚫Varied Impact of Copyright Law
⚫Legal systems differ in their treatment of
copyrighted materials in training data
⚫Some require explicit permission, others
do not
⚫Transparency Requirements
⚫Impact varies between jurisdictions
⚫US fair use vs. EU Article 4(3) CDSM
exception
⚫Pro-Rights holder Advocacy
⚫Assumption of similar outcomes across
jurisdictions
⚫Flawed reasoning due to differing legal
contexts
Training Data Transparency in the AI Act
⚫Impact of Transparency Requirements
⚫Determined by existing copyright laws
⚫Limitations without revising those laws
⚫Transparency Provisions in AI Act
⚫Facilitate opt-out mechanism in Article 4(3) of CDSM Directive
⚫Provide information on sources of training data
⚫Unlikely to improve rightsholders' position due to flaws in opt-out
⚫Article 53(1)(c) Requirements
⚫Providers must comply with Union law on copyright and related
rights
⚫Identify and comply with reservation of rights in Article 4(3) of
Directive (EU) 2019/790
⚫Article 53(1)(d) Requirements
⚫Providers must make publicly available a detailed summary of
training content
Benefits & Limitations of
Transparency Requirements
⚫Benefits of Training Data Transparency
⚫Clear advantages in understanding data usage
⚫Promotes accountability and trust
⚫Dependency on Local Copyright Law
⚫Varied outcomes across different jurisdictions
⚫Local laws significantly influence the impact
⚫Challenges with Copyrighted Materials
⚫Complex issues remain unresolved
⚫Transparency alone is not a complete solution
Need For Broader Policy Measures
⚫Importance of Training Data Transparency
⚫Advantages of transparency requirements
⚫Need for Policy Focus Beyond Transparency
⚫Amending copyright law to balance interests
⚫Varied approaches based on jurisdiction
⚫Protecting Rights holders vs. Promoting AI
Development
⚫Further measures for rights holders in some cases
⚫Ensuring copyright law does not hinder AI in
others
⚫Engagement with Key Stakeholders
⚫Determining effective policies for specific contexts
⚫Frequent Reassessment of Policies
Future of copyright law in the context of AI
⚫Challenges posed by Generative AI
⚫Comparison with past technological advances can help in crafting policies, but some issues are novel
⚫The pacing problem in technology and regulation is an example
⚫Not just copyright – IP and AI – IP for AI?
⚫Need for Adaptation
⚫Copyright law must adapt to new technologies
⚫Importance of decisive action by policymakers
⚫Crafting Effective Policy
⚫Difficulty in regulating new technology during early stages
⚫Regulation becomes much harder later, once the technology is widely deployed
⚫Importance of understanding societal impacts
⚫Issue of Entrenchment
⚫Rapid integration of generative AI by major tech companies
⚫Training Data Transparency
Balancing Innovation & IP Protection
⚫Balancing Innovation and Intellectual Property
Protection
⚫Exploring the impact of generative AI on copyright laws
⚫Examining the need for updated copyright exceptions
⚫Generative AI and Copyright
⚫Understanding the challenges posed by generative AI
⚫Identifying potential solutions for copyright protection
⚫Future Directions
⚫Proposing new frameworks for copyright exceptions
⚫Ensuring a balance between innovation and protection
Next
⚫Session 18 and 19 will be on AI and Patents
Artificial Intelligence, Law and Justice

Session 18

AI and Patents / Patenting -Part-I


Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
⚫In the last session we completed the
discussion on Copyrights and AI and
highlighted emerging issues
⚫In the four sessions we touched upon and discussed a range of emerging issues in AI and copyright
Protection under Patent Law
⚫Purpose of Patent Law
⚫Rewards investment in research and development
⚫Provides temporary monopoly rights to patentees
⚫Scope and Duration of Patent Rights
⚫Limited to 20 years from application date
⚫Subject to annual fee payments
⚫Includes direct and indirect infringement claims
⚫Exceptions for experimental and noncommercial
use
⚫Eligibility Conditions for Patents
⚫Excludes abstract ideas like pure algorithms
Patentability Conditions
⚫Novelty Requirement
⚫Invention must not be available to the public at the date
of filing
⚫Known as the 'state of the art'
⚫Inventive Step
⚫Invention must not be obvious to a person skilled in the
art (PSA)
⚫Based on the state of the art
⚫Industrial Applicability
⚫Invention must be usable in an industrial context
⚫Challenges in Inventiveness Analysis
⚫Only technical features contributing to solving a technical
problem are considered
⚫Nontechnical features, like abstract algorithms, are
excluded
Disclosure Requirements
⚫Patent Bargain and Disclosure Requirements
⚫Patentees must disclose inventions clearly and
completely
⚫Disclosure must enable a PSA to carry out the
invention
⚫Black Box Nature of AI Technology
⚫AI systems often lack transparency in their operations
⚫Difficulty in explaining how AI systems reach results
⚫Impact on AI-Related Inventions
⚫Some AI inventions may not meet disclosure
requirements
⚫Experts can disclose AI system structure and principles
⚫Patent offices may accept this level of disclosure
⚫Incentives for Explainable AI
⚫Alternative Protection Methods (Trade Secret)
Patent Application Trends
⚫Uncertainty in Patenting Process
⚫Difficulty in predicting outcomes
⚫Does not deter prospective patentees
⚫Rising Number of AI-Related Patent Applications
⚫Over 300,000 applications filed since the
1950s
⚫Sharp increase in the past decade
⚫More than half published since 2013
⚫Trend expected to continue
Patent Trends
⚫Between 2010 and 2023, the number of AI
patents has grown steadily and significantly,
ballooning from 3,833 to 122,511. In just the
last year, the number of AI patents has risen
29.6%.
⚫As of 2023, China leads in total AI patents,
accounting for 69.7% of all grants
Source: Stanford AI Index Report 2025
Patent Granted Trends
AI Inventorship
⚫Existence of Inventive Machines
⚫Inventive machines are rarer than AI systems in
creative endeavors
⚫Progress in this area is undeniable
⚫DABUS: The Creativity Machine
⚫Developed by physicist Dr. Stephen Thaler
⚫Neural network-based system simulating human
creativity
⚫Patent Applications for DABUS' Inventions
⚫Filed in 2018 for two inventions
⚫DABUS listed as the inventor
⚫Dr. Thaler obtained rights as successor in title
⚫Test Case for AI Inventorship
⚫Patent applications offer a test case for AI inventorship
Legal Perspective on AI Inventorship
⚫AI Systems and Legal Personality
⚫AI systems cannot have ownership rights
⚫AI systems cannot be considered employees
⚫AI Systems as Inventors
⚫AI systems cannot be considered inventors under current law
⚫Confirmed by European Patent Office, UK Supreme Court,
and German Federal Supreme Court
⚫Inventor's Right of Attribution
⚫Inventors have the right to be mentioned as such
⚫Extending inventorship to AI systems may render this
right meaningless
⚫Human Requirement in Patent Law
⚫Inventors must be human
⚫Legislative provisions imply the need for natural personhood
Ownership of AI-Generated Output
⚫Allocation of Ownership Rights
⚫IP law does not recognize AI systems as authors or
inventors
⚫Human authorship or inventorship is questioned with
AI intervention
⚫Human Control and IP Rights
⚫IP rights can protect creators if a human commands
and controls the AI
⚫Lack of sufficient causal relationship weakens the
argument for human authorship
⚫Complexity of AI Systems
⚫Black box nature of AI complicates establishing
sufficient control
⚫Determining causal links between human actions and
AI output is challenging
Stakeholders in AI-Generated Output
⚫Role of AI Creators
⚫Programmers, designers, and producers involved
⚫Substantive role in AI-generated output
⚫Allocation of Rights
⚫Unpredictable nature of AI output
⚫AI creator's choices define the system, not the final
output
⚫Autonomy of AI Algorithms
⚫Increased autonomy strengthens the argument
⚫Programmers can tweak algorithms to shape
output
IP Infringement and AI Development
⚫Data Requirements for AI Training
⚫Significant amount of data needed
⚫Potential IP protection on training data
⚫Authorization and Exceptions
⚫Reproduction and communication require authorization
⚫Relevant exceptions and limitations to copyright
⚫Legal Proceedings
⚫Pending cases on internet scraping for generative AI art
tools
⚫EU AI Act Provisions
⚫Text and data mining exceptions for general-purpose AI
models
⚫Potential opt-out for right holders
⚫Public availability of detailed summary about training
content
Impact of AI on Key IP Concepts
⚫Inventiveness Standard under Patent Law
⚫Centers around the 'person skilled in the art'
(PSA)
⚫PSA's knowledge and skill depend on the
technology field
⚫Effect of Inventive Machines
⚫PSA standard may evolve to include inventive
machines
⚫Raises the bar for inventive step and patentability
⚫Potential Consequences
⚫Could shake the foundations of the patent system
⚫If everything becomes obvious, no room for
patentable inventions
Use of AI in IP Practice
⚫Increased Use of AI in IP Sector
⚫AI systems process and analyze vast amounts of data
quickly and efficiently
⚫Broad range of opportunities offered by AI technology
⚫WIPO's Adoption of AI
⚫Automatic categorization of patents and trademarks
⚫Prior art searches and machine translations
⚫Formality checks
⚫Benefits to Registrants
⚫AI suggests relevant classes of goods and services for
trademarks/designs
⚫AI aids in patent drafting and screening registers for
existing registrations
⚫Determines similarity of trademarks/designs and
evaluates prior art
AI and IP Law Interaction
⚫Traditional Scope of Intellectual Property (IP) Law
⚫Historically focused on human creations
⚫Technological Evolution in AI
⚫Challenges the anthropocentric view of IP law
⚫Contentious Questions
⚫Should authorship and Inventorship extend to AI systems?
⚫Who should acquire ownership rights for AI-generated content?
⚫Arguments vs. Counter-Arguments
⚫Valid points exist for and against extending IP rights to AI
⚫Middle position or muddled position?
⚫Preservation of IP Law Foundations
⚫Need for caution against hastily altering IP law
⚫Do we know enough to bring radical changes
⚫Can we anticipate technology developments
⚫Look before you leap lest later you weep
AI's Impact on Patent Law Debates
⚫AI Outperforming Human Intelligence
⚫AI excels in narrow tasks
⚫Expectations of AI matching or exceeding human
intelligence
⚫Slower Pace of AI-Driven Transformation
⚫Transformation in innovation processes slower than
predicted
⚫Less visible impact on patent law and policy
⚫Increase in AI Technology Patenting
⚫Dramatic rise over the past twenty years
⚫Within largely unchanged patent law framework
⚫Ongoing Questions in Intellectual Property
⚫Can a machine-generated invention be patented?
⚫Should a machine-generated invention be patented?
Generative AI’s Impact on IP &
Innovation
⚫Introduction to Generative AI
⚫Public awareness through tools like ChatGPT
⚫Generates new content based on learned data
⚫Definition of Generative AI
⚫AI algorithms generating new outputs from trained
data
⚫Transformative Features
⚫Rapid self-improvement at scale
⚫Generation of new outputs using existing data
⚫Amplification and extension of human capabilities
⚫Economic Impact
⚫Potential to transform economic activity unpredictably
⚫Promises or threats depending on perspective
Four Responses to AI
⚫Traditional Human-Centered Notion of
Invention
⚫Patent system provides incentives to people to
invent and innovate
⚫Proposals for Patent Law Responses to AI-
Generated Inventions
⚫No unique patent law response needed for
transformative emerging technologies
⚫Patents should remain tied to human inventors
⚫Pragmatic responses to AI disruption with modest
changes to existing patent law
⚫Arguments for AI neutrality and patenting AI-
generated inventions
Next
⚫In the Next Session we will deal with
inventorship and infringement in light of AI
Arguments for No Change To Patent Law
⚫Patent Law's Response to Emerging
Technologies
⚫Patent law is designed to be technology and
industry neutral
⚫Historically accommodated technologies like
software, biotechnology, and genetics
⚫AI and Patent Law
⚫AI has fit within existing patent law frameworks
⚫AI may change the cost of innovating and the role
of human inventors
⚫Central inquiry remains incentivizing human
invention
Human-Centered Nature Of Invention
⚫Focus on Human-Centered Invention
⚫Importance of human centrality in the patent system
⚫Justifications for human-centered patent law
⚫Professor Gervais' Argument
⚫Goal of patent law is to foster human development
⚫Patent law should support and encourage human
invention
⚫Guidance by Human Progress
⚫IP rights in AI should be guided by human progress
⚫Not just the acceleration of AI or growth of Silicon
Valley fortunes
⚫Support for Human-Centered Progress
⚫Human-centered IP system supports progress of
science and useful arts
Pragmatic Solutions for
AI-generated Inventions
⚫Disruptive Impact of AI-Generated
Invention
⚫Potential to disrupt the current patent
system
⚫Patent law designed around human
inventors
⚫Need for Pragmatic Solutions
⚫Ensure continued functioning of the
patent system
⚫Address inventions without a human
inventor
⚫Varied Proposals
⚫Largest and most varied bucket of
proposals
⚫Different approaches to manage the
disruption
Patentability of AI-generated
Inventions
⚫AI-Generated Inventions and Patentability
⚫Some believe AI-generated inventions should be
patentable
⚫Approaches vary from modest changes to patent
law to radical shifts
⚫Radical Approach
⚫Machines as inventors for patentability purposes
⚫Less Radical Approaches
⚫Adjustments for shared claim to invention
⚫Broadened view of conception
⚫Inventorship attributed to people using AI tools
⚫AI as co-inventor or work for hire
Regulatory Responses and Competition
Law
⚫Industry Concentration and Control
⚫Key AI assets include compute power, data, and human
capital
⚫Trends in industry concentration raise concerns
⚫Broader Regulatory Response Needed
⚫AI disruption requires more than patent law changes
⚫Antitrust law and governance principles of open access
and data sharing are essential
⚫Innovation-Friendly Macrostructure
⚫Patent law functions effectively with an innovation-
friendly data economy
⚫Macrostructure plays a pivotal role in the innovation
ecosystem
Artificial Inventors Project
& Legal Test Cases
⚫Historical Context
⚫Sporadic legal attempts to patent thinking machines
⚫Open question on patentability of machine-generated
inventions
⚫Artificial Inventors Project
⚫Formed by Professor Abbott and pro bono patent lawyers
⚫Started filing legal test cases in 2019
⚫Goal: Promote discussion and guidance on AI-generated
discoveries
⚫Patent Filings
⚫Filed patents for a warning light and a food container
⚫Listed the machine DABUS as the inventor
⚫Legal Challenges
⚫USPTO rejection based on inventorship requirement
Next
⚫In the next session we will discuss inter alia,
Generative AI and Patent/Patenting
Artificial Intelligence, Law and Justice
Session 19

AI and Patents / Patenting- Part -II


Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
⚫In the previous session we started with
fundamentals of patent protection and criteria
⚫We further discussed the challenges AI poses
to patenting in theory and practice
Robot Scientists: Adam and Eve
⚫Breakthrough Discovery in 2015
⚫Eve made a significant discovery in the fight against
malaria
⚫Eve is a Robot Scientist, not a human being
⚫Development of Robot Scientists
⚫Designed and built by a team led by Prof. Ross King
⚫Adam was the predecessor of Eve
⚫Field of automated drug design and synthetic biology
⚫Role of Artificial Intelligence
⚫Robot Scientists use AI for automated drug discovery
⚫Complex mechatronic systems involved
⚫Patent Law Challenges
⚫Questions about the patentability of AI
Drug Development
⚫Importance of Adam and Eve in Drug Research
⚫Drug development takes 10-14 years
⚫Costs range from USD 1 to 2.6 billion
⚫Any simplification or acceleration is welcome
⚫Drug Development Process
⚫Isolating a target molecule or cell structure
⚫Searching for substances with desirable effects
(hits)
⚫Constructing promising substances efficiently
⚫Optimizing results for effectiveness and efficiency
⚫Role of Robot Scientists
⚫Use AI to optimize substance selection
⚫Limitations of Robot Scientists
Antenna Design
⚫Challenges in Antenna Design
⚫Achieving specific radiation characteristics is
complex
⚫Designing antennas is difficult even for experts
⚫AI Assistance in Antenna Design
⚫AI simplifies the design process
⚫NASA has used AI for antenna design for many
years
⚫Notable AI Achievements
⚫In 2006, a genetic programming algorithm created
a unique antenna design
⚫The design resembled a twisted paper clip
⚫AI solved the problem without human intervention
Automation & Autonomy
⚫Robot Scientists like Eve
⚫Located between human and machine autonomy
⚫Antenna Design
⚫AI systems act more autonomously
⚫Machines capable of finding autonomous solutions for
specific tasks
⚫Closer to autonomous invention than drug development
⚫Drug Development
⚫Lags behind antenna design in machine autonomy
⚫Requires significant human oversight and interaction
⚫Potential for increased machine autonomy with
sufficient data
The Substantive Aspect
⚫Substantive Aspect of Inventorship
⚫Interpret inventorship broadly to include task
formulation and result acknowledgment
⚫AI inventions can be protected under current law
⚫Natural Person as Inventor
⚫Patent offices in various countries are considering
machine-made inventions
⚫Example: DABUS, a machine named as inventor in
a patent application
⚫Human Contribution in AI Inventions
⚫Formulating the task and acknowledging the result
may justify inventorship
⚫Arguments for Broad Interpretation
The Substantive Aspect
The Formal Aspect
⚫Formal Requirements in Patent Applications
⚫Patent offices can grant patents if formal requirements are
met
⚫Example: EPO application left inventor field empty
⚫Machine as Inventor
⚫Applicants named a machine (DABUS) as inventor
⚫Machine naming does not meet EPO formal requirements
⚫Application refused due to lack of natural person inventor
⚫Intermediate Findings
⚫No rights granted if machine is named as inventor
⚫Rights can be granted if a natural person is named
⚫Future of Inventorship
⚫Requiring human inventors is outdated
Legal Assessment
⚫Formal Requirements for Inventorship
⚫Machines cannot be named as inventors
⚫Natural persons must be named to grant
patent rights
⚫Issues with Current Inventorship Concept
⚫Human inventors requirement is outdated
⚫Technological progress increases AI autonomy
⚫Need for Policy Adjustments
⚫Current concept is not future-proof
⚫But what the alternative should be is not clear
Overcoming the Human Invention
Requirement
⚫Abandoning the Human Inventor Requirement
⚫Proposals for non-human inventorship have been made
⚫Assigning inventorship to the machine's owner is
impractical
⚫Challenges with Machine Inventorship
⚫Machines are not necessarily physical devices
⚫Machines lack legal status to be inventors
⚫Patent offices cannot grant rights to machines
⚫Alternative Solutions
⚫Companies as original right holders
⚫Control over the machine as a criterion
⚫What Amendments Are Needed?
⚫From clarity in theory to consistent rules
⚫Patching up or transformation?
International Treaties
⚫TRIPS Agreement
⚫Does not address moral rights of the inventor
⚫Mentions inventor only in Art. 29(1) regarding
disclosure requirement
⚫TRIPS gives flexibility in some aspects like defining
invention
⚫Certain inventions can be denied patents but countries
differ
⚫TRIPS is technology neutral
⚫Paris Convention (PC)
⚫Art. 4ter PC expresses moral rights of the inventor
⚫PC does not define 'inventor', leaving it to national laws
⚫National laws can include companies as inventors
Who Infringes?
⚫Autonomous Infringements by AI Systems
⚫Issues in attributing infringement to AI
⚫Legal implications of AI autonomy
⚫Who Infringes?
⚫Difficult, as an AI system may involve multiple
parties
⚫Attribution and Intention
Attribution of AI Acts In Patent Law
⚫Attribution of Autonomous Acts in AI Systems
⚫Debate in general tort law
⚫Direct Use of Patented Invention
⚫Autonomy of AI does not exonerate user
⚫Equating AI actions to human actions
⚫Responsibility for Patent Infringement
⚫Liability does not require personal fulfillment
⚫Facilitation or assistance in fulfillment
⚫Knowledge and Reasonable Effort
⚫Liability for supporting infringing acts
⚫Reasonable effort to obtain knowledge
User Liability For AI System Actions
⚫Setting a Cause for Infringement
⚫Alleged infringer must prevent infringement by
third parties
⚫Failure to prevent can establish liability
⚫Insufficient Measures
⚫Any reproachable cause of infringement suffices
⚫Includes insufficient measures to prevent third-
party infringements
⚫AI and Patent Use
⚫Equating AI use with third-party use
⚫User remains responsible for insufficient
precautions against AI infringements
Insufficient Precautions Against AI
Infringement
⚫Definition of Insufficient Precaution
⚫Relates to infringement by a third party
⚫Applies in cases of non-intentional action
⚫Conditions for Insufficient Precaution
⚫Contribution to causing the infringement
⚫Breach of a legal obligation
⚫Legal Obligation
⚫Serves to protect the infringed patent right
⚫Observance would cease the contribution to
causation
Legal Obligations & Infringement
Prevention
⚫Existence of Legal Obligation
⚫Depends on the weighing of all interests
concerned
⚫Decisive factor: Reasonableness of expected
actions
⚫Interplay Between Protection & Reasonableness
⚫Injured party's need for protection
⚫Reasonableness of obligations to examine and
act
Conclusion
⚫AI asks fundamental questions on Patent
Law
⚫From Human Centric to Machine Centric?
⚫More questions will arise as technology
advances
⚫But arriving at a broader agreement globally
is needed
⚫At least on fundamental issues and resolving
them
Next
⚫The Next Session on AI Ethics
Session 20

AI Ethics

Dr. Krishna Ravi Srinivas


Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Definition & Need
 Definition of AI Ethics
 Ethical reflection on issues related to AI
 Need for AI Ethics
 Examples of ethical questions raised by AI
technologies
 Student using AI to write a paper
 Self-driving car making decisions that could harm
humans
 Autonomous weapons systems in warfare
 Fascination with AI
 AI raises ethical questions by its very idea
 Need for a sub-field of ethics focused on AI
Inclusive Definition
 Variety of Definitions for AI
 Multiple ways to define or explain AI
 Exclusive approach favors one definition
 Inclusive Approach to AI
 All definitions count as different forms of AI
 AI technologies perform tasks that would otherwise
require natural intelligence
 Types of AI
 Narrow AI: Performs a narrow set of tasks
 General AI: Performs a wide set of tasks
 Artificial General Intelligence (AGI)
 Super-Intelligent AI
Narrow vs. Broad Conceptions
 Narrow Conception of Ethics
 Focuses on what actions or practices are impermissible or
permissible
 Determines what should be forbidden or tolerated
 Ethical arguments are for or against conclusions about
impermissibility or permissibility
 Broad Conception of Ethics
 Includes questions about living a good human life and
being a good person
 Explores important values and their interrelations
 Inquires into well-being and human flourishing
 Reflects on living a meaningful life and having
meaningful relationships
 Considers what makes certain forms of work more
meaningful than others
Negative & Positive Approaches
 Narrow Perspective on AI Ethics
 Focuses on permissible and impermissible uses of AI
 Examples include debates on banning autonomous weapons
 Responsibility and Harm
 Discusses who should be held responsible for AI-related
problems
 Commonly associated with negative impacts of AI
 Broader Perspective on AI Ethics
 Considers AI's impact on human well-being and flourishing
 Reflects on meaningful lives, work, and relationships
 Positive and Negative Ethical Questions
 Negative: Focus on harms, injustices, and responsibility
 Importance of a Broader Ethical Approach
Responsibility Gaps
 Inclusive Definition of AI
 Technologies performing tasks using intelligence
 Imitating or simulating intelligence
 Artificial agents with or without intelligence
 Ethical Questions Raised by AI
 Tasks with ethical components
 Obligations and responsibilities
 Responsibility Gaps
 Potential gaps in responsibility
 Question of who is responsible when tasks are
outsourced to AI
Impacts and Implications
 Human Work
 AI taking over tasks we find meaningful can raise issues on
nature of work and need for human responsibility
 Example: Doctors' work replaced by AI-generated medical diagnoses
 Less room for human responsibility and engagement as AI
takes over more tasks
 Opportunity Gaps and Employment
 More AI involvement reduces opportunities for human
employment; large-scale displacement in the long run
 Counter Argument
 AI can take over meaningless tasks, freeing humans for
meaningful activities
 Danaher (2019) argues AI can act as a meaning-booster
 Automation can enhance opportunities for meaningful
human life
Extension of Human Capabilities
 Extending Human Agency through AI
 AI technologies as a means to augment human
capabilities
 Viewing human agency as extending into AI
technologies
 Acquiring New Abilities
 Creating new AI technologies to extend capacities
 Potential to achieve more good, knowledge, and
beauty
 Engaging with the Good, True, and Beautiful
 AI systems enhancing our interaction with the world
 Enabling meaningful and new ways of engagement
Intelligence & Moral Agency
 AI Independence and Intelligence
 AI technologies may operate independently from human agency
 Possibility of AI technologies becoming truly intelligent
 Intelligence in AI could lead to ethical obligations and rights
 AI as Moral Agents
 AI technologies could be considered moral agents
 Floridi argues AI agents can have morally significant consequences
 Intelligent AI agents would raise pressing moral agency questions
 AI Imitation and Deception
 AI technologies may imitate intelligent behavior
 Potential for deception and manipulation of human users
 Turing's test examines if a machine can mimic human interaction
Control Problem
 Control Problem in AI Technologies
 Concerns about AI performing at or above human level
 Difficulty in controlling super-intelligent or general AI
 Even narrow AI applications raise control concerns
 Historical Context
 Turing's early worries about AI control in 1950
 Control problem predates the term 'artificial intelligence'
 Value Alignment in AI
 Importance of aligning AI goals with human values
 Technical challenge of ensuring AI supports human
values
 Normative challenge of deciding which human values to
align with
Questions About Existing Technologies
 Ethical Questions Related to Existing AI Technologies
 Numerous examples of ethically important or
controversial situations
 AI technologies performing tasks that humans would
otherwise execute
 Creating gaps with respect to responsibility and
meaningfulness
 Ethical Questions Based on Perceptions of AI
 Assumptions people make about AI technologies
 Importance of discussing these perceptions, whether
accurate or not
 Ethical Questions for Future AI Technologies
 AI technologies that do not yet exist but may come into
existence
 Importance of considering potential future ethical issues
Questions About Future Technologies
 Artificial General Intelligence (AGI) and Super-Intelligence
 AGI might be developed with technological breakthroughs
 Super-intelligent AI is controversial and debated among
researchers
 Potential Future AI Technologies
 Range of conceivable AI technologies not yet in existence
 Importance of reflecting on ethical issues before they arise
 Ethical Reflections on AI
 David Chalmers' work on sentient AI
 Reflecting on AI as potentially intelligent agents
 Historical Reflections on AI
 Turing's early reflections on control over thinking machines
 Aristotle's thoughts on intelligent tools and human labor
Limits & Future of AI
 AI Explosion Explained
 Attributed to new troves of data and increased
computation speeds
 Less about new algorithms
 Mining Past Solutions
 Strategy involves using past solutions
 Improvements may plateau
 Shared Boundary of Knowledge
 Artificial and human intelligence will share similar
boundaries
 Boundary will continue to expand
AI & Legal Accountability
 Purpose of Law
 Compensate for errors or misfortune
 Maintain social order by dissuading wrong actions
 Legal Accountability
 Law designed to hold humans accountable
 Corporations held accountable through real human
control
 AI systems cannot be held accountable like humans
 Evolution of Legal Principles
 Law coevolved with human societies
UNESCO AI Ethics
 Recommendation on the ethics of
artificial intelligence
UNESCO AI Ethics
 Principles
• Proportionality and Do No Harm
• Safety and security
• Fairness and non-discrimination
• Sustainability
• Right to Privacy, and Data Protection
• Human oversight and determination
• Transparency and explainability
• Responsibility and accountability
UNESCO AI Ethics
 Principles
 Awareness and literacy
 Multi-stakeholder and adaptive governance
and Collaboration
 11 Policy Areas
 Pros: Comprehensive, accepted by states
 Cons: Difficult to implement, values and
priorities vary
Global AI Ethics
 Global Scale of AI
 Challenges connected to global AI
 Debate on what constitutes the global
 Global Communication Standpoint
 Constructing knowledge contextually and
historically
 Engaging with power, relationality, and comparison
 Situated Ethics Approach
 Scrutinizing tensions between global AI ethics and
local perspectives
 Focus on differences in social, cultural, political,
economic, and institutional positionality
 Balancing Local and Universal Values
Issues and Concerns
 Lack of clarity
 The ethical principles mean different things to
different organizations and institutions
 Translating them into actionable points is difficult
 Lack of guidelines on realizing them
 Linkage with governance is vague or not clear
Ambiguous AI Ethics?
 Ticking the box approach
 AI ethics is reduced to ticking boxes and nice
slogans or rhetoric
 No legal backing or binding agreement
 Issues in translating general principles into
sector-specific guidelines
 There is overlap with Responsible AI, but the
linkages are vague or not clear
Between Soft Law & Hard Law
 AI ethics as soft law
 Acceptable, flexible approach
 Consensus on need
 Contextual approach
 AI ethics as part of regulation
 Pros and Cons
 AI ethics and the governance of AI: indicators,
standards, assessing policies and institutions
Literature (Selected)
 Mager, A., Eitenberger, M., Winter, J., Prainsack, B., Wendehorst,
C., & Arora, P. (2025). Situated ethics: Ethical accountability of
local perspectives in global AI ethics. Media, Culture &
Society, 0(0). https://doi.org/10.1177/01634437251328200
 Maclure, J., Morin-Martel, A. AI Ethics’ Institutional Turn. Digit.
Soc. 4, 18 (2025). https://doi.org/10.1007/s44206-025-00174-x
 Oxford Handbook of Ethics of AI. (Chapter 1)
 What Is This Thing Called the Ethics of AI and What Calls for It?
Sven Nyholm, in Handbook on the Ethics of Artificial Intelligence,
edited by David J. Gunkel
 Kijewski, S., Ronchi, E. & Vayena, E. The rise of checkbox AI ethics:
a review. AI Ethics (2024). https://doi.org/10.1007/s43681-024-
00563-x
 Journal AI Ethics (Springer)
Next
 AI Ethics in Law and Justice
Session 21

AI Ethics and Law and Justice


Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
 In the previous session we discussed AI ethics,
 why it is necessary and how to implement it
 We also discussed various perspectives on AI
ethics including those questioning the
universal values vs. the reality of diverse
perspectives
 We introduced UNESCO’s Principles of Ethics
and the view that AI ethics has become a
matter of ticking boxes
Ethics, AI, and Law
 Core Legal Values
 Equal treatment under the law
 Public, unbiased, and independent adjudication
 Justification and explanation for legal outcomes
 Procedural fairness and due process
 Transparency in legal substance and process
 Potential Negative Impacts of AI
 Increased bias against certain social groups
 Elevating efficiency over due process
 Potential Positive Impacts of AI
 Existing ethical issues will remain
 AI will add more issues and concerns
Definition and Examples
 Definition of Artificial Intelligence
 No universally agreed-upon definition
 Useful working description: technology
automating tasks requiring human
intelligence
 Examples of AI Outside Law
 Playing chess
 Driving cars
 AI in Legal Domain
 Prediction of legal outcomes
 Legal analysis of factual situations
 Human Cognitive Activities
 Reasoning
 Judgment
Criminal Justice, AI and Ethics
 Role of AI in Criminal Justice
 Judges use AI to assess likelihood of reoffending
 AI assists in bail and sentencing decisions
 Traditional vs. AI-Assisted Assessments
 Judges traditionally rely on witness testimony, criminal history, and
intuition
 AI uses machine learning algorithms to predict reoffending
 Creation of AI Prediction Software
 Data scientists use historical data to identify predictive factors
 Private companies develop and license software to the government
 Data Sources and Predictive Factors
 AI Model and Risk Scores
 Judgment and Subjectivity in AI Development
 Ethical issues are inevitable and have to be considered (a simplified illustrative sketch of such a risk-scoring model follows below)
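The sketch below is a deliberately simplified, entirely hypothetical illustration of the pipeline just described: invented historical records are used to fit a model, and the model emits a numeric risk score of the kind a judge might be shown. The features, labels, and weights are all assumptions for illustration; this does not describe any real risk-assessment product. Note in particular how bias baked into the historical labels is learned back by the model.

```python
# Hypothetical sketch of a recidivism "risk score" model. All data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Invented historical features: prior arrests, age, and a crude neighbourhood code.
prior_arrests = rng.poisson(1.5, n)
age = rng.integers(18, 60, n)
neighbourhood = rng.integers(0, 2, n)   # proxy feature correlated with policing patterns

# Invented "re-offended" labels. NOTE: if arrests are recorded more heavily in
# some neighbourhoods, that bias sits inside the labels and is learned back.
logit = 0.6 * prior_arrests - 0.03 * age + 0.8 * neighbourhood - 0.5
reoffended = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([prior_arrests, age, neighbourhood])
model = LogisticRegression().fit(X, reoffended)

# The numeric "risk score" shown for a new (invented) defendant.
defendant = np.array([[2, 23, 1]])
score = model.predict_proba(defendant)[0, 1]
print(f"predicted risk of re-offending: {score:.0%}")
print("learned weights (note the proxy feature):", model.coef_.round(2))
```

Even this toy version shows where the ethical questions enter: the choice of features, the source of the labels, and the decision to express the result as a single percentage are all human design choices.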
Ethical Issues, AI & Decision Making
 Equal Treatment Under the Law
 Core value in legal systems
 Decisions should be based on law and facts, not individual
characteristics
 AI systems may disproportionately harm or benefit certain
social groups
 Bias in AI Systems
 Structural inequalities reflected in data
 Biased police arrest data can skew AI predictions
 Low-income individuals may be unfairly denied bail
 Design Decisions in AI Systems
 Subjective choices by AI creators
 Unintentional or intentional biases
 Comparison to Human Decision-Making
 AI ethics is thus inevitably needed to address these issues
Equal Treatment Under the Law
 Core Value of Equal Treatment
 Legal decisions should be based on law and facts
 Socioeconomic, political, racial, gender backgrounds should not
influence decisions
 Concerns with AI Systems
 AI may disproportionately harm or benefit certain social groups
 Existing structural inequalities may be reflected in AI data
 Example of Biased Data
 Police arrest data may be biased
 AI may predict higher re-offense rates for low-income individuals
 Subtle Biases in AI Design
 Design decisions can favor certain groups
 Comparison to Human Judges
 So AI ethics is needed to address these issues
Transparency and Explanation
 Interpretability Problem in AI Systems
 Some machine learning techniques are easy to understand (e.g.,
regression, decision trees); a small illustrative sketch follows at the end of this slide
 Neural-network and deep learning approaches are complex and
hard to interpret
 AI models may be accurate but not understandable to humans
 Transparency in AI Decision-Making
 AI systems can be deterministic and auditable
 Private companies often keep AI models and data confidential
 Trade secret laws and nondisclosure agreements protect AI details
 Explanation in Legal Decisions
 Judges must justify decisions with socially acceptable reasons
 AI recommendations may lack meaningful human explanations
 Potential Benefits of AI in Transparency if it is explainable
 Transparency as a norm in AI ethics
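To make the interpretability contrast concrete, the sketch below fits a very small decision tree on invented data and prints it as human-readable rules; a deep neural network with millions of weights offers no comparably direct reading. The feature names and the toy decision rule are assumptions for illustration only.

```python
# Sketch of an interpretable model: a shallow decision tree whose entire logic
# can be printed and read. Data and features are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
n = 500
prior_arrests = rng.poisson(1.0, n)
age = rng.integers(18, 60, n)
label = (prior_arrests >= 2) & (age < 30)      # invented toy rule to recover

X = np.column_stack([prior_arrests, age])
tree = DecisionTreeClassifier(max_depth=2).fit(X, label)

# The whole model can be inspected line by line and explained to a court:
print(export_text(tree, feature_names=["prior_arrests", "age"]))
```

A neural-network model trained on the same data might be equally accurate, but its reasoning cannot be printed out in this way, which is the interpretability problem noted above.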
Accountability in Decision-Making
 Current System of Accountability
 Judges or juries make crucial decisions
 Identifiable points of accountability
 AI-Aided Decision-Making
 Predictive AI systems provide recommendations
 Judges may defer to automated recommendations
 Potential Ethical Issues
 False precision of numerical scores
 Shift of accountability from judges to AI systems
 Shift of accountability from public to private sector
 Impact on Legal Judgments
 Judges may adopt AI recommendations by default
 Ethical questions are inevitable
Ethical Concerns of AI Use by Lawyers
 Increasing Power of AI Systems in Legal Predictions
 AI systems used for predicting legal outcomes
 Automated document analysis and discovery
 Legal case outcome prediction
 Potential Shift in Ethical and Professional Standards
 AI outperforming traditional legal predictions
 Possible obligation for lawyers to use AI systems
 Impact on Power Dynamics in Law
 Advantage for lawyers with access to AI tools
 Increased divide between resourced and less resourced
lawyers
 Potential undermining of access to justice and
adjudication based on merits
Power Dynamics and AI
 Increasing Power of AI Systems in Legal Practice
 Lawyers use AI for predictions in legal issues
 AI used in document analysis, discovery, and case outcome
prediction
 Future Scenario of AI Outperforming Traditional Methods
 AI predictions may outperform traditional legal judgment
 Potential shift in ethical and professional standards
 Impact on Power Dynamics in Law
 AI tools may give significant advantages to well-resourced lawyers
 Possible exacerbation of resource-based divides in legal outcomes
 Access to Justice and Legal Norms
 AI use may undermine central values of the legal system
 Concerns about justice and adjudication based on merits
Tilting the Balance?
 Central Ethical Challenge
 Identifying how AI may shift core legal values
 Ensuring preservation of crucial values during
technological transition
 Positive View on AI Technology
 AI can preserve central values
 AI can foster and enhance values
 Betterment of the legal system and society
overall
Ethical Questions Raised by GAI
 Competency in GAI Tools
 Lawyers must determine the level of competency required for using GAI
tools.
 Confidentiality Concerns
 Ensuring client information remains confidential when inputting data into
GAI tools.
 Disclosure to Clients
 Lawyers need to know when to disclose the use of GAI tools to clients.
 Review of GAI Tool Output
 Determining the necessary level of review for GAI tool processes and
outputs.
 Reasonable Fees and Expenses
 Assessing what constitutes a reasonable fee or expense when using GAI tools
for legal services.
 Rapidly Evolving Technology
 GAI tools are rapidly changing, making it difficult to anticipate future
features and utility.
Risks of Inaccurate Output
 Potential Benefits of GAI Tools for Lawyers
 Increase efficiency and quality of legal services
 Quickly create new, seemingly human-crafted content
 Inherent Risks of GAI Tools
 Risk of producing inaccurate output
 Quality and sources of underlying data may be limited or
outdated
 Potential for unreliable, incomplete, or discriminatory
results
 Lack of understanding of text meaning and context
 Prone to “hallucinations” with no basis in fact
 Consequences of Uncritical Reliance on GAI Tools
 Inaccurate legal advice to clients
 Misleading representations to courts and third parties
 Can we use AI ethics to help lawyers?
From Human to AI
 Shift from Labour-Intensive to Technology-Enhanced Legal
Methods
 AI's potential to improve access to legal services
 Streamlining legal procedures
 Ethical Challenges of AI Integration
 Issues of bias and transparency
 Importance in sensitive legal areas like child custody,
criminal justice, and divorce settlements
 Need for Ethical Vigilance
 Developing AI systems with ethical integrity
 Ensuring fairness and transparency in judicial proceedings
 Human in the Loop Strategy
 Combining human knowledge and AI techniques
 Preserving the Human Element in Legal Practices
 AI ethics is necessary to address these issues and dilemmas
Evolution of AI in Legal Practice
 Two Major Phases of AI Evolution in Legal Sphere
 First stage: Moderate-innovative stage
 Second stage: Significant advancement in legal
automation
 Technological Tools Enhancing Legal Practices
 Software and technical instruments
 Machine learning and natural language processing
 Automated document evaluation
 AI Integration in Legal Procedures
 Automation in trial courts
 Use of complex algorithms in pre-trial and sentencing
stages
 Ethics will be more relevant as technology raises new
concerns and challenges
Guidelines and Good Practices
 ABA Guidelines July 2024
 https://www.americanbar.org/news/abanews/aba-
news-archives/2024/07/aba-issues-first-ethics-
guidance-ai-tools/
 AI and Legal Ethics
 https://www.lexisnexis.com/pdf/practical-
guidance/ai/ai-and-legal-ethics-what-lawyers-
need-to-know.pdf
 AI Ethics in Law: Emerging Considerations for Pro
Bono Work and Access to Justice
https://www.probonoinst.org/2024/08/29/ai-ethics-
in-law-emerging-considerations-for-pro-bono-
work-and-access-to-justice/
Guidelines and Good Practices
 ABA ethical rules and Generative AI
https://legal.thomsonreuters.com/blog/generative-
ai-and-aba-ethics-rules/
 State Legal Ethics Guidance on Artificial
Intelligence (AI) in USA
https://www.bloomberglaw.com/external/document
/X2JK49QC000000/legal-profession-comparison-
table-state-legal-ethics-guidance-on
 AI and the Legal Profession: Professional
Guidance
https://lawsociety.libguides.com/AI/professional-
guidance
Conclusion
 AI ethics is highly relevant for law and justice,
in theory and in practice
 The challenge lies in translating it into guidelines
and identifying the right principles and context
 AI ethics is no substitute for professional ethics or
ethical practice
 It will enhance them as lawyers use more AI
systems and tools
Next
 Responsible AI will be discussed in the next session
Session 22

Responsible AI
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
 In the previous session we discussed AI ethics
in the context of law.
 We spoke about the relevance of AI ethics in
law and its importance for lawyers and its
application in different branches of law
We also looked at how AI ethics guidelines are
being issued by different institutions and
listed a few of them
Responsible AI
 Why Responsible AI is subject to so much
attention and concern
 Do we talk of responsible software, responsible mobile
apps, or responsible medical technology? If not,
why Responsible AI?
 Responsible AI is spoken of by many, but do they
all mean the same thing, or is it an amoeba term?
 Do we need Responsible AI in law and justice when
we have codes of ethics and guidelines for lawyers
and judges.
 If we need what should focus on and who will
decide
 What is Responsible AI and what is not
Brief History
 Turing Test – to test whether a human or a machine is at the other end
 Asimov’s Laws of Robotics – ethical rules for robots
 Nothing much during the AI winter years, from the 1960s to the end of the 1990s
 2000s–2010s: AI took shape for the better; concerns about societal impacts grew; Google, Microsoft and academics promoted good and responsible practices
 The OECD AI Principles (2019) set an intergovernmental standard
 2020 onwards: industry acceptance of the need, but no consensus on what constitutes Responsible AI
 ISO/IEC 42001 (2023), the international standard for AI management systems and governance
Responsible AI Today
 Wider development of principles and guidelines
 Industry across sectors is keen to use Responsible AI
 EU AI Act and similar regulations are making it necessary to
adhere to some norms
 Development of standards and companies' statements on AI ethics necessitates further steps
 All these have created a demand for Responsible AI from
developers and adopters
Some examples
 EU AI Act
 CoE’s AI Treaty
 UNESCO’s AI Ethics Guidelines
 Niti Aayog’s papers on Responsible AI
 Google's AI Principles
 Microsoft's AI Framework
 IBM's AI Ethics Principles
 PwC's Responsible AI framework
 Salesforce's AI Ethics Maturity Model
 Accenture's Responsible AI
 Cisco's Responsible AI Framework
 Intel's AI for Social Good
 SAP's AI Ethics
Some more
 Global Partnership on AI
 IEEE Global Initiative on Ethics
 AI Now Institute
 Montreal Declaration for Responsible Development of AI
 European Commission Ethics Guidelines: created by the High-Level Expert Group on AI; influenced the development of the EU AI Act
 Asilomar AI Principles: These were developed
during the 2017 Asilomar conference. These
principles provide a broad vision for the ethical
and beneficial development of artificial
intelligence.
Some Questions
 Is Responsible AI a generic concept that can be applied anywhere, or do we need sector-specific Responsible AI policies, etc.?
 Is there anything common among this bewildering variety of guidelines and practices?
 Is it based on some theoretical understanding
without any understanding of real conditions
 Hard law approach, soft law approach and hybrid
approach
 Is there anything in Responsible AI that is sensitive
to cultural milieu, faith and values or is it another
western idea(l) imposed on the rest
Principles in Practice
 According to ISO:
 Building a responsible AI: How to manage the AI ethics debate
 Fairness: Datasets used for training the AI system must be given careful
consideration to avoid discrimination.
 Transparency: AI systems should be designed in a way that allows users to
understand how the algorithms work.
 Non-maleficence: AI systems should avoid harming individuals, society or
the environment.
 Accountability: Developers, organizations and policymakers must ensure AI
is developed and used responsibly.
 Privacy: AI must protect people’s personal data, which involves developing
mechanisms for individuals to control how their data is collected and used.
 Robustness: AI systems should be secure – that is, resilient to errors,
adversarial attacks and unexpected inputs.
 Inclusiveness: Engaging with diverse perspectives helps identify potential
ethical concerns of AI and ensures a collective effort to address them.
Principles in Responsible AI
Good Practices
ISO suggests the following

 Design for humans by using a diverse set of users and use-case scenarios, and
incorporating this feedback before and throughout the project’s
development.
 Use multiple metrics to assess training and monitoring, including user surveys, overall system performance indicators, and false positive and negative rates sliced across different subgroups (a short sketch of subgroup error rates follows this list).
 Probe the raw data for mistakes (e.g. missing values, incorrect labels, sampling), training skews (e.g. data collection methods or inherent social biases) and redundancies – all crucial for ensuring responsible AI principles of fairness, equity and accuracy in AI systems.
 Understand the limitations of your model to mitigate bias, improve generalization and ensure reliable performance in real-world scenarios; and communicate these to users where possible.
 Continually test your model against responsible AI principles to ensure it
takes real-world performance and user feedback into account, and consider
both short- and long-term solutions to the issues.
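A minimal, illustrative sketch of the subgroup-sliced error rates mentioned above, in Python (assuming NumPy is installed). The group labels, outcomes and predictions are synthetic placeholders, not data from any real system.

import numpy as np

rng = np.random.default_rng(1)
n = 1000
group = rng.choice(["A", "B"], size=n)      # e.g. a demographic attribute (illustrative)
y_true = rng.integers(0, 2, size=n)         # synthetic ground-truth outcomes
y_pred = rng.integers(0, 2, size=n)         # synthetic model predictions (placeholder)

def error_rates(y_true, y_pred):
    # False positive rate: wrongly flagged among true negatives;
    # false negative rate: missed among true positives.
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    negatives = np.sum(y_true == 0)
    positives = np.sum(y_true == 1)
    fpr = fp / negatives if negatives else float("nan")
    fnr = fn / positives if positives else float("nan")
    return fpr, fnr

for g in np.unique(group):
    mask = group == g
    fpr, fnr = error_rates(y_true[mask], y_pred[mask])
    print(f"Group {g}: false positive rate = {fpr:.2f}, false negative rate = {fnr:.2f}")

Large gaps between groups on either rate would be a signal to probe the data and model further.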
Good Practices
 Understand the limitations of your model to mitigate bias,
improve generalization and ensure reliable performance in
real-world scenarios; and communicate these to users where
possible.
 Continually test your model against responsible AI principles
to ensure it takes real-world performance and user feedback
into account, and consider both short- and long-term
solutions to the issues.
 These are also contextual and will be evolving depending on
the need
General to Specific
 The challenge lies in translating these principles
 To specific sectors
 Domain specific issues and priorities
 Nature of application and system
 Responsible AI is not AI ethics
 There may be overlaps but they are different
 Implementing both needs capacity
 Sector specific metrics, indicators and assessments
 But Responsible AI is promoted because methodologies have
been developed integrating technical parameters with values
RAI- Some Concerns
 Quantification vs qualitative assessment
 Similar to Ethics washing
 Lack of tools to assess and certify
 Driven by big consultancy and firms in AI sector than by
stakeholders?
 What is role of government
 Is RAI a ploy in sectors like law enforcement for more power
 Is RAI linked with regulation
 Will RAI be more a matter of soft law than regulation by law
Next
 Responsible AI in Law and Justice will be
discussed in the next session
Session 23

Responsible AI in Law and Justice


Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
 We introduced Responsible AI (RAI) and
various Issues in realizing it
 We discussed various components in RAI and
the Challenges in making it applicable in
different sectors
 Further we underscored the importance of RAI
and Highlighted cautions and concerns about
it.
What is RAI in Law and Justice
 General Principles to practice
 Countries have different codes of conduct and ethics guidelines, often decided by the legal profession; judges are similarly governed by codes of conduct and ethical guidelines
 Scale of technology adoption
 Opacity in AI Systems in law and justice
 This is all the more important to understand and
examine through lens of RAI
 Risk categorization
 Identifying the potential risks
Divergence & Differentiation
 UN agencies' role
 UNESCO and the United Nations Interregional Crime and Justice Research Institute (UNICRI) are developing guidebooks
 UNESCO on Law and Justice in context of AI
 UNICRI on RAI in law enforcement
 Inherent Complex Decision Making
 AI models make decisions based on complex models, and understanding them is not simple
 But systems in Judicial Use need to be addressed
differently
 Challenges in Explaining and Understanding Outputs
 Countries may adopt different levels of transparency
Recommended Pathway of UNICRI
 Introduction to Responsible AI Innovation
 Familiarize with key concepts and terms
 Refer to Technical Reference Book for
technical details
 Organizational Readiness Assessment
 Conduct self-assessment using
Organizational Readiness Assessment
Questionnaire
 Supported by the Organizational Roadmap
 Understand the stage of necessary
capabilities
 Risk Assessment and Responsible AI
Innovation
 Complete Risk Assessment Questionnaire
early in AI system life cycle
 Understand potential threats to
individuals and communities
 Align decisions with Principles for
Responsible AI Innovation
 Conduct Risk Assessment periodically
Trustworthy, Reliable
 How to make them trustworthy and reliable
 Importance of Transparency and
Accountability
 Transparency and accountability are essential
legal requirements but translating them into
RAI categories means they should be credible
 Critical decision-making in judicial systems
 Their trustworthiness cannot be questionable
 So they will need Explainable AI in addition to
RAI
Guarantee Against Bias &
Discriminatory Outcomes
 Opacity in AI Systems
 Lawyers and Judges may not be familiar
 But likely to be impacted if they result in
 Bias and Discriminatory Outcomes
 Challenges in Addressing Bias
 Difficulty in understanding reasons behind AI
Decisions
 Hinders ability to identify and correct biases
 Solution: systemic algorithmic audit
 Test system for different factual backgrounds
 Assess in comparison with human judgement
Lack of Transparency
 Discriminatory Outcomes
 Not the only problem with 'black boxes'
 Lack of Transparency
 Hinders understanding of underlying logic
 Potential impact on affected individuals
 Government Use of Automated Systems In
Law and Justice
 Individuals affected by unclear operations
 Capabilities may not be well defined in system
design
RAI= Ethics +?
 Defining RAI as more than 'ethics plus' is necessary
 RAI is a new idea in law and justice while ethics is
an older idea and practice
 RAI’s goal is to bring Responsibility in use and
design of AI systems
 Challenges in incorporating RAI
 RAI principles often tailored to users' preferences
 Responsibility placed on developers
 Ideal Characteristics of RAI in Law and Justice
 It enhances credibility and acceptability
 It makes Explainable AI (XAI) possible
 RAI on its own is a goal worth pursuing
Transparency, Interpretability, &
Explainability
 Transparency in AI
 Enables accountability by allowing stakeholders
to validate and audit decision-making processes
 Helps detect biases or unfairness
 Ensures alignment with ethical standards and
legal requirements
 Refers to the ability to understand a specific
model
 Can be considered at different levels: entire
model, individual components, and training
algorithm
RAI Principles - Translating
 Translation Challenges
 From abstract to concrete and specific
 Different interpretations in putting to Practice.
 Expectations and Realization
 Better to understand that RAI as an ideal cannot be realized in one go
 A system may score high on one parameter but may not score as high on others
RAI & Challenges
 RAI as a journey rather than as an all-in-one-go achievement
 Given the issues in translating RAI principles into system functioning and outcomes, understand what matters most in that context and application; if minimum criteria are fixed, check whether the system meets them, but different systems may have different objectives and priorities
 Focus on specific output decision-making
 LIME creates interpretable surrogate models
 SHAP assigns values to each feature
 RAI, Ethics and Explainable AI
 These three are interlinked but objectives are not same
 Understanding ethics and XAI is necessary for RAI
Conclusion
 Variations in understanding RAI
 Translating RAI into AI systems in law and justice depends on how RAI is understood
 In many jurisdictions there are no specific guidelines or procedures for incorporating RAI; the paradox is that there is much talk, but not much in terms of guidelines that help both developers and users
 Challenges of RAI
 Risk of misinterpretation and misalignment
 Responsibility and oversight through collaboration, and incorporating RAI from the conception of the system, are important
Next
 Responsible AI in Law and Justice in India
Artificial Intelligence, Law and Justice
Session 24
Responsible AI in Law and Justice in Indian Context

Dr. Krishna Ravi Srinivas


Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
⚫We discussed Responsible AI (RAI) in Law
and Justice and the issues and challenges in it.
⚫We pointed out that while there is a
consensus on principles of Responsible AI
translating that into practice in Law and
Justice has seen varied responses.
⚫Further we mentioned that Responsible AI in
Law and Justice might mean different things to
different stakeholders and preferences in
values in RAI make a difference
RAI and Playbooks
⚫RAI in India is discussed by inter alia,
NASSCOM
⚫NASSCOM has come out with a play book
⚫Digital Futures Laboratory has come up
with a play book
⚫On RAI in Law and Justice we have cited
Vidhi’s report on this topic.
RAI and Niti
⚫In 2021 and 2022 Niti Aayog published
documents on Responsible AI for India with
⚫slogan “AI for ALL”. It identified core principles
and followed that with case studies.
⚫NASSCOM has come out with a playbook on RAI.
⚫DFL has published a similar one.
⚫In 2023 NASSCOM published a report State of RAI
in India
NASSCOM Findings 1–3 (figures not reproduced here)
Developers Playbook on RAI
⚫Last year NASSCOM came out with a play
book on RAI for developers
⚫It gives guidance on Risk Mitigation for
⚫Discriminative AI Model
⚫Gen AI Model , and,
⚫AI application
⚫https://nasscom.in/ai/pdf/the-developer's-playbook-for-responsible-ai-in-india.pdf
⚫Although not directly related to applications in Law and Justice, it can be adapted for developing RAI in Law and Justice
RAI principles in India
⚫DFL’s playbook on RAI refers to UNESCO’s AI
ethics principles and is meant for social impact
organizations.
⚫ICMR’s “Ethical Guidelines for Application of
Artificial Intelligence in Biomedical Research and
Healthcare” is another document that is relevant
for RAI in India.
⚫In January 2025 a multistakeholder body under
the Ministry of Electronics and Information
Technology (MeitY) published a document
outlining AI governance principles for mitigating
AI harms and promoting RAI.
RAI Principles
1. Transparency: AI systems should be accompanied by meaningful information on their development, processes, capabilities & limitations, and should be interpretable and explainable, as appropriate. Users should know when they are dealing with AI.
2. Accountability: Developers and deployers should take responsibility for the functioning and outcomes of AI systems and for the respect of user rights, the rule of law, & the above principles. Mechanisms should be in place to clarify accountability.
3. Safety, reliability & robustness: AI systems should be developed, deployed & used in a safe, reliable, and robust way so that they are resilient to risks, errors, or inconsistencies, the scope for misuse and inappropriate use is reduced, and unintended or unexpected adverse outcomes are identified and mitigated. AI systems should be regularly monitored to ensure that they operate in accordance with their specifications and perform their intended functions.
4. Privacy & security: AI systems should be developed, deployed & used in compliance with applicable data protection laws and in ways that respect users' privacy. Mechanisms should be in place to ensure data quality, data integrity, and 'security-by-design'.
RAI Principles
5. Fairness & non-discrimination: AI systems should be developed, deployed, & used in ways that are fair and inclusive to and for all and that do not discriminate or perpetuate biases or prejudices against, or preferences in favour of, individuals, communities, or groups.
6. Human-centred values & 'do no harm': AI systems should be subject to human oversight, judgment, and intervention, as appropriate, to prevent undue reliance on AI systems, and address complex ethical dilemmas that such systems may encounter. Mechanisms should be in place to respect the rule of law and mitigate adverse outcomes on society.
7. Inclusive & sustainable innovation: The development and deployment of AI systems should look to distribute the benefits of innovation equitably. AI systems should be used to pursue beneficial outcomes for all and to deliver on sustainable development goals.
8. Digital by design governance: The governance of AI systems should leverage digital technologies to rethink and re-engineer systems and processes for governance, regulation, and compliance to adopt appropriate technological and techno-legal measures, as may be necessary, to effectively operationalise these principles and to enable compliance with applicable law.
NITI’s RAI principles
NITI’s work on FRT and RAI
⚫By applying the principles of RAI to FRT and to Digi Yatra,
Niti published a series of recommendations for policy
makers and others.
⚫It emphasised, inter alia, the principle of accountability, the principle of transparency, privacy by design, and the principle of protection and reinforcement of positive human values.
⚫It has pointed out the need for a comprehensive Data Protection Act.
⚫Its recommendations are similar to what is discussed in
literature e.g. Jauhar, Ameen (2020) "Facing up to the Risks
of Automated Facial-Recognition Technologies in Indian Law
Enforcement," Indian Journal of Law and Technology: Vol.
16: Iss. 1, Article 1.
⚫Available at: https://repository.nls.ac.in/ijlt/vol16/iss1/1
RAI in Law and Justice
⚫Although RAI in Law and Justice in India is discussed in the literature, there are gaps that may hinder fuller development and realization
1. Absence of a data protection regime: The DPDP
Act is yet to be implemented in full scale.
2. Absence of studies on specific aspects of RAI in Law and Justice: While Niti's report on FRT is important, we need similar studies on other technologies, including algorithms
3. Governance framework on AI including
algorithms and deployment of ADM.
RAI in Law and Justice
⚫Guidelines for advocates, courts and others on using AI
⚫Guidelines on assessment of AI systems in Law and Justice
⚫Absence of specific policy on RAI in Law and Justice
⚫Lack of information on use of predictive policing in India
⚫Delhi Police’s Crime Mapping Analytics and Predictive System (CMAPS) is predictive policing software which determines ‘hotspots’ in the city and is used to decide how much force is required and where it should be deployed. See the Status of Policing in India Report (2019)
(https://www.commoncause.in/uploadimage/page/Status_of_Policing_in_India_Report_2019_by_Common_Cause_and_CSDS.pdf). But there is no comprehensive study of such use elsewhere
RAI in Law and Justice
⚫On the positive side, we have well-aware civil society groups like Vidhi and Daksh and public interest AI organizations like Agami (https://agami.in/)
⚫Responsible AI in Law and Justice in India needs a nuanced balance between innovation and safeguarding citizens’ fundamental rights. For this we need, inter alia, robust regulatory frameworks, accountability mechanisms, guidelines on ADM and a functioning data protection regime.
Next
⚫Explainable AI
Artificial Intelligence, Law and Justice
Session 25

Explainable Artificial Intelligence


Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
⚫We discussed Responsible AI in Law and Justice
in India
⚫We analysed Niti Aayog's perspective on Responsible AI, with Facial Recognition Technology as a case study
⚫We pointed out the issues and constraints in Responsible AI in Law and Justice in India and the way forward
The 'Black Box' Effect
⚫Rapid Adoption of AI in Various Sectors
⚫Healthcare, finance, transportation,
manufacturing, and entertainment
⚫Automation of tasks and pattern identification
⚫Popular AI Models
⚫Large language models (LLMs) like ChatGPT
⚫Text-to-image models like Stable Diffusion
⚫Opacity in AI Systems
⚫Providers, deployers, and users often cannot
explain AI decisions
⚫Complexity of AI systems contributes to this
opacity
⚫The 'Black Box' Effect
⚫Inability to understand or explain AI outcomes
Complexity and Lack of
Understanding
⚫AI Systems and Training
⚫Machine learning (ML) and deep learning (DL) use
self-learned algorithms
⚫Training process allows AI models to discover new
correlations
⚫Complex Decision Making
⚫AI models make decisions based on complex
models
⚫Involves a large number of interacting parameters
⚫Challenges in Understanding Outputs
⚫Difficulty for AI experts to understand how outputs
are produced
⚫Complexity increases with the number of
parameters
Misplaced Trust & Over-Reliance
⚫Unclear Decision-Making in AI Systems
⚫Users and affected individuals may not understand AI
decisions
⚫Leads to misplaced trust or over-reliance on AI
⚫Lack of Understanding of Technology
⚫Users often do not need to understand technology to use it
⚫Example: Few drivers can describe how an automatic
transmission works
⚫Importance of Transparency and Accountability
⚫AI used for automated decision-making by public
authorities
⚫Transparency and accountability are essential legal
requirements
⚫Unacceptability of the Black Box Effect
⚫Hides the underlying logic of AI decisions
Bias & Discriminatory Outcomes
⚫Opacity in AI Systems
⚫AI engineers may not fully understand the internal
workings
⚫Decisions become harder to interpret
⚫Impact of AI Opacity
⚫Can hide deficiencies such as bias and inaccuracies
⚫Potential for discriminatory or harmful results
⚫Examples of AI Bias
⚫Job applicant selection favoring certain demographics
⚫Medical diagnosis misdiagnosing certain conditions
⚫Challenges in Addressing Bias
⚫Difficulty in understanding reasons behind AI decisions
⚫Hinders ability to identify and correct biases
Lack of Transparency
⚫Discriminatory Outcomes
⚫Not the only problem with 'black boxes'
⚫Lack of Transparency
⚫Hinders understanding of underlying logic
⚫Potential impact on affected individuals
⚫AI Models in Credit Approval
⚫Bank customers lack insight into decisions
⚫Financial lives affected by automated decisions
⚫Government Use of Automated Systems
⚫Individuals affected by unclear operations
⚫Capabilities not well defined in legislation
Definition & Goals
⚫Definition of XAI
⚫Ability of AI systems to provide clear and
understandable explanations
⚫Goal is to make AI behavior understandable to
humans
⚫Challenges in XAI
⚫Explanations often tailored to AI researchers
⚫Responsibility placed on AI experts
⚫Ideal Characteristics of XAI
⚫Explain system’s competencies and understandings
⚫Explain past actions, ongoing processes, and
upcoming steps
⚫Disclose relevant information for actions
Transparency, Interpretability, &
Explainability
⚫Transparency in AI
⚫Enables accountability by allowing
stakeholders to validate and audit decision-
making processes
⚫Helps detect biases or unfairness
⚫Ensures alignment with ethical standards and
legal requirements
⚫Refers to the ability to understand a specific
model
⚫Can be considered at different levels: entire
model, individual components, and training
algorithm
Importance of Explainability
⚫Interpretability in AI
⚫Degree of human comprehensibility of a model or decision
⚫Poorly interpretable models are opaque and hard to understand
⚫Importance of Explainability
⚫Helps users, regulators, and stakeholders understand AI outcomes
⚫Crucial in critical applications involving human lives or sensitive
information
⚫Explainability in AI
⚫Provides clear explanations for specific model predictions or
decisions
⚫Requires interpretability as a building block
⚫Looks to fields like human-computer interaction, law, and ethics
⚫Building Trust in AI Systems
⚫Explainability is important for trust but may not be necessary if
systems are interpretable
Self-Interpretable Models
⚫Categories of AI Explainability
⚫Self-interpretable models
⚫Post hoc explanations
⚫Self-Interpretable Models
⚫Interpretability is built into the design
⚫Post Hoc Explanations
⚫Behavior is observed first
⚫Explanations are provided afterward
Post Hoc Explanations
⚫Global Explanations
⚫Provide overall understanding of AI model behavior
⚫Feature importance identifies influential features
⚫Rule extraction generates human-readable rules
⚫Local Explanations
⚫Focus on specific output decision-making
⚫LIME creates interpretable surrogate models
⚫SHAP assigns values to each feature (a code sketch follows this list)
⚫Limitations of Post Hoc Explanations
⚫Perturbations can be distinguishable from normal data
⚫Potential for biased and discriminatory models
⚫Not reliable as sole mechanism for fairness
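A minimal, illustrative sketch of a local, post hoc explanation in Python, assuming the third-party scikit-learn and shap packages are installed. The risk-style feature names and data are synthetic placeholders, not any real scoring system; LIME would be used in a similar per-instance way by fitting a local surrogate model.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
import shap  # SHAP assigns each feature a contribution value for one prediction

rng = np.random.default_rng(0)
feature_names = ["age", "prior_offences", "employment_years", "missed_hearings"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic "risk score" loosely driven by two features, only to give the model structure.
y = X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Local (per-instance) explanation: SHAP values for a single prediction.
explainer = shap.TreeExplainer(model)
instance = X[:1]
contributions = explainer.shap_values(instance)[0]   # one contribution per feature

print("Model output for this instance:", model.predict(instance)[0])
print("Baseline (expected value):", explainer.expected_value)
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")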
Transparency
⚫Transparency in Data Processing
⚫Core principle of data protection
⚫Data should be processed lawfully, fairly, and transparently
⚫Information must be concise, transparent, intelligible, and
accessible
⚫Controller's Responsibilities
⚫Provide information about automated decisions and
profiling
⚫Offer meaningful information about the logic applied
⚫Role of Explainable AI (XAI)
⚫Provides insights into AI decision-making processes
⚫Empowers deployers and individuals to understand and
interact with AI systems
⚫Fostering Trust and Compliance
⚫Transparency fosters trust and confidence in AI systems
Data Controller Accountability
⚫Responsibility of Organisations
⚫Ensure lawful and transparent processing of personal
data
⚫Implement mechanisms for compliance with data
protection principles
⚫Enable effective oversight and auditability of processes
⚫Risk Assessment
⚫Better assessment of risks for data controllers
⚫Perform data protection impact assessments
⚫Role of XAI
⚫Facilitate audits
⚫Hold organisations accountable for AI-driven decisions
⚫Promote responsible AI development
⚫Foster public trust in AI technologies
Data Minimisation
⚫Data Protection by Design and Default
⚫Emphasises technical and organisational measures
⚫Implements data protection principles like data
minimisation
⚫XAI's Role in Data Minimisation
⚫Reveals influential factors in AI decision-making
⚫Reduces data collection, storage, and processing
⚫Compliance with Data Protection Regulations
⚫Identifies critical data points for decision-making (a short code sketch follows this list)
⚫Minimises intrusion into individuals’ privacy
⚫Achieves accurate and effective AI-driven
outcomes
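A minimal, illustrative sketch of how explainability output can support data minimisation: features whose measured importance is negligible become candidates for not being collected at all. Python, assuming scikit-learn is installed; the feature names, data and threshold are synthetic placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
feature_names = ["income", "postcode", "age", "browsing_hours"]
X = rng.normal(size=(500, 4))
# Synthetic outcome driven mainly by 'income', weakly by 'age'.
y = (X[:, 0] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
importance = permutation_importance(model, X, y, n_repeats=10, random_state=0)

threshold = 0.01  # illustrative cut-off for "influential enough to justify collection"
keep = [name for name, score in zip(feature_names, importance.importances_mean)
        if score > threshold]
drop = [name for name in feature_names if name not in keep]
print("Features worth collecting:", keep)
print("Candidates for not collecting:", drop)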
Special Categories of Data
⚫High Risk to Privacy
⚫Special categories of data can pose a high risk if
mishandled
⚫Opacity of AI algorithms raises concerns about data
processing
⚫Proxy Attributes
⚫AI systems identify correlations between attributes and
data subjects
⚫Proxy attributes can infer specific categories of data
⚫Example of Proxy Attributes
⚫Postcode can be a proxy for ethnicity in some cities
⚫AI systems may use proxy attributes for decisions like
credit reliability
⚫Risk of Incorrect Inferences
⚫Inferences about individuals may be wrong
⚫XAI and Data Protection
Misinterpretation
⚫Misinterpretation Risks
⚫Explanations can be too complex or oversimplified
⚫May lead to misunderstanding by individuals
⚫Providing Clear Information
⚫Information should be concise, transparent, and
intelligible
⚫Use clear and plain language
⚫Adjusting Explanations for Different Stakeholders
⚫Identify different stakeholders
⚫Adjust level of detail for each audience
⚫Facilitating Explanation Process
⚫Use user-friendly interfaces with graphical
representations
⚫Ensuring Accurate and Neutral Explanations
Potential Exploitation of the Systems
⚫Importance of Security Measures
⚫Implement technical and organisational measures
⚫Ensure confidentiality, integrity, and availability of data
⚫Risks in XAI Context
⚫Avoid disclosing personal data
⚫Prevent exploitation of AI systems
⚫Types of Attacks
⚫Membership inference attacks
⚫Adversarial attacks
⚫Counterfactual Explanation Methods
⚫Help attackers find adversarial samples
⚫Need for Balance
Disclosure of Trade Secrets
⚫Risk of Business Competitiveness Loss
⚫Disclosure of proprietary information
⚫Exposure of sensitive business strategies
⚫Principles of Accountability and Data
Protection
⚫Relevant to XAI implementation
⚫Ensures informative explanations
⚫Mechanisms for Informative
Explanations
⚫Built into XAI's implementation
⚫Avoids undue disclosure of proprietary
details
Over-reliance On The AI System
By Deployers
⚫Human Intervention and Oversight
⚫Individuals should have access to human intervention from
the controller
⚫Ability to express their point of view and challenge
decisions
⚫Preventing Over-Reliance on AI
⚫Promote human involvement in significant decisions
⚫Address risks to physical or economic harm, and rights and
freedoms
⚫Clear Communication of AI Limitations
⚫Ensure responsible and socially acceptable decisions
⚫Encourage seeking human intervention when necessary
⚫Comprehensive Understanding of XAI
⚫Enhance trustworthiness and acknowledge limitations
⚫Collaboration with Authorities
Humans Prefer Contrastive
Explanations
⚫Human Aspect in AI Explanations
⚫Explanations must be relevant and meaningful to people
⚫Humans perceive and process information differently
⚫Factors Influencing Human Perception
⚫Preferences for contrastive explanations
⚫Selectiveness in information processing
⚫Trust in the explanations
⚫Ability to contextualize explanations
⚫Contrastive Explanations
⚫People ask “Why event P happened instead of Q?”
⚫Simplify complex decision-making processes
⚫Highlight key differences between options
Humans are Selective
⚫Selective Focus on Striking Aspects
⚫Individuals prioritize the most striking or
relevant details
⚫Less important details are often filtered out
⚫Alignment with Existing Knowledge
⚫People gravitate towards explanations that
match their current understanding
Humans Must Trust the Explanations
⚫Factors Influencing Trust in Explanations
⚫Accuracy and reliability of the system
⚫Clarity and completeness of explanations
⚫Consequences of Mistrust
⚫Explanations that are too complicated
⚫Incomplete or inaccurate explanations
Explanations are Contextual
⚫Contextual Explanations in XAI Systems
⚫Explanations depend on the specific task at
hand
⚫Abilities of the AI system influence the
nature of explanations
⚫User expectations shape how explanations
are framed
⚫Importance of Context in XAI
⚫Ensures explanations are relevant and useful
⚫Helps users understand AI capabilities
better
Explanations are Social
⚫Explanations as Knowledge Transfer
⚫Presented as part of a conversation or interaction
⚫Relative to the explainer’s beliefs about the
explainee’s beliefs
⚫Influences on Explanations
⚫Individual vs. group behaviour
⚫Norms and morals
⚫Considering the Intended Audience
⚫Supervisory authorities may need detailed
explanations
⚫Example: AI work management system
compliance
Conclusion
⚫Trustworthy AI Characteristics
⚫Transparency, accountability, and ethical standards
⚫Explainable AI (XAI) helps meet these requirements
⚫Human-Centered AI
⚫XAI explains the rationale behind AI decisions
⚫Empowers individuals to understand and challenge AI-
driven decisions
⚫Strengthening Trust and Fairness
⚫XAI enhances transparency and trust between
organizations and users
⚫Promotes fairness, equal treatment, and nondiscrimination
⚫Challenges of XAI
⚫Risk of misinterpretation and misuse
⚫Responsibility and Oversight
Next
⚫Explainable AI in Law and Justice
Session 26

Explainable AI in Law and Justice


Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
 In the last session we discussed Explainable AI
(ExAI) and the need for it
 We discussed issues including translating
Explainable AI into law and policy
 We highlighted different approaches in ExAI
Introduction to Explainable
AI in Legal Contexts
 Role of Explanations in Law
 Judges write opinions to explain decisions
 Government agencies provide reports for
denied benefits
 Credit lenders inform applicants about denial
reasons
 Functions Supported by Explanations
 Right to appeal adverse decisions
 Transparency in government decisions
 Building public trust in institutions
Risks of Automated Decision-
Making Systems
 Risks of Automated Decision-Making Systems
 Legal systems may become opaque 'black boxes'
 Individuals may not understand the basis of
decisions
 Policymakers' Response
 Creating rights to explanation of automated
decisions
 California Privacy Protection Agency's Draft
Regulations
 Businesses must inform consumers about the 'logic'
of decision-making technologies
 Explanation of 'key parameters' and their application
in decisions
Bridging Computer Science & Law
 Key Parameters in Algorithmic Explanations
 Definition and importance remain unclear
 Bridging Computer Science and Law
 Developing a legal framework for XAI
 Three-Step Approach
 Presenting a taxonomy for legal explanations
 Applicable to various legal areas and AI
systems
Legal-XAI Taxonomy
 Introduction of Legal-XAI Taxonomy
 Delineates key factors with practical implementations in
XAI
 Types of Model Explanations for Legal Audiences
 Different types categorized into the taxonomy
 Key Factors in the Taxonomy
 Scope: Local or global explanations
 Depth: Contrastive or non-contrastive explanations
 Alternatives: More or less selective explanations
 Flow: Explanations as conditional or correlations
 Factors Related to Model Properties
 Scope and Depth
 Factors Related to Information Presentation
Scope: Global vs. Local Explanations
 Local Explanations
 Focus on model behavior for specific instances
 Crucial for understanding individual
predictions
 Examples: healthcare diagnostics, criminal
justice
 Global Explanations
 Provide understanding of model behavior across many instances
 Important for regulatory and compliance purposes (a short code sketch of a global explanation follows this list)
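A minimal, illustrative sketch of a global explanation in Python, assuming scikit-learn is installed: permutation feature importance summarises which inputs influence a model across many instances, the model-wide view that regulators or compliance reviewers may want. The feature names and data are synthetic placeholders.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
feature_names = ["filing_delay_days", "claim_amount", "prior_disputes", "region_code"]
X = rng.normal(size=(600, 4))
# Synthetic outcome driven mainly by two features, to give the model structure.
y = (0.8 * X[:, 0] + 0.4 * X[:, 2] + rng.normal(scale=0.5, size=600) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Drop in accuracy when a feature is shuffled serves as its global importance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: importance {score:.3f}")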
Application of Legal-XAI Taxonomy
 Framework for Understanding Explanations
 Conditions for data subjects to demand
explanations
 Wide Applicability
 Applies to various AI methods
 Relevant to multiple legal areas
 Abstract Design for Stability
 Maintains relevance despite rapid AI
innovation
Legal, Technical, & Behavioral
Guidance
 Legal Prescriptions for Explanation Methods
 Law may dictate types of explanation methods for
algorithmic decisions
 Examples from credit scoring show varying requirements
based on policy goals
 Technical Implementation by Computer Science
 Research identifies suitable algorithms for specific
explanation methods
 Current eXplainable AI methods can be mapped to the
taxonomy
 Behavioral Insights from Empirical Research
 Empirical studies reveal effective and accepted
explanation methods
 Roadmap and software package for comparing methods
in field experiments
Examples in Various Legal Areas
 Application in Various Legal Areas
 Medicaid
 Higher education
 Automated decision-making
 AI Explanations for Policy Goals
 EU AI Act
 California's upcoming regulation
 Policy Recommendations
 Implementing the taxonomy in the real world
Bridging Legal & Computer
Science Discussions
 Importance of Interdisciplinary Work
 Combining law, computer science, and behavioral research
 Ensuring societal benefits from algorithmic decision-making
 Challenges with Current Laws and Models
 Laws not incorporating technical realities
 Decision-making models not focusing on data subjects'
interests
 Need for Empirical Evidence
 Effective algorithmic explanations in practice
 Framework for assessing explanations' effectiveness
 Benefits for Policymakers and Computer Scientists
 Policymakers: Assessing explanations' intended purpose
 Computer scientists: Understanding AI methods' perception
and acceptance
Empirical Evidence & Practical
Implementation
 Benefits of Fair and Trustworthy Algorithmic Systems
 Individuals gain from systems that exhibit fairness
 Trustworthiness in algorithms is crucial for user
confidence
 Challenges Due to Explainability Gaps
 Law and computer science approaches need alignment
 Risk of mistakes similar to the Tammy Dobbs case
 Potential Threats of Automation
 Automation could erode trust in institutions
 Key decisions made by algorithms need transparency
 Goals of Bridging the Explainability Gap
 Solving problems related to automated decision-making
 Enabling benefits without risking legal institutions
Legal Principles for AI Explanations
 Explanation for Decision Subjects
 Helps in making changes after adverse decisions
 Underlies parts of U.S. credit legislation and regulation
 Credit Scoring and Pricing
 One of the most studied areas of algorithmic decision-
making
 Credit scores estimate an individual's creditworthiness
 Major Credit Reporting Agencies
 TransUnion, Equifax, and Experian
 Calculate credit scores based on various factors
 FICO Score
 Standard credit score since the late 1980s
 Importance of Credit Scores
Introduction to Counterfactual
Examples
 Mandated Explanations for Consumer
Empowerment
 Law requires explanations to help consumers make
corrections and change behavior
 Need for Local, Contrastive Explanations
 Counterfactual examples involve altering input
feature values
 Rest of the features remain unchanged
 Insights from Counterfactual Examples
 Observe changes in model’s prediction
 Understand factors influencing decision-making
 Answer questions like “What if a feature had a different value?” (see the code sketch after this list)
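A minimal, illustrative sketch of a counterfactual example in Python, assuming scikit-learn is installed: one input feature is altered while the rest stay fixed, and we observe when the model's decision flips. The credit-style model, feature names and applicant are synthetic placeholders, not any real lender's system.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
feature_names = ["income", "existing_debt", "years_employed"]
X = rng.normal(size=(400, 3))
y = (X[:, 0] - X[:, 1] + 0.3 * X[:, 2] > 0).astype(int)   # synthetic approval rule
model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.5, 0.8, 0.2]])                  # a (synthetic) denied applicant
print("Original decision:", model.predict(applicant)[0])  # almost certainly 0 (denied) here

# Counterfactual question: "What if income had a different value?"
for delta in np.linspace(0, 3, 31):
    candidate = applicant.copy()
    candidate[0, 0] += delta                              # only 'income' is altered
    if model.predict(candidate)[0] == 1:
        print(f"Approved if income were higher by {delta:.1f} (standardised units)")
        break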
Context-Specific Approach
 Determining Appropriate AI Explanation Methods
 Legal-XAI Taxonomy aids policymakers and courts
 Helps decide which AI explanation method to use
 High-Level Navigation of Typology
 Ask simple questions to navigate
 Identify factors under control of decision's subject
 Contrastive Methods for Controlled Factors
 Illustrate how changing factors alters prediction
 Audience Consideration
 Determine if explanation is for a broader audience
Voluntary adoption & guidelines
 Voluntary Adoption by Government Agencies
 Framework can be integrated into existing
systems
 Algorithm-Agnostic XAI Methods
 Adaptable to both newer and described
algorithms
 Guidance for Data Scientists and Engineers
 Ensures models work in legal contexts
 Connects familiar methods to legal goals
Transparency Issues With
Third-party Vendors
 Reliance on Third-Party Vendors
 Creates transparency issues for governments
 Governments may lack visibility into algorithms'
inner workings
 Challenges in Decision-Making
 Difficult to understand why certain predictions or
decisions are made
 Problematic when individuals seek to challenge
algorithmic decisions
 Policy Recommendations
 Encourage government agencies to develop their
own algorithms
Empirical Research & Field
Experiments
 Importance of Empirical Work in XAI
 Bridges the gap between law and computer science
 Encourages adoption of effective XAI in social contexts
 Role of Government Agencies
 Conduct field experiments to test efficacy of
explainability methods
 Provide valuable insights into effective algorithmic
explanations
 Existing Survey Work
 Focuses on the effectiveness of explanations
 Large-Scale Field Experiments
 Conducted by government agencies
 Help refine and improve XAI techniques
 Creating a Feedback Loop
Importance of User-centric Design
 Focus on End-Users in XAI
 Aligns AI development with user-centric
design principles
 Emphasizes the need for AI systems to be
understandable and accessible
 Real-World Impact of AI Decisions
 Addresses the effects of AI decisions on
individuals
 Promotes a more inclusive and democratic
technology development
Plans For Field Experiments
 Field Experiments in Diverse Legal Contexts
 Test robustness and applicability of the framework
 Gain insights into practical challenges and
opportunities
 Empirical Validation and Refinement
 Ensure relevance and effectiveness across legal
domains
 Contributions to Academic Discourse
 Offer tangible benefits to practitioners and
policymakers
 Impact individuals affected by algorithmic decisions
Importance Of Judicial Demand for
xAI
 Concern about Machine Learning Algorithms
 Operate as “black boxes”
 Difficulty in identifying decision-making processes
 Judicial Confrontation with Algorithms
 Increasing frequency in criminal, administrative, and civil
cases
 Judges Should Demand Explanations
 Address the “black box” problem
 Design systems to explain algorithmic conclusions
 Role of Courts in Shaping xAI
 Developing xAI meaning in different legal contexts
 Advantages of Judicial Involvement
 Favoring Public Involvement
Explanation of The Black Box Problem
 Concern about Machine Learning Algorithms
 Operate as 'black boxes'
 Adjust inputs to improve accuracy
 Need for Explanation
 Humans and law demand answers
 Questions of 'Why?' and 'How do you know?'
 Explainable AI (xAI)
 Design systems to explain algorithm conclusions
 Legal and computer science scholarship on xAI
 Beneficiaries of xAI
 Criminal defendants with long sentences
 Military commanders
Importance of xAI In Legal Contexts
 Deployment of Autonomous Weapons
 Concerns about the ethical implications
 Need for accountability in decision-making
 Doctors and Legal Liability
 Use of “black box” algorithms in diagnoses
 Potential legal consequences for medical
professionals
 Theoretical Debate on Algorithmic Decisions
 Which decisions require explanations
 Forms that explanations should take
Role of Judges in Shaping xAI
 Judges' Interaction with Machine Learning Algorithms
 Increasing frequency of interactions
 Importance of demanding explanations for algorithmic
decisions
 Judicial Influence on xAI Development
 Shaping xAI in criminal, administrative, and civil cases
 Using common law tools to define xAI
 Advantages of Judicial Involvement
 Pragmatic rule development through case-by-case
consideration
 Stimulating production of xAI forms for different legal
settings
 Theoretical Perspective
 Favoring public actors' involvement in xAI development
 Moving beyond private hands in shaping xAI
Existing forms of xAI
 Variety of Explainable AI (xAI)
 Multiple forms of xAI currently exist
 Continuous development by computer
scientists
 Intrinsically Explainable Models
 Some machine learning models are built to be
intrinsically explainable
 These models are often less effective
Model-centric & Subject-centric
Approaches
 Model-Centric Approach
 Also known as global interpretability
 Explains creator's intentions behind the modelling
process
 Describes the family of model used
 Details parameters specified before training
 Qualitative descriptions of input data
 Performance on new data
 Testing data for undesirable properties
 Auditing Outcomes
 Scours system's decisions for bias or error
 Attempts to explain the whole model
Role Of Courts In xAI
 Courts as Key Actors in Machine Learning Ecosystem
 Deciding when, how, and in what form to develop xAI
 Questions Courts Need to Consider
 Audience for the explanation
 Complexity and simplicity of the explanation
 Time required to understand the explanation
 Structure or form of xAI: code, visuals, programs
 Factors to Focus on in Explanations
 Model-centric vs. subject-centric explanations
 Handling trade secrets: in camera review or independent
peer review
 Defining a “Meaningful Explanation”
 Judges developing pragmatic approaches to xAI
Challenges & Opportunities for xAI
in Agencies
 Importance of Agency Reason-Giving
 Defends the rules produced by agencies
 Complications with Machine Learning Algorithms
 Reliance on machine learning predictions
 Need to share data types and models used
 Disclosure of algorithm’s error rate
 Explanation of algorithm’s functioning
 Uncertainty in Court Demands
 Unclear what courts will require from agencies
 Uncertainty in agency responses
Case study: State v. Loomis
 Racial Bias in Algorithms
 Algorithms trained by computer scientists may exhibit racial bias
 Accuracy of Algorithms
 Algorithms may not be better at predicting recidivism than humans
without criminal justice expertise
 Opacity of Algorithms
 Structure, contents, and testing of algorithms are often opaque
 State v. Loomis Case
 Defendant challenged the use of COMPAS algorithm in sentencing
 COMPAS categorized the defendant as high risk of recidivism
 Defendant argued that the use of COMPAS violated due process
rights
 Inability to assess COMPAS's accuracy was a key concern
Shift Towards Transparent Algorithms
 Jurisdictions Shifting to Transparent Algorithms
 Moving away from opaque commercial algorithms
 Using public data and publicly available source
codes
 Importance of Explainable AI (xAI)
 Source codes alone are not self-explanatory
 Courts relying on opaque algorithms should demand
xAI
 Reasons for Courts to Demand xAI
 Statutory requirements to justify sentences
 Ensuring accuracy and fairness in sentencing
 Maintaining institutional integrity of the courts
Forms of xAI in Criminal Justice
 Administrative Law Context
 Audiences: executive agencies, judges, corporate or interest-
group plaintiffs
 Likely to be sophisticated actors
 Criminal Justice Setting
 Three main audiences: judges, defendants, and their lawyers
 Judges and defense counsel: sophisticated repeat players
 Defendants: likely to have little experience with algorithms
 Judges: varying levels of experience with tools like regression
analyses
 Model-Centric vs. Subject-Centric xAI
 Judges may prefer model-centric explanations
 Defendants may need subject-centric xAI
 Judges' Role
Challenges From Proprietary
Algorithms
 Pushback from Proprietary Algorithm Producers
 Resistance to revealing algorithm workings
 Claims based on trade secrets
 Protecting Trade Secrets
 Issuing protective orders
 Building Surrogate Models
 Shedding light on algorithm functioning
 Avoiding trade secret disclosure
 Role of Explainable AI (xAI)
 Counterbalancing trade secrets claims
 Relevant in cases like Loomis
Benefits of xAI in Preventing
Automation Bias
 Addressing Automation Bias
 Machine learning algorithms can cause
automation bias
 Automation bias is the undue acceptance of
machine recommendations
 Role of xAI for Judges
 xAI helps judges question algorithm
conclusions
 Promotes critical thinking and reduces
automation bias
Reasons for lack of xAI
Demand in Courts
 Reasons for Lack of xAI Adoption in Courts
 Nascent idea of xAI
 Algorithms in criminal justice under scrutiny
 Trade secrets hurdles
 Lack of confidence in courts to use xAI
 Potential for xAI in Courtrooms
 Connecting xAI to real-world challenges
 Increasing use of machine learning and xAI
Broader Legal Contexts for xAI
 Product Liability Litigation
 Involves self-driving cars
 Concerns the internet of things
 School Districts' Use of Algorithms
 Algorithms for teacher evaluations
 Malpractice Litigation
 Doctors relying on medical algorithms for diagnoses
 Governmental Decisions
 Freezing assets based on algorithmic recommendations
 Defendants' Challenges
 Challenges to police actions based on algorithms
Role of Law Makers in xAI Legislation
 Role in xAI Regulation
 May demand and shape xAI use across industries and
within government
 Could require xAI in briefings by executive agencies,
including intelligence community
 Challenges in Crafting xAI Legislation
 Statutes must be general to capture basic values
 Legislation may struggle to keep up with rapid changes in
xAI
 Promise of xAI
 Courts can address xAI issues at the edges
 Can draw on xAI developments in different legal areas
 Creators and users of algorithms may respond to court
actions
Next
 AI and Labour Law
Session 27

The Impact of AI on Labor Law


Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
 In the previous session we discussed
Explainable AI in Law and Justice
 We highlighted issues and challenges in
incorporating ExAI in law and justice
 We pointed out that ExAI should have an
important role in adopting AI in law and
justice despite challenges
Definition and Scope of Labor Law
 Definition of Labor Law
 Set of rules governing legal relationships between
employers and employees
 Applies to both collective and individual levels of
employment
 Scope of Labor Law
 Covers interactions between employers, employees, and
the state
 Framework for rights and obligations in employment
relationships
 Impact of AI on Labor Law
 Fast-developing field of research
 Poses crucial inquiries and obstacles as technology
progresses
Integration of AI in the Workplace
 AI Technologies in Professional
Environment
 Machines acquiring knowledge
 Engaging in logical thinking
 Handling information like the human brain
 Reassessment of Labor Legal Frameworks
 Stimulated by AI integration
 AI's Influence on Workforce
 Altering job responsibilities
 Generating novel employment opportunities
 Replacing human workforce
 Need for Reassessment
 Labor rules
Challenges Posed by AI
 Worker Categorization and Job Stability
 Impact on job stability and salaries
 Ethical aspects in decision-making
 Concerns on Worker Safety, Privacy, and
Discrimination
 Issues in hiring, monitoring, and evaluation
 Concerns about partiality, equity, and openness
 Need for Updated Labor Legislation
 Preventing violations of workers' rights
 Avoiding worsening workplace disparities
 Global Cooperation and Uniformity
 Challenges to conventional labor rules
 Increased remote and cross-border job prospects
Need for a Balanced Approach
 Interaction between Technology and Labor
Legislation
 Requires a well-rounded strategy
 Promotes innovation and economic expansion
 Safeguarding Workers' Rights
 Ensures equitable treatment of workers
 Dynamic Nature of AI Environment
 Needs continuous research
 Requires policy formulation
 Necessitates legislative adjustments
 Addressing AI Challenges in Labor Industry
 Effectively tackles distinct difficulties
Role of Trade Unions
 Revolution in Industries and Employment
 AI automates intricate activities
 Analyzes extensive volumes of data
 Alteration of Job Positions and Prerequisites
 Automation and AI-driven technologies augment
productivity
 Generate novel employment prospects
 Risk of job displacement for monotonous or repetitive duties
 Reassessment of Workforce Skills and Employment
Structures
 Trade unions must participate in this transformation
 Historical Role of Trade Unions
 Safeguarding workers' rights
 Promoting equitable working practices
Adapting to AI-Driven Changes
 Training and Development
 Lobby for programs to empower workers with AI skills
 AI in the Workplace
 Participate in deliberations to ensure AI coexists with
human labor
 Protecting Workers' Rights
 Ensure equitable remuneration, job security, and suitable
working environments
 Minimizing Job Displacement
 Implement policies like transition programs and social safety
nets
 Ensuring Fair AI Procedures
 Safeguarding Worker Privacy
 Proactive Approach to AI
 Promoting Ethical AI Practices
Technological Unemployment
 Technological Unemployment
 Displacement of jobs due to automation and AI-enabled
technology
 Tasks ranging from simple physical work to intricate mental
processes taken over by machines
 Transformation of Job Market
 Significant changes in job characteristics
 Necessity for acquiring new skills and adaptability
 Advantages of AI
 Enhanced productivity
 Decreased operating expenses
 Emergence of novel job opportunities
 Challenges to Conventional Employment
 Potential increase in unemployment or underemployment
in some sectors
Need for Upskilling and Reskilling
 Impact of Digital Technology and AI on Work
 Rapid developments in AI technology
 Altering the future world of work
 Importance of Human Capacity Revolution
 Investment in upskilling and reskilling
 Ensuring workforce relevance and
competitiveness
 Global Reskilling Needs
 WEF predicts 50% of global labor force may
require reskilling by 2025
Digital Economy & Transformation
 Significance of Technology
 Satya Nadella emphasized technology's role in
empowering humans
 Technology enhances human potential and capital
 AI-Driven Digital Economy
 Fundamental to the modern economy
 Introduces new forms of work
 Digital Transformation
 Adoption of digital technologies to revolutionize
services or businesses
 Significant developments in recent years
Redefining Work Roles
 Unemployment as a Global Concern
 Despite recovery efforts, joblessness persists
 Impact of the Gig Economy
 Multiple professions becoming the norm
 Frequent role redefinition and career transitions
 Key Factors for Economic Success
 Education, learning, and meaningful work
 Individual well-being and community cohesiveness
 Essential Skills for the Digital Economy
Balancing Automation & Employment
 Disparity Between Skill Sets, Automation, and Employment
 Need to address the gap to prevent persistent joblessness
 Equal Distribution of Work Hours
 Humans: managing, advising, decision-making, reasoning,
communicating, interacting
 Robots: analysis and execution of precision tasks
 WEF Future of Jobs 2020 Research
 85 million jobs displaced by 2025
 97 million new jobs emerging across 26 economies
 Increase in Gig, Contract, and Work-on-Demand
Arrangements
 Need for better coordination in policymaking and
development
 Assist workers in finding meaningful employment
 Establish Comprehensive Social Support Systems
Policy and Social Support Systems
 Integration of Transferable Skills
 Incorporating coding, robotics, AI, and IT skills into
public education
 Reevaluating higher, vocational, and technical education
 Transformative Shift in Skills
 Many countries are unprepared for this transition
 Need to equip individuals facing joblessness
 Government's Role
 Redesigning the human capacity development ecosystem
 Private Sector's Role
 Defining necessary skills and implementing large-scale
initiatives
 Labor Unions' Role
 Educational Establishments' Role
Rise of Remote Work
 Transformation Due to COVID-19
 Shift to remote formats in education, business, and
socializing
 Adoption of new rituals during recovery
 Emergence of AI-Driven World
 Consideration of future labor and remote work
 Sudden arrival of the future of work
 Innovation During Uncertainty
 Significant innovation during the pandemic
 Organizations reassessing operational methods
 Post-Pandemic New Standard
 Remote work as a lasting consequence
 88% of job searchers favor remote work options
Digital Infrastructure & Connectivity
 Investment in Digital Infrastructure
 Expand internet accessibility in underprivileged regions
 Enhance speed and dependability of broadband services
 Decrease cost of internet services to improve affordability
 Government and Private Sector Partnership
 Narrow the digital divide
 Enable equitable remote employment opportunities
 Modifications to Labor Legislation
 Define remote working provisions
 Outline rights and obligations of employers and
employees
 Address concerns regarding working hours, leave, and
data security
 International Examples
Legal and Policy Framework
 Establishing Policies for Remote Labor
 Compliance with the law
 Clear terms and conditions
 Provision of equipment
 Data protection procedures
 Performance evaluation
 Monitoring Systems
 Preventing cyberloafing
 Potential power disparities
 Disconnection between employees and
organization
 Decrease in meaningfulness of work
 Reduction in innovative organizations
Right to Disconnect
 Definition and Importance
 Legal entitlement to disengage from work-related
communication outside regular hours
 Consideration of safety and health requirements
 Managing Work-Life Balance
 Implementing ergonomic guidelines for home offices
 Frequent health and safety evaluations
 Protecting remote workers from exploitation
 Examples and Implementations
 Microsoft Outlook's feature promoting communication
during business hours
 South African legislation lacks recognition of the right to
disconnect
 Proposed code of good practice in South Africa
 Precedents set by Ireland and Australia
Enhancing Human Capabilities
 AI's Influence on Employment Landscape
 Integration of technology into workplaces
 Emergence of new occupations and skills
 Concerns about potential widespread unemployment
 Enhancing Human Capabilities with Technology
 Potential for technology to enhance rather than displace
human roles
 Utilization of enabling technologies like wearables
 Wearables in Professional Contexts
 Significant growth in personal use
 Increasing prevalence in professional environments
 Changing perceptions of technology's role in the office
 Health and Safety Benefits of Wearables
Health and Safety Benefits
 Enhancement of Worker Abilities
 Increases strength, alertness, capacity, and endurance
 Improves productivity and safety
 Value Addition through Technology
 Strengthens physical and perceptual capabilities
 Assists in overcoming physical constraints
 Compensates for inadequate abilities
 Human-in-a-Loop Models
 Provides real-time access to data
 Enables speedier decision-making
 Promotes healthy behaviors and prevents exhaustion
 Workplace Safety and Training
Ethical Considerations
 Decontextualized Data
 Primarily descriptive
 Cannot explain causal linkages
 Disregards social and psychological factors
 Real-World Performance
 Need comprehensive understanding
 May undermine organizational credibility
 Focuses solely on tracked indices
 Preferential Treatment
 Based on distorted facts
 Results in unfair treatment
 Work-Life Balance Concerns
Privacy and Data Security
 Privacy Implications
 Concerns about data privacy and its usage
 Invasiveness of devices
 Employee Willingness
 Some employees may resist adopting the technology
 Work-Life Balance
 Concerns when wearables are used in both professional and
personal settings
 Openness in South Africa
 75% of employees willing to share information with
incentives
 Incentives include flexible working hours and reduced
insurance prices
 Utilization in Workplace
 Inquiry on how firms might use wearables
Regulatory Framework
 Transparent Communication
 Inform users about device capabilities and data collection
 Engagement and Reflection
 Provide opportunities for users to engage with and reflect on
data
 Collaborative Performance Criteria
 Work with users to establish performance standards
 Fostering Critical Discussions
 Encourage discussions about the implications of wearables
 Protection of Personal Information Act (POPIA)
 Regulates data gathering, usage, and safeguarding
 Ensures privacy in terms of notification, awareness,
decision-making, agreement, access, and involvement
 Ethical Practices and Employee Rights
Balancing AI & Labor Law
 Challenges in AI and Labor Law
 Effective navigation of AI capabilities for economic
growth
 Protection of worker rights and fair labor practices
 Maintaining a fair and impartial work environment
 Need for Continual Modification of Labor Legislation
 Addressing worker categorization and job displacement
 Tackling privacy concerns
 Regulating AI-facilitated decision-making processes
 Active Participation from All Parties
 Understanding AI's impact on the workplace
 Formulating adaptable regulations for a transforming
workforce
Ongoing Adaptation & Collaboration
 Importance of Updating Labor Laws
 Legal and social perspectives
 Safeguarding workers' rights
 Fostering an Equitable Future of Work
 Inclusive of everyone
 Complexity of the Process
 Requires ongoing discourse
 Necessitates investigation and cooperation
 Technologically Sophisticated Workforce
 Based on human needs
Summary
 Automation and Job Displacement
 Potential for AI to automate tasks
 Impact on job markets and worker displacement
 Reskilling and Upskilling
 Need for workforce adaptation
 Importance of continuous learning
 Algorithmic Management
 Effects on workers' rights and privacy
 Challenges in monitoring and regulation
 Classification of Gig Workers
 Adaptive Labor Laws
 Social Dialogue and Collaboration
Literature (selected)
 Handbook of Artificial Intelligence at Work: Interconnections and Policy Implications – (eds) Martha Garcia-Murillo, Ian MacInnes, Andrea Renda – Edward Elgar, 2024
 Artificial Intelligence and Law – Tshilidzi Marwala, Letlhokwa George Mpedi – Palgrave, 2024
Next
 AI and Health Law
Artificial Intelligence, Law and Justice
Session 28
AI and Health Law
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
⚫In the last session we discussed AI and labour
law taking into account the challenges that
arise when AI is used widely
⚫We discussed issues including privacy,
surveillance, labour rights, role of trade unions
and put this in a broader context
⚫We pointed out that labour law will be impacted by developments arising from the deployment of AI across all sectors, and this has major implications for policy
AI In Health & Life Sciences
⚫Early AI Systems in Health and Life Sciences
⚫DENDRAL: Assisted chemists in identifying structures of organic
molecules
⚫MYCIN: Computer-based consultation system for diagnosing
bacterial infections
⚫MOLGEN: Assisted scientists in developing complex experiment
plans in molecular biology
⚫INTERNIST-I: Capable of making multiple and complex diagnoses
in internal medicine
⚫Modern AI Applications in Healthcare
⚫Precision medicine
⚫Medical visualisation and image recognition
⚫Virtual care and monitoring
⚫Electronic health records (EHRs)
⚫Robotic surgery
⚫Need for Legal and Ethical Analysis
AI for Diagnosis and Prevention
⚫AI in Diagnostics
⚫AI algorithms perform as well as or better than clinicians
⚫Used to read images and support decision-making
⚫Example: AI outperformed radiologists in detecting pneumonia
from chest X-rays
⚫Example: AI achieved 94% accuracy in optical diagnosis of
polyps during colonoscopy
⚫Direct-to-Consumer (DTC) Medical AI/ML Apps
⚫Used for health monitoring and disease prevention
⚫Some apps authorized by the US FDA for independent
screening decisions
⚫Example: ECG app in Apple Watch for atrial fibrillation
screening
⚫AI in Preventive Care
⚫Personalized nutrition management for chronic diseases
⚫Self-management of chronic conditions like diabetes,
hypertension, and mental health issues
AI in Pharmacology
⚫AI in Drug Discovery
⚫Used for predicting chemical and pharmaceutical
properties
⚫Shortens drug synthesis process in R&D cycle
⚫Automates chemical experiments, performing thousands of
reactions simultaneously
⚫Enables cost savings and offloads repeated work
⚫High-Throughput Screening and Experimentation
⚫Paired with AI, ML, and robotics
⚫Develops new drugs for specific patients
⚫Digital Twins in Drug Discovery
⚫Virtual representations of organs or entire person
⚫Exact replicas down to cellular level
⚫Used to study bioactivity, chemical, and pharmacological
properties of new drugs
AI for Treatment
⚫AI in Drug Delivery Systems
⚫Optimizes drug delivery for effectiveness
⚫Uses micro- or nanosensors with AI algorithms
⚫Monitors drug concentrations and generates feedback
⚫Enables self-medication and real-time data transfer to
physicians
⚫AI in Clinical Decision Making
⚫Assists clinicians with high accuracy and speed
⚫Used in diagnosing and treating breast and lung cancer
⚫AI in Treatment Plan Generation
⚫AI in Home-Based Care
⚫AI in Healthcare Companies
⚫Economic Impact of AI in Healthcare
Challenges of AI Medical Devices
⚫Autonomous Decision-Making in Medical AI
⚫AI systems like IDX-DR can detect conditions without
human input
⚫Raises legal questions about accountability in medical
decisions
⚫Locked vs. Adaptive Algorithms
⚫Locked algorithms produce consistent results
⚫Adaptive algorithms learn and adapt, making regulation
difficult
⚫Explainability of AI Decisions
⚫Black box AI decisions may not be understandable
⚫White box models can partially explain black box decisions (see the illustrative sketch below)
⚫Physicians face dilemmas in counseling patients and
trusting AI systems
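A minimal sketch of the "white box explains black box" idea above: an interpretable decision tree (the surrogate) is fitted to the predictions of a more complex classifier so that its rules approximate the black-box behaviour. The data, feature names and the use of scikit-learn are illustrative assumptions, not a description of any actual medical AI system.

import numpy as np
from sklearn.ensemble import RandomForestClassifier      # stands in for the black box
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic "clinical" data: features and the outcome rule are invented
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                             # e.g. age, blood pressure, marker level
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)             # fabricated diagnosis label

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: a shallow tree trained to mimic the black box's own predictions
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Human-readable rules that approximate (but do not reproduce) the black-box logic
print(export_text(surrogate, feature_names=["age", "bp", "marker"]))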
Scalability & Centralization
⚫Distinctions between Easy- and Hard-to-Scale AI Systems
⚫Easy-to-scale AI, like diabetic retinopathy detection, may
be more scalable
⚫Hard-to-scale AI, like ICU settings, present unique
challenges
⚫Risks of Over-Reliance on Easy-to-Scale AI
⚫Potential sidelining of human judgement
⚫Overgeneralization leading to performance degradation
⚫Challenges with Hard-to-Scale AI
⚫Risk of overfitting due to small data sets
⚫Limited wide application
⚫Centralized vs. Decentralized Systems
⚫Centralized data vulnerable to adversarial attacks
⚫Decentralized data poses legal compliance challenges
Legal & Ethical Questions
⚫General Theory of Law and AI
⚫Applies to various AI settings like driverless cars
and criminal sentencing algorithms
⚫Temptation to use a single approach for all AI-
related legal issues
⚫Distinctive Features of Medical AI
⚫Requires specialized regulatory design
⚫More complex ecosystem compared to other AI
applications
⚫Complexity of Medical AI Ecosystem
⚫Involves multiple stakeholders in developing and
decision-making
Patients & Data Rights
⚫Patient Data Usage
⚫Rights to be informed and consent to data
usage
⚫Privacy Rules
⚫Anonymising or pseudonymising patient data
⚫Data Sharing Rules
⚫Sharing data with various entities
⚫Anti-Discrimination Rules
⚫Preventing data misuse against patients
AI Developers & Design Decisions
⚫AI Developer's Role in Medical AI Design
⚫Central and weighty decisions about AI design
⚫Selection of data sets for training
⚫Determining desired outcomes
⚫Choice architecture for physician interaction
⚫Override process for AI decisions
⚫Scrutiny by Tort System
⚫Jurisdiction's treatment of product liability
⚫Impact on consequential decisions
Regulators & Standards
⚫Regulatory Agencies Involved
⚫US Food and Drug Administration (FDA)
⚫European Medicines Agency (EMA)
⚫Review Standards
⚫Safety
⚫Efficacy
⚫Bias
⚫Distinctions in AI Tools
⚫Decision-support tools on hospital computers
⚫Medical AI built into devices
⚫Algorithm Types
⚫Liability Considerations
Hospital Systems & Deployment
⚫Decision-Making in Privatised Healthcare
Systems
⚫Responsibility for choosing medical AI
⚫Scrutinising offered medical AI
⚫Employment and Labour Law Concerns
⚫Imposing medical AI on healthcare workers
⚫Medical Law Rules
⚫Abiding by AI recommendations
⚫Override concerns
⚫Human Subjects Research vs. Quality Assurance
⚫Review from research perspective
⚫Co-Development or In-House Development
⚫Role as healthcare service deliverers vs. developers
Physicians & Liability
⚫Liability for Following or Ignoring AI Recommendations
⚫Adverse outcomes may lead to liability
⚫Ignoring AI recommendations that could prevent adverse
outcomes
⚫AI as Part of Standard of Care
⚫Court decisions on AI acceptance in medical practice
⚫Failure to use accepted AI may breach standard of care
⚫Informed Consent and AI Usage
⚫Deciding when to inform patients about AI usage
⚫Legal and ethical implications of non-interpretable AI
⚫Ethical Comfort with Non-Interpretable AI
⚫Substituting other reliability indicators for AI
understanding
⚫Legal consequences of using non-interpretable AI
Insurers & Reimbursement
⚫Insurers as Payers
⚫Decide on reimbursement for hospitals or
physicians using medical AI
⚫Determine conditions for reimbursement based
on AI recommendations
⚫Reimbursement Conditions
⚫Reimburse expensive treatments only if
recommended by AI
⚫Possibility of human-in-the-loop appeals for
initial decisions
⚫Malpractice Insurers
⚫Adapt coverage to decisions influenced by
medical AI
Algorithmic Discrimination & Equity
⚫Core Legal Issues in AI
⚫Focus on law and ethics and various legal concerns
⚫Data Treatment in Medical AI
⚫Discrimination and bias
⚫Data protection
⚫Medical Liability and Informed Consent
⚫Examining legal responsibilities
⚫Intellectual Property in AI
⚫Algorithmic Discrimination and Equity
⚫Policy Solutions for Algorithmic Discrimination
Data Privacy & Protection
⚫Data Privacy in US Legal Regime
⚫Examines clinical data protection under HIPAA
⚫AI complicates compliance with legal provisions
⚫Interpretation of Legal Provisions
⚫Guidance from US Department of Health and Human
Services (HHS)
⚫De-identification and AI
⚫AI undermines traditional de-identification strategies
⚫Triangulation of data points to reidentify individuals (see the illustrative sketch below)
⚫Data Privacy in Health Research
⚫Protection of human subjects
⚫Consumer and Commercial Protections
⚫Public Health Protections
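A minimal sketch of the triangulation point above: a "de-identified" record can sometimes be re-identified by joining it with an auxiliary dataset on quasi-identifiers such as ZIP code, birth date and sex. All records below are fabricated, and pandas is assumed to be available.

import pandas as pd

# "De-identified" clinical data: names removed, quasi-identifiers retained (fabricated)
clinical = pd.DataFrame({
    "zip": ["500027", "500027", "110001"],
    "birth_date": ["1980-03-12", "1975-07-01", "1980-03-12"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "hypertension", "asthma"],
})

# Public auxiliary data (e.g. a voter roll) that still carries names (fabricated)
public = pd.DataFrame({
    "name": ["A. Rao", "B. Khan"],
    "zip": ["500027", "110001"],
    "birth_date": ["1980-03-12", "1980-03-12"],
    "sex": ["F", "F"],
})

# Joining on the quasi-identifiers re-attaches names to diagnoses
linked = clinical.merge(public, on=["zip", "birth_date", "sex"])
print(linked[["name", "diagnosis"]])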
Medical Liability
⚫Physician Liability
⚫No case law on AI altering standard of care
⚫Several outcomes charted using existing legal doctrine
⚫Potential evolution of standard of care to require AI
recommendations
⚫Institutional Liability
⚫Hospital liable for employee-caused harms
⚫Hospital directly liable for AI-related decisions
⚫Developer Liability
⚫Analysis under tort law
⚫Consideration of FDA rules
⚫Challenges in Applying Tort Law to AI
⚫Doctrine provides limited answers to concerns
Informed Consent
⚫Case Study of Cancer Diagnosis
⚫Explores physician's obligation to inform patients
about AI use
⚫Focuses on necessary explanations and their depth
⚫Challenges in Applying Existing Case Law
⚫Difficulty in analogising medical AI to current laws
⚫Blurring boundaries between different legal
analogies
⚫Deriving Governing Principles
⚫Proposes principles for applying law to AI
⚫Goes beyond current informed consent doctrine
International Organizations
⚫International Organisations and AI Governance
⚫WHO
⚫Cross-Sectoral Efforts
⚫Organisation for Economic Co-operation and
Development (OECD)
⚫Council of Europe
⚫European Union (EU)
⚫United Nations (UN)
⚫United Nations Educational, Scientific and
Cultural Organization (UNESCO)
⚫Broad focus, not specifically targeting health
⚫Sector-Specific Efforts
⚫Effectiveness of Soft Law Mechanisms
US Regulations
⚫FDA Regulation of AI/ML-enabled Medical Devices
⚫Challenges in classifying devices
⚫Criteria for distinguishing between serious and non-serious
conditions
⚫Incorporating clinical judgements into decisions mediated
by SaMD
⚫Role of FTC and HHS
⚫FTC offers protections for patients as consumers
⚫HHS rubrics relevant to research involving human subjects
⚫Limits to institutional review board oversight
⚫Health Data Procurement for Medical AI Systems
⚫De-identified EHRs
⚫Consumer data from wearables and mobile apps
⚫Soft Law Policies and Guidelines
UK Governance
⚫State of AI Governance in the UK
⚫Lacks a settled legal framework
⚫Reflects existing law values
⚫UK's Soft Law Approach
⚫Policy background and strategies
⚫Government and NHS regulation and governance
⚫Fragmented Regulatory Landscape
⚫Gaps in AI regulations for health
⚫Collaborative approach by CQC, ICO, GMC, NHS, and
MHRA
⚫Role of Technical Standards and Guidance
⚫Medical device regime
⚫Potential legislative changes
EU Regulations
⚫EU's Leading Efforts in AI Regulation
⚫Development of both soft and hard laws
⚫Initial guidelines setting principles
⚫Potential Implications of AI Act
⚫Relationship with Medical Device Regulations (MDR)
⚫Impact on proposed Artificial Intelligence Liability
Directive (AILD)
⚫Revised Product Liability Directive (PLD)
⚫Regulating Data Ecosystem
⚫Importance of GDPR
⚫Risk Classification System for AI Systems
⚫Impact on medical devices
⚫Complex Regulatory System
Singapore’s Approach
⚫Governance of AI Development and Deployment
⚫Relevant laws in Singapore health institutions
⚫AI innovation in the healthcare sector
⚫Cybersecurity Concerns
⚫Data leaks and hacking incidents
⚫Data protection and cybersecurity law
⚫AI Governance Framework
⚫Principles-based Model AI Governance Framework by
PDPC
⚫Advisory Council on Ethical Use of AI and Data
⚫Research Programme on Governance of AI and Data Use
⚫Ministry of Health’s AI in Healthcare Guidelines
(AIHGle)
⚫Complement existing medical device regulations
New AI Forms & Issues
⚫New Forms of AI
⚫Likely to raise new issues
⚫Integration into chatbots
⚫ChatGPT and Patient-Facing AI
⚫Possibility of patient-facing AI
⚫Raises questions about regulation
⚫Regulation and Freedom of Expression
⚫Regulation of professional speech
⚫Freedom of expression
⚫Provision of information as practice of medicine
Transition From Soft Law To Hard
Law
⚫Decline of Medical AI Soft Laws
⚫Soft laws will be replaced by codification
⚫Anticipation of a substantial onset of hard law
⚫Influence of EU, China, and GCC Countries
⚫EU's regime enforcement
⚫China and GCC's AI laws implications
⚫Impact of GDPR and AI Act
⚫GDPR's global influence on data protection
⚫Potential similar impact of AI Act
⚫Investment Dynamics in Medical AI
⚫Race-to-the-top or race-to-the-bottom dynamics
⚫Attracting investment in various countries
Special Legal Treatment For Medical
AI
⚫Choice for Regulators
⚫Carve off aspects of medical AI for special legal
treatment
⚫Adopt general AI law and apply it to healthcare
⚫Medical AI's Special Nature
⚫Complex ecosystem of stakeholders
⚫Sensitive and personal nature of data
⚫Particularities in Healthcare
⚫Informed consent
⚫Self-referral risks
⚫Legislative Challenges
⚫Empowering Existing Agencies
Global Development Of Medical AI
Law
⚫Global Development of Medical AI Law
⚫Highlighted jurisdictions leading the space
⚫Many countries still in the soft law phase
⚫Models for Smaller Countries or LMICs
⚫Extent to which regimes are good models
⚫Sensitivity to cultural and religious differences
⚫Convergence to International Standards
⚫Possibility of a single or small number of standards
⚫Rules concerning data processing and transfer
⚫Implications for AI Medical Devices
⚫Impact on international trade and regulation
⚫Encouragement for Further Research
Variations in Regulatory Frameworks
⚫Variations in Regulatory Frameworks
⚫Different jurisdictions have different regulations
for AI in healthcare
⚫No law will capture all nuances and needs given
the fast developments
⚫Pacing problem
⚫Rapidly Evolving Area
⚫AI in healthcare is a new and rapidly changing
field
⚫Few studies empirically evaluate the impact of
specific regulations or the current legal frameworks
in AI and law as applied to health
Literature (Selected)
⚫WHO Guidelines
⚫ICMR Guidelines
⚫Artificial Intelligence in Health Care: The Hope, the Hype,
the Promise, the Peril (2022)- National Academies Press,
Washington DC
⚫AI, Healthcare and Law – (eds) Guilhem Julia, Anne Fauchon, Rushed Kanawati – ISTE and Wiley, 2024
⚫Palaniappan, K.; Lin, E.Y.T.; Vogel, S. Global Regulatory
Frameworks for the Use of Artificial Intelligence (AI) in the
Healthcare Services Sector. Healthcare 2024, 12, 562.
⚫https://doi.org/10.3390/healthcare12050562
⚫Research Handbook on Health, AI and the Law (Ed) Barry
Solaiman, I. Glenn Cohen - Edward Elgar 2024 (open access)
Next
⚫AI and Competition Law
Artificial Intelligence, Law and Justice
Session 29
AI and Competition Law
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
⚫In the previous session we discussed AI and Health Law.
⚫We dealt with various issues including privacy and data governance, and the lack of uniformity in data governance approaches
AI, Law and Regulation
⚫Technology Neutrality in Law-Making
⚫Central principle in regulating emerging technology
⚫Focus on activities rather than specific technologies
⚫Regulation of Behavior
⚫Regulates people's behavior and machine behavior affecting
people
⚫Focus on societal impact
⚫Intellectual Property Rights
⚫Copyright incentivizes creativity by granting monopolies
⚫Patents protect substantial inventions, balancing innovation
and imitation
⚫Trademarks offer social value and legal protection
⚫Antitrust Laws and Competition
⚫Challenges in Regulation
AI and Competition Law
⚫Competition and Democracy
⚫Promotes material progress, quality, and innovation
⚫Supports democracy and preserves choice and opportunity
⚫Enforced to prevent illegitimate monopoly power
⚫Digital Economy's Influence
⚫Revolutionized business patterns, products, and services
⚫Priority for government policy and competitiveness
⚫Alters competitive advantages globally
⚫Governance and Digital Technology
⚫Consumer Protection and Data
⚫Algorithms and Automation
Defining Competition and AI
Developments
⚫AI as an Intangible Capital Asset
⚫Investments in AI can enhance productivity
⚫Applications include autonomous vehicles and voice-
recognition systems
⚫Impact on Legal Technologies
⚫AI affects access to justice and legal services
⚫Raises questions about regulation and equality
⚫Global Digital Transformation
⚫AI developments influence various sectors
⚫Public-private partnerships and private governance are key
⚫AI in Competitive Advantage
⚫AI complements and substitutes human cognitive capabilities
⚫Competition and Market Dynamics
Market and Competitiveness
⚫Market and Technological Innovation
⚫Advances technological knowledge and transforms it into
valuable products
⚫Combines decentralization with strong incentives for
innovation
⚫Profit is the key driver for technological innovation
⚫Competition Law and Economic Coordination
⚫Competition Law Agency/Institution Policies
⚫Overarching legal framework
Competition and IPRs
⚫Role of Intellectual Property in the Knowledge-Based Economy
⚫Central to wealth creation in the new economic environment
⚫Characterized by negligible marginal costs and high innovation
rates
⚫IPRs and Competition Policies
⚫IPRs linked to competition policies and FDI laws
⚫Necessary to protect the market from anticompetitive practices
⚫Impact of Digital Technologies
⚫AI, big data, and cloud computing transforming the global
economy
⚫Digital platforms facilitating new products and services
⚫Antitrust Concerns
⚫Strategies for IPRs Protection
⚫IPRs in Developing Countries
MNCs and Innovation
Performance
⚫Location and Governance Decisions of MNCs
⚫Success grounded on advantageous technological assets
⚫Changes in invention activities along geographic and
organizational boundaries
⚫IPR Protection and Innovation Performance
⚫Boosted performance with strong IPR protection
⚫Higher innovation in high IPR locations
⚫Digitalization and Tax Strategies
⚫Shifting profits to low-tax rate states
⚫Minimum 15% global corporate tax
⚫Scope of IPR Protection
⚫Effects of IPR on Competition
⚫Role of Open Trade
Competition Law, Competition, &
Democracy
⚫Legal Rules of Competition Law
⚫Outlaw certain practices
⚫Goals of Competition Law
⚫Mission to safeguard the competitive process
⚫Balance consumer welfare with market-driven pricing
⚫Ensure that monopoly or oligopoly is restrained
Dynamic Pricing Algorithms
⚫Role of Algorithms in Pricing
⚫Algorithms are widely used for pricing and business functions
⚫Digital companies rely on algorithms for setting prices
⚫Algorithmic Pricing and Competition
⚫Algorithms can facilitate tacit collusion
⚫Technological advancements enable personalized pricing
strategies
⚫Algorithmic pricing affects market competition
⚫Benefits and Risks of Algorithmic Pricing
⚫Increases profits by adjusting prices based on supply and demand (see the illustrative sketch below)
⚫May reduce prices and increase consumer surplus
⚫Can lead to anticompetitive behavior and market distortions
⚫Regulation and Compliance
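A minimal sketch of a first-generation pricing rule of the kind described above: the price is adjusted mechanically from observed demand and remaining stock. The thresholds and adjustment rates are invented for illustration and are not taken from any real system.

def dynamic_price(base_price: float, demand_index: float, stock_left: int) -> float:
    """Raise the price when demand is high or stock is scarce; lower it when demand is weak."""
    price = base_price
    if demand_index > 1.2:        # demand above normal (assumed threshold)
        price *= 1.10
    elif demand_index < 0.8:      # demand below normal
        price *= 0.95
    if stock_left < 10:           # scarcity premium (assumed threshold)
        price *= 1.05
    return round(price, 2)

print(dynamic_price(100.0, demand_index=1.3, stock_left=5))   # -> 115.5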
E-Platforms Power
⚫Extreme Concentration of Wealth and Power
⚫Public recognition of wealth and power in few e-platforms
⚫Unauthorized surveillance and data harvesting
⚫Oligopolistic Platforms
⚫Social networks and search engines use free content
⚫Data harvesting and surveillance to increase user time
⚫Selling advertising and personal data
⚫Control over individuals' opinions, spending, and political
positions
⚫Fair Use Doctrine and Dominant Platforms
⚫AI Technology and Content Dissemination
⚫Commercial Media Organizations
⚫Platform Strategies
Digitalization & Digital
Entrepreneurship
⚫Impact of Digitalization on Productivity
⚫Computerization, robotization, and automation drive
digitalization
⚫Decline in digital technology prices boosts digital capital
investment
⚫Digital investment enhances firm-level TFP growth
⚫Sectoral heterogeneity affects productivity benefits
⚫Only 30% of the most productive laggard firms benefit
⚫Digitalization as a Game-Changer
⚫Alters firms' productivity growth
⚫Productivity gains slow for general-purpose technology
⚫Digital Entrepreneurship
⚫Complexity and Uncertainty in Digital Entrepreneurship
⚫Recent Developments in AI Technology
AI, Competition, & Torts
⚫Forms of AI
⚫Algorithms mimicking human intelligence
⚫Neural networks managing complex problems
⚫Tort Law and AI
⚫Remedy for injuries caused by AI
⚫Negligence claims: duty, breach, causation, damages
⚫Assumption of risk defense
⚫AI Torts
⚫Injuries from autonomous vehicles and robots
⚫AI-driven algorithms hidden in daily tasks
⚫Legal Challenges
⚫Workplace Injuries
Terms of Use and Liability
⚫Responsibility and Liability
⚫EZ-Robot assumes no responsibility for errors or
inaccuracies
⚫Applies to documentation, files, and software provided
Algorithmic Collusion & Personalized
Pricing
⚫Express vs. Tacit Collusion
⚫Express collusion involves explicit agreements among firms
⚫Tacit collusion achieved through intelligent market
adaptation
⚫Legal Perspectives on Algorithmic Collusion
⚫Algorithms as Illegal Agreements
⚫Potential risks from algorithm technology use by companies
⚫Horizontal algorithmic pricing practices
⚫Regulatory challenges in the digital era
Definition Importance of Algorithms
⚫Adoption of Algorithms in Online Retail
⚫One-third of best-selling products on Amazon US in 2015 sold
through algorithms
⚫Higher prices and sales volume observed for algorithm-sold
products
⚫EU Sector Study Findings
⚫Two-thirds of online retailers in the EU use algorithms
⚫Understanding Algorithms
⚫Defined by OECD as a sequence of rules performed in order
⚫Generates output from given input
⚫Examples include solving mathematical problems, food recipes,
music sheets
⚫Computer Science Perspective
⚫Defining Features of Algorithms
⚫Limitations and Strengths
Functional Classification
⚫Monitoring Algorithms
⚫Monitor market, competitors, and customers through scraping
⚫Help competitors monitor each other's prices
⚫Permit instant retaliation against defecting cartel members
⚫Pricing Algorithms
⚫Optimise pricing strategies by reacting faster to changes (see the illustrative sketch below)
⚫First-generation: Follow simple pricing instructions
⚫Second-generation: React to changing market conditions
⚫Speed up price updates from weeks to seconds
⚫Signalling Algorithms
⚫Signal pricing intentions to competitors
⚫Implement instantaneous price changes stealthily
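A minimal sketch of the monitoring-plus-pricing pattern above: rival prices are collected (here passed in directly rather than scraped) and the firm instantly undercuts the cheapest rival, subject to a cost floor. Every number and name here is an illustrative assumption.

def reprice(own_cost: float, competitor_prices: list[float], margin_floor: float = 0.05) -> float:
    """Match-and-undercut rule: price one cent below the cheapest rival, never below cost plus a minimum margin."""
    floor = own_cost * (1 + margin_floor)
    if not competitor_prices:
        return round(floor * 1.5, 2)          # fallback when no rival prices are observed
    target = min(competitor_prices) - 0.01    # undercut the cheapest observed rival
    return round(max(target, floor), 2)

print(reprice(own_cost=40.0, competitor_prices=[59.99, 54.50, 61.00]))   # -> 54.49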
Classification by Interpretability
⚫Types of Algorithms by Interpretability
⚫White box algorithms are transparent and clear
⚫Black box algorithms are impenetrable and complex
⚫White Box Algorithms
⚫Designed as transparent and clear code blocks
⚫Visible and understandable to humans with suitable
knowledge
⚫Steps leading to decisions can be retraced
⚫Black Box Algorithms
⚫Work like human thought processes, not easily inferred
⚫Prevent users from controlling all outcomes
⚫Obstruct courts from determining user intent
⚫Implications for Firms
Classification by Learning Method
⚫Adaptive Algorithms
⚫Fixed capabilities, cannot improve autonomously
⚫Follow specific instructions by the programmer
⚫Present fewer concerns for algorithmic collusion
⚫Mostly deployed in the Messenger scenario
⚫Learning Algorithms
⚫Capable of machine learning
⚫Improve over time through data, experience, and
experimentation
⚫Do not follow static programming instructions
⚫Modify themselves to improve performance
⚫Types of Machine Learning
⚫Supervised Learning: Conducted under human supervision
Definition & Features of Collusion
⚫Definition of Collusion
⚫Competitors coordinate actions to raise profits
⚫OECD: Joint profit maximisation strategy harming
consumers
⚫Features of Collusion
⚫Coordination among competitors
⚫Raises profits to supra-competitive level
⚫Harms consumers
⚫Legal Perspective
⚫US: Requires evidence of conscious agreement
⚫EU: Requires communication between rivals
⚫Types of Collusion
⚫Tacit collusion tolerated by competition law
Economic & Legal Perspectives on
Collusion
⚫Definition of Collusion by Economists
⚫Harrington's definition emphasizes a reward-punishment
scheme
⚫Supracompetitive prices can occur with or without collusion
⚫Importance of Reward-Punishment Scheme
⚫Critical to collusion as it ties current conduct with future
conduct
⚫Defines collusion through causal relationships
⚫Legal Perspective on Collusion
⚫Requires evidence of direct communication
⚫Focuses on agreement or mutual understanding
⚫Economic Perspective on Collusion
⚫Focuses on reward-punishment schemes
⚫Differences Between Law and Economics
Structural Characteristics Conducive
to Collusion
⚫Structural Characteristics Conducive to Collusion
⚫Concentrated market with few competitors
⚫Symmetric competitors with similar cost structures
⚫Homogeneous products
⚫High barriers to entry
⚫Market transparency
⚫Stable demand
⚫Small and frequent purchases by customers
⚫Impact of Concentrated Markets
⚫Role of Cost Symmetry and Product Homogeneity
⚫High Barriers to Entry
⚫Market Transparency and Stable Demand
Facilitation of Collusion by Algorithms
⚫Definition and Premise of Algorithmic Collusion
⚫Algorithms facilitate or consummate collusion autonomously
⚫Autonomous algorithmic collusion does not require human
intervention
⚫Facilitation of Collusion by Algorithms
⚫Human agents agree to collude, algorithms facilitate it
⚫Legality of collusion unaffected by the use of algorithms
⚫Possible to pursue under US antitrust law or EU competition law
⚫Controversy Surrounding Autonomous Algorithmic Collusion
⚫Technical feasibility is debated
⚫Prominent commentators argue algorithms can achieve tacit collusion
⚫Experimental studies show Q-learning algorithms can collude in certain settings (see the illustrative sketch below)
⚫Future Considerations
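A minimal sketch of the experimental setting referred to above: two Q-learning agents repeatedly choose a Low or High price, each observing only last period's prices. The payoffs, learning parameters and number of steps are invented; whether the agents settle on the jointly profitable High price varies with parameters and random seed, which is exactly why the literature treats the question as open.

import random

PRICES = [0, 1]                                  # 0 = Low, 1 = High
PAYOFF = {0: {0: 4, 1: 8}, 1: {0: 2, 1: 6}}      # PAYOFF[my_price][rival_price], invented numbers
ALPHA, GAMMA, EPS, STEPS = 0.1, 0.9, 0.1, 200_000
random.seed(0)

def new_q():
    # State = last period's (own_price, rival_price); two Q-values per state
    return {(a, b): [0.0, 0.0] for a in PRICES for b in PRICES}

def epsilon_greedy(q, state):
    if random.random() < EPS:
        return random.choice(PRICES)
    return max(PRICES, key=lambda action: q[state][action])

q1, q2 = new_q(), new_q()
state1 = state2 = (1, 1)                         # arbitrary initial state

for _ in range(STEPS):
    a1, a2 = epsilon_greedy(q1, state1), epsilon_greedy(q2, state2)
    r1, r2 = PAYOFF[a1][a2], PAYOFF[a2][a1]
    next1, next2 = (a1, a2), (a2, a1)
    q1[state1][a1] += ALPHA * (r1 + GAMMA * max(q1[next1]) - q1[state1][a1])
    q2[state2][a2] += ALPHA * (r2 + GAMMA * max(q2[next2]) - q2[state2][a2])
    state1, state2 = next1, next2

# After learning, inspect the greedy (exploitation-only) choice following a
# period in which both firms priced High
greedy = lambda q, s: max(PRICES, key=lambda action: q[s][action])
print("Agent 1 after (High, High):", "High" if greedy(q1, (1, 1)) else "Low")
print("Agent 2 after (High, High):", "High" if greedy(q2, (1, 1)) else "Low")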
Types of Autonomous Algorithmic
Collusion
⚫Types of Algorithmic Collusion
⚫Express Collusion: Direct communication between algorithms
⚫Tacit Collusion: Independent adaptation without direct
communication
⚫Legal Implications
⚫Express Collusion: Clearly illegal under US and EU laws
⚫Tacit Collusion: Generally legal but debated in algorithmic
context
⚫Dimensions of Analysis
⚫Direct communication among firms or algorithms
⚫Degree of algorithmic autonomy
⚫Extent of collusive human intent
⚫Challenges in Attribution
⚫High algorithmic autonomy complicates firm liability
Ezrachi and Stucke's Classification
⚫Messenger Scenario
⚫Humans use algorithms to execute collusion
⚫Express collusion among human agents
⚫Minimal algorithmic autonomy
⚫Hub and Spoke Scenario
⚫Common algorithm used by competitors
⚫Higher degree of algorithmic autonomy
⚫Indirect communication through the hub
⚫Predictable Agent Scenario
⚫Firms adopt similar algorithms independently
⚫No agreement or intent to collude
⚫Digital Eye Scenario
Debate on Tacit Collusion
⚫Traditional Debate on Tacit Collusion
⚫Donald Turner's view: Tacit collusion is natural in oligopolistic markets
⚫Richard Posner's view: Tacit collusion should be prohibited like express
collusion
⚫Courts generally side with Turner's perspective
⚫Recent Arguments by Louis Kaplow
⚫Current approach to agreement under Sherman Act is misguided
⚫Focus should be on economic consequences of collusion
⚫Kaplow does not advocate outright prohibition of tacit collusion
⚫Algorithmic Collusion: Predictable Agent vs. Digital Eye
⚫Predictable Agent: Firms adopt similar algorithms expecting
competitors to follow
⚫Intent of firms using Predictable Agent is not purely to maximize profit
⚫Difference from traditional tacit collusion: Awareness of facilitating
collusion
Algorithmic Tacit Collusion
⚫Legality of Tacit Collusion
⚫Posner's and Kaplow's arguments support prohibiting tacit
collusion
⚫Express and tacit collusion cause similar consumer harm
⚫Current approach focuses on lack of direct communication
⚫Algorithmic Tacit Collusion
⚫Algorithms enable collusion in various market structures
⚫Increased transparency and high reaction speed facilitate
collusion
⚫Algorithms can signal pricing intentions and enact frequent
price changes
⚫Critical Aspects Facilitated by Algorithms
⚫Reaching terms of coordination among firms
⚫Rapid detection and retaliation
⚫Accountability and Liability
Conclusion
⚫Debate on Algorithmic Collusion
⚫Ongoing controversy in the competition law community: whether to regulate or not
⚫Need for Proactive Competition Law
⚫Cannot ignore algorithmic collusion
⚫Law should take a proactive stance
⚫Programmer Incentives
⚫Clear stance against algorithmic collusion
⚫Indecipherable algorithms not accepted as a defense
⚫Minimizing Algorithmic Collusion
⚫Best tackled at the design stage
⚫May require ex ante regulation
Literature (Selected)
⚫The Cambridge Handbook of Private Law and Artificial Intelligence
(Eds) Ernest Lim, Phillip Morgan – Cambridge University Press—
2024
⚫Artificial Intelligence and Competition Law in the Transatlantic Sphere: Navigating New Frontiers in Regulation and Enforcement – Charles Ho Wang Mak – Stanford-Vienna Transatlantic Technology Law Forum, 2025
⚫AI and Competition – Georgios I. Zekos – Springer, 2023
⚫Regulating Algorithms in India: Key Findings and Recommendations – Archana Sivasubramanian – CPR, New Delhi, 2021
⚫https://www.moneycontrol.com/news/opinion/ai-and-competition-cci-weighs-risks-of-algorithmic-collusion-13008300.html
Next
⚫AI, Law and Justice in Select Jurisdictions (1 of 2)
Artificial Intelligence, Law and Justice
Session 30
AI, Law and Justice in Select Jurisdictions
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap AI and Competition Law
⚫In the last session we discussed AI's impact on Competition Law in the context of the power of platforms and the digital economy. We highlighted some key issues and the challenge of balancing competition with, inter alia, consumers' rights.
International Experience in Using AI
in Justice
⚫China's AI Integration in Judicial System
⚫Three stages of transformation since 1990
⚫1996-2003: Digitization of files and website links
⚫2004-2013: Internet-based court hearings and live
broadcasts
⚫2014 onwards: Introduction of 'smart courts' and online
platforms
⚫Creation of Internet Courts for online dispute resolution
⚫Use of AI for facial recognition, machine learning, and blockchain
⚫United States' AI Initiatives in Justice
⚫AI helps judges make fair and unbiased decisions
⚫PSA system for preventive measures and bail decisions
⚫PSA: Pretrial Public Safety Assessment
⚫COMPAS system for assessing risk of reoffending
China
⚫First Phase (1996-2003)
⚫Started after the 1996 Conference
⚫Completed digitization of court files and website links
⚫Second Phase (2004-2013)
⚫Conducted court hearings using the Internet
⚫First full hearing via videoconferencing in 2007
⚫Live broadcast of court hearings to the public
⚫Third Phase (2014-Present)
⚫Introduction of 'smart courts' initiative
⚫Completion of online platforms for judicial processes
⚫Technological Advancements
⚫AI in Legal Research
United States of America
⚫AI in Judicial Sphere
⚫Popular in civil and criminal proceedings
⚫Several initiatives implemented
⚫AI Systems for Judicial Decisions
⚫PSA system for preventive measures and bail decisions
⚫Pretrial Public Safety Assessment (PSA)
⚫COMPAS system for assessing reoffending risk
⚫Challenges and Bias
⚫AI-powered Chatbots
PSA an Example
⚫Judges are required to consider three risk factors, along with others: whether the arrestee may fail to appear in court (FTA), may engage in new criminal activity (NCA), or may engage in new violent criminal activity (NVCA)
⚫PSA as an algorithmic recommendation to judges
classifying arrestees according to FTA and NCA/NVCA
⚫Risks derived from applying a machine learning algorithm to a training data set based on past observations (see the illustrative sketch below)
https://imai.fas.harvard.edu/talk/files/Taiwan20.pdf
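A minimal sketch, not the actual PSA, of the kind of pipeline described above: a classifier is fitted to historical records of past arrestees and then used to score a new arrestee's risk of failure to appear. The features, records and the choice of logistic regression are illustrative assumptions; real tools use different factors, validation procedures and scoring scales.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Fabricated historical records: [age, prior FTAs, pending charge (0/1)] -> failed to appear?
X_train = np.array([
    [22, 2, 1], [35, 0, 0], [41, 1, 0], [19, 3, 1],
    [30, 0, 1], [55, 0, 0], [27, 2, 0], [24, 1, 1],
])
y_train = np.array([1, 0, 0, 1, 0, 0, 1, 1])    # 1 = failed to appear in the past

model = LogisticRegression().fit(X_train, y_train)

# Score a new arrestee; a real tool would bucket the probability into a scale (e.g. 1-6)
new_arrestee = np.array([[26, 1, 1]])
fta_probability = model.predict_proba(new_arrestee)[0, 1]
print(f"Estimated FTA risk: {fta_probability:.2f}")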
United Kingdom
⚫House of Lords Report on AI in Criminal Justice
⚫Published by Justice and Home Affairs Committee in
November 2022
⚫Highlights potential miscarriages of justice due to
unregulated AI use
⚫Harm Assessment Risk Tool (HART)
⚫Developed by Durham Police and University of Cambridge
⚫Predicts likelihood of repeat offenses using 34 indicators
⚫Excludes race to prevent racial disparities
⚫Used to inform rehabilitation program selection
⚫PredPol System by Kent Police
⚫Predicts future crime locations based on past data
⚫Concerns and Recommendations by Officials
⚫Digital Case System (DCS)
European Union
⚫Ethical Charter on AI in Judicial Systems
⚫Adopted by CEPEJ on December 3, 2018
⚫Five basic principles: respect for fundamental rights, non-
discrimination, quality and safety, transparency, impartiality
and reliability, under user control
⚫Ethics Guidelines for Trustworthy AI
⚫Approved by the European Commission in 2019
⚫Three components: lawful, ethical, robust
⚫Main ethical principles: respect for human autonomy,
prevention of harm, fairness, explicability
⚫Digital Europe Strategy Programme
⚫EU financial consolidation for 2021–2027
⚫Objective: stimulate digital transformation
⚫Artificial Intelligence Act
⚫France's AI in Justice
Russia: Areas of Use of AI in Justice
⚫Automated Court Composition
⚫Formed considering workload and specialization of judges
⚫Excludes influence by interested parties
⚫AI can determine case categories and distribute cases
⚫Digital Writs of Execution
⚫Traditional writs of execution may be revised
⚫AI can determine the need for issuance and process
requests
⚫Automated sending to Federal Bailiff Service or bank
⚫Research and Assessment of Evidence
⚫AI can assist in analytics without direct conclusions
⚫AI can evaluate evidence and form conclusions
⚫Risks of judge's dependency on AI conclusions
Language of Proceedings
⚫Language of Proceedings
⚫Legal proceedings conducted in the state language
⚫Participants can use their native language or a chosen
language with an interpreter
⚫AI Prospects in Legal Proceedings
⚫Multilingual document submission and speech during court
hearings
⚫Speech recognition programs and translation into text
⚫Emotional and psychological speech recognition
⚫Speech polygraphs to assess integrity and detect perjury
⚫Intellectual processing of speech and documents
⚫Reduces translation time and legal costs
⚫Example: Biorg system in Russia for recognizing documents
and objects in different languages
Digital Protocol
⚫Development of Electronic Justice
⚫Paper protocol written by hand or using technical means
⚫Audio protocol kept in digital format
⚫Audio Recording in Russian Arbitration Process
⚫Main means of recording court hearing information
⚫Ensures openness of court proceedings
⚫Material medium attached to protocol
⚫Protocol as an Additional Recording Means
⚫Records completed procedural actions
⚫Advancements in Telecommunication Technologies
⚫Enable exclusive electronic recording of court hearings
⚫Printing Audio Protocol on Paper
Formation of the Court Composition &
Determination of the Category of
Cases
⚫Automated Formation of Court Composition
⚫Ensures impartiality by excluding influence from
interested parties
⚫Utilizes an automated information system
⚫Role of AI in Judicial Processes
⚫Automates court composition formation
⚫Determines case categories and distributes cases among
judicial panels
⚫Considers judges' specialization
⚫Handling Borderline Specialization Disputes
⚫AI can quickly limit claims filed in court
⚫Examples include tax authority decisions in corporate
disputes
Digital Writs of Execution
⚫Revision of Traditional Executive Documents
⚫Traditional writs of execution may be revised soon
⚫Optimization of Russian Arbitration Courts
⚫Issuance of writs of execution at claimant's request
⚫Changes due to unclaimed or returned writs
⚫Irrelevance of writs in cases like debtor bankruptcy
⚫AI Integration in Court Information Portals
⚫Automatic determination of need for writ issuance
⚫Processing requests from claimants
⚫Sending writs for execution to relevant authorities
Research and Assessment of Evidence,
Establishment of Legally Significant
Circumstances
⚫Principle of Immediacy in Legal Proceedings
⚫Examination and assessment of evidence directly by the
court
⚫Question of AI violating this principle
⚫Functionality of AI in Evidence Research
⚫Analytics without direct conclusions
⚫Evaluation of evidence and forming conclusions
⚫Risks of AI in Evidence Evaluation
⚫Judge's dependence on AI conclusions
⚫Need for conditions preventing automatic AI decision
approval
⚫Principles of Equality and Adversarial Law
⚫Ensuring Compliance with Adversarial Principle
⚫Practical Issues and AI Integration
Elsewhere
⚫AI is used in at least 10 African countries, to varying degrees
⚫The same is true in South America
⚫UNESCO is helping countries with its AI and Rule of Law
Program
⚫Colombia is the first country to adopt UNESCO's guidelines on the use of AI in judicial decision-making
https://www.unesco.org/en/articles/justice-meets-
innovation-colombias-groundbreaking-ai-guidelines-
courts
Next
⚫In the next session we will see more details on use of
AI in Law and Justice in other parts of the world
Artificial Intelligence, Law and Justice
Session 31
AI in Law and Justice in USA
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence
and Law
NALSAR University of Law
Recap for Session 30
⚫In the last session we discussed the implementation of AI in law and justice in selected countries and gave some examples. We highlighted that, while the experience is uneven, some of the issues are common. Moreover, in many countries different institutions, including bar associations and departments of justice, are developing or have already issued guidelines and reference materials.
County Level Example 1
⚫Implementation of AI-driven tools
⚫Deployed in courts and clerks’ offices over the past five
years
⚫Reduces inefficiencies and errors in document processing
⚫Palm Beach County's Digital Innovation
⚫Implemented Lights-Out Document Processing program
⚫Analyzes, tags, and indexes document filings
⚫Software Testing and Training
⚫Tested on limited document types as a pilot in 2018
⚫Trained on hundreds of documents
⚫Audited all documents organized by the software
Efficiency and Accuracy
Improvements
⚫High Accuracy Rate
⚫Machines achieved 98% to 99% accuracy
⚫Significantly higher than human counterparts
⚫Workload Capacity
⚫Five robotic systems equaled the workload of
19 human employees
⚫Employee Benefits
⚫Freed up workers for more thoughtful jobs
⚫Enabled skillset growth and increased earning
potential
Public Defender Offices:
Enhancing Legal Advocacy
⚫Miami-Dade County Public Defender's
Office
⚫Advocates for AI tools in legal document
drafting and research
⚫First in the US to use AI for attorneys and
teams
⚫Los Angeles County Public Defender's
Office
⚫Integrated AI solutions into their toolkit
⚫Migrated 24 legacy systems to cloud-based
platforms
Humanizing Legal Processes
⚫Technology's Role in Improving Case
Outcomes
⚫Diverting people from prison
⚫Humanizing the indigent
⚫Data Management and Organization
⚫Managing troves of data
⚫Organizing data by person, not by case
⚫Transforming Data into Human Narratives
⚫Advocating for alternative treatments
⚫Reducing incarceration
AI’s Role in Reducing
Incarceration and Predicting
Recidivism
⚫ AI Tools in Judicial Decision-Making
⚫Tulane University study on AI in over 50,000 convictions in Virginia
⚫AI scores offenders' risk of re-offending
⚫AI advises judges on sentencing options
⚫ Bias Correction and Challenges
⚫AI helps correct gender and racial bias in judges' decisions
⚫Judges often decline alternative punishments for defendants of color
⚫ Utilization of AI Tools
⚫New York State Parole Board uses COMPAS Risk and Needs Assessment
⚫Factors include education level, age, and re-entry plans
⚫ Studies on AI Effectiveness show mixed results
⚫ Some are critical some are positive
⚫ Example of COMPAS
⚫ Ethical Considerations
AI in Sentencing Decisions
⚫Study Overview
⚫Conducted by Tulane University
⚫Assessed AI tools in over 50,000 convictions in Virginia
⚫AI Tool Functionality
⚫Scored offenders' risk of re-offending
⚫Advised judges on sentencing options
⚫Impact on Bias
⚫AI tools aimed to correct gender and racial bias
⚫Final decisions made by human judges
⚫Findings
⚫Judges often ignored AI recommendations for defendants of
color
⚫Disproportionate decline in offering alternative punishments
Parole Board Assessments
⚫AI Tools in Parole Decisions
⚫New York State Parole Board uses COMPAS Risk and Needs
Assessment
⚫Factors include education level, age at conviction, and re-entry
plans
⚫University of California-Davis Study
⚫Analyzed data from over 4,000 individuals released on parole
(2012-2015)
⚫Evaluated outcomes based on COMPAS scores and parole
decisions
⚫Findings: Parole often denied to low-risk individuals due to
severity of initial offenses
⚫Critiques of COMPAS
⚫Dartmouth College study and ProPublica investigation
⚫Algorithm may be no better than human judgment
⚫Potential bias in COMPAS algorithm
Ethical Considerations and Human
Oversight
⚫Ethical Gen AI Principles
⚫Formulation of guidelines for ethical AI use
⚫Addressing issues of bias, fairness, and accuracy
⚫Governance Frameworks
⚫Establishment of structures to oversee AI implementation
⚫Ensuring reliability and data privacy
⚫Interdisciplinary Expert Engagement
⚫Continuous collaboration with experts to tackle ethical
issues
⚫Human Oversight
⚫AI as a tool, not a replacement for human Judgment
⚫Essential for evaluating and identifying unconscious bias
Preliminary Findings
⚫Access to Justice
⚫Improving accessibility through AI
⚫AI & Future Technologies
⚫Exploring future applications of AI in the legal system
⚫Best Practices in Courts & Administration
⚫Implementing AI for efficient court administration
⚫Establishing Gen AI Literacy in Courts
⚫Educating court staff on generative AI
⚫Generative AI
⚫Utilizing AI to generate legal documents
⚫Government
⚫Justice Tech: need for justice tech that can use AI tools effectively for access to justice
Civil Rights Division Guidance &
Other Documents
⚫Guidance for Employers Using Automated Software
⚫Issued on December 1, 2023
⚫Discusses considerations for using software to handle
Form I-9
⚫Guidance on AI and Disability Discrimination in
Employment
⚫Released on May 12, 2022
⚫Describes how AI can lead to disability discrimination
in hiring
⚫Article on Civil Rights in the Digital Age
⚫Published in January 2022
⚫Overview of issues with AI in employment decisions
⚫Discusses work by the Department of Justice and
other federal agencies
AI and Disability Discrimination in
Employment
⚫Department of Justice's Technical
Assistance Document
⚫Released on May 12, 2022
⚫Title: “Algorithms, Artificial
Intelligence, and Disability
Discrimination in Hiring”
⚫Content of the Document
⚫Describes how algorithms and AI can
lead to disability discrimination in
hiring
Civil Rights in the Digital Age
⚫Overview of AI in Employment
Decisions
⚫Examines the impact of AI on
employment practices
⚫Highlights predominant issues arising
from these practices
⚫Department of Justice's Role
⚫Discusses the work being done to
address AI-related issues
⚫Collaboration with other federal agencies
Cases and Matters
⚫Robocalls and Voting Rights Act
⚫Filed statement supporting private plaintiffs
⚫Challenged AI-generated robocalls as coercive
⚫Fair Housing Act and Tenant Screening
⚫Filed SOI in Louis et al. v. SafeRent et al.
⚫Alleged discrimination against Black and Hispanic
applicants
⚫Accessibility at University of California, Berkeley
⚫Filed consent decree for inaccessible online content
⚫Meta Platforms Housing Advertisements
⚫Discrimination in Job Advertisements
⚫Microsoft Citizenship Status Discrimination
⚫Ascension Health Alliance Investigation
Robocalls and Voting Rights Act
⚫Department of Justice's Involvement
⚫Filed a statement of interest
⚫U.S. District Court for the District of New
Hampshire
⚫Nature of the Lawsuit
⚫Challenging AI-generated robocalls
⚫Known as 'deepfake' robocalls
⚫Violation of Voting Rights Act
⚫Section 11(b) violation
⚫Intimidating, threatening, or coercive robocalls
Algorithm-based Tenant Screening
Systems
⚫Statement of Interest Filed
⚫Filed by Department of Justice and Department of
Housing and Urban Development
⚫Filed in January 2023
⚫Case Details
⚫Louis et al. v. SafeRent et al.
⚫Complaint against SafeRent, formerly CoreLogic
Rental Property Solutions, LLC
⚫Allegations
⚫Discrimination against Black and Hispanic rental
applicants
⚫Applicants using federally-funded housing choice
vouchers
⚫Violation of Fair Housing Act and Massachusetts state
laws
University of California, Berkeley
Consent Decree
⚫University of California, Berkeley Settlement
⚫Filed in November 2022 by the Civil Rights Division
⚫Allegations of inaccessible online content for individuals
with disabilities
⚫Inaccurate automated captioning technology for hearing
impairments
⚫Decree approved on December 2, 2022
⚫University to provide accurate captions and not rely
solely on YouTube’s AI-based technology
⚫Meta Platforms, Inc. Settlement
⚫Filed in June 2022 by the Civil Rights Division and U.S.
Attorney’s Office for the Southern District of New York
⚫Complaint and proposed settlement agreement in United
States v. Meta Platforms, Inc.
⚫Settlement agreement signed on June 26, 2022, and final
judgment entered on June 27, 2022
Meta Platforms Settlement
⚫Initial Round of Settlements Announced
⚫Occurred in June 2022
⚫Involved 16 employers
⚫Discriminatory Job Advertisements
⚫Posted on college and university online
recruitment platforms
⚫Included Georgia Tech’s platform
⚫Discriminated against non-U.S. citizens
Settlements with Employers Using
Recruitment Platforms
⚫Settlement with Microsoft Corporation
⚫Resolved claims of discrimination based on citizenship status
⚫Microsoft engaged in unfair documentary practices
⚫Used employment eligibility verification software improperly
⚫Settlement with Ascension Health Alliance
⚫Investigation settled in August 2021
⚫Unfair documentary practices in employment verification
⚫Improper programming of verification software
⚫Sent unnecessary reverification e-mails to non-U.S. citizens
⚫Immigration and Nationality Act's anti-discrimination provision
⚫Prohibits requesting more or different documents than
necessary
⚫Protects against discrimination based on citizenship,
immigration status, or national origin
Purpose of the Guidelines
⚫Collaboration of Experts
⚫Five judges and a lawyer/computer science
professor
⚫Members of the Working Group on AI and the
Courts
⚫Development of Guidelines
⚫Part of the ABA’s Task Force on Law and
Artificial Intelligence
⚫Consensus view of Working Group members
⚫Purpose of Guidelines
⚫Provide a framework for responsible AI use
⚫Targeted at U.S. judicial officers
Judicial Authority and AI
⚫Indispensable Judiciary
⚫Independent, competent, impartial, and ethical
⚫Essential for justice in society
⚫Judicial Authority
⚫Vested solely in judicial officers
⚫Not in AI systems
⚫Technological Advances
⚫Offer new tools to assist the judiciary
⚫Core Obligations of Judicial Officers
⚫Maintain professional competence
⚫Uphold the rule of law
⚫Promote justice
Maintaining Judicial Integrity
⚫Maintaining Judicial Independence and Impartiality
⚫AI must strengthen, not compromise, judicial integrity
⚫Judicial officers must remain impartial to ensure
public confidence
⚫Judges' Responsibility and Proficiency
⚫Judges are solely responsible for their decisions
⚫Must understand and appropriately use AI tools
⚫Risk of relying on extrajudicial information from AI
⚫Balancing AI's Promise with Core Principles
⚫AI can increase productivity and advance justice
⚫Overreliance on AI undermines human judgment
⚫Judicial officers must ensure AI enhances their
responsibilities
Limitations of Gen AI
⚫Understanding Gen AI Tools
⚫Gen AI tools generate content based on prompts and training
data
⚫Responses may not be the most correct or accurate
⚫Gen AI does not engage in traditional reasoning or exercise
judgment
⚫Vigilance Against Bias
⚫Avoid becoming anchored to AI responses (automation bias)
⚫Account for confirmation bias
⚫Disclosure Obligations
⚫May need to disclose AI use under local rules
⚫Obligation to avoid ex parte communication
⚫Verification of Work Product
⚫Judicial officers are responsible for materials produced in their
name
Confidentiality and Privacy
Concerns
⚫Gen AI Tools and Information Usage
⚫Prompts and information may be used to train models
further
⚫Developers may sell or disclose information to third
parties
⚫Handling Confidential Information
⚫Do not use health data, or privileged information in
prompts
⚫Ensure the Gen AI tool treats information in a
privileged manner
⚫Settings and Prompt History
⚫Pay attention to the tools’ settings
⚫Consider retaining, disabling, or deleting prompt
history after sessions
Quality and Reliability of Gen AI
Responses
⚫Importance of Training and Testing AI Tools
⚫Critical for pretrial release decisions and criminal convictions
⚫Ensures validity, reliability, and minimizes bias
⚫Quality of Gen AI Responses
⚫Depends on the quality of the prompt
⚫Responses can vary even with the same prompt
⚫Training Data Sources
⚫May include general Internet information or proprietary
databases
⚫Not always trained on non-copyrighted or authoritative
legal sources
⚫Review Terms of Service
⚫Check for confidentiality, privacy, and security
considerations
Operational Data Analysis
⚫Time and Workload Studies
⚫AI and Gen AI tools assist in analyzing time and workload
⚫Real-Time Transcriptions
⚫Gen AI tools create unofficial/preliminary transcriptions
in real-time
⚫Translation of Documents
⚫Gen AI tools provide unofficial/preliminary translations of
foreign-language documents
⚫Operational Data Analysis
⚫AI tools analyze court operational data and routine
administrative workflows
⚫Identify efficiency improvements
⚫Document Organization and Management
⚫AI tools assist in organizing and managing documents
Editing and Proofreading
⚫Editing and Proofreading
⚫AI and Gen AI tools for checking spelling and grammar in
draft opinions
⚫Legal Filings Review
⚫Gen AI tools to check for misstated law or omitted legal
authority in filings
⚫Generating Court Communications
⚫Gen AI tools for standard court notices and communications
⚫Court Scheduling
⚫AI and Gen AI tools for scheduling and calendar
management
⚫Enhancing Accessibility
⚫AI and Gen AI tools to assist self-represented litigants and
improve accessibility services
Implementation
⚫Regular Review and Updates
⚫Reflect technological advances
⚫Incorporate emerging best practices in AI and
Gen AI usage
⚫Improve AI and Gen AI validity and reliability
⚫As of February 2025
⚫No Gen AI tools have fully resolved the
hallucination problem
⚫Human verification of AI and Gen AI outputs
remains essential
⚫Some tools perform better than others
AI and How to Get Started
⚫Decide Whether to Use Open or Closed AI Models
⚫Evaluate the benefits and limitations of each model type
⚫Ensure Permission and Understand the Terms of Use
⚫Review legal and ethical considerations
⚫Select a Few Simple “Low Risk” Tasks
⚫Start with tasks that have minimal impact if errors occur
⚫Use a “Human-in-the-Loop” Approach
⚫Incorporate human oversight to ensure accuracy
⚫Train Staff and Judges on AI Systems
⚫Provide comprehensive training on AI functionalities
⚫Prepare for Advanced Tasks
⚫Engage in Knowledge Sharing
Accusatory vs. Inquisitorial Models
⚫Accusatory Model (Common Law)
⚫Used in the US
⚫Two equal and autonomous parties: suspect and
prosecuting party
⚫Each party makes its own case before a neutral judge
⚫Inquisitorial Model (Civil Law)
⚫Used in most European continental systems, like France
⚫Official authority collects evidence independently
⚫Evidence is used to uncover the truth without consulting
any party
Relevance and Exclusion of AI
Evidence
⚫Relevance of AI Output in Law Enforcement
⚫AI output must prove something to be deemed
relevant
⚫Federal Rules require relevance for evidence
⚫AI Tools Enhancing Video Footage
⚫Enhancement or augmentation of low-quality footage
⚫Significant modifications to increase quality, details,
or resolution
⚫Potential Issues with AI-Generated Content
⚫Creation of new content may not show what really
happened
⚫Relevance may be questioned if AI fails to show actual
events
Implementation of AI Policies at
State Level
⚫State-Level Regulations
⚫Rules vary depending on the stage of the
criminal procedure
⚫Concrete obligations imposed on law
enforcement and criminal justice authorities
⚫California's 2024 Rules of Court
⚫Set standards for risk assessment technologies
⚫Used specifically for sentencing purposes
⚫Privacy Concerns
⚫Use of risk assessments impacts the right to
privacy
Human Experts to Understand AI
⚫Adversary System for AI Use in Court
⚫Challenges AI evidence
⚫Ensures fair trial
⚫Federal Rules on Evidence (2023 Amendments)
⚫Updated to address AI
⚫Improves judicial gatekeeping
⚫Regulatory Interventions
⚫2023 amendments to Federal Rule of Evidence
702
⚫2024 AI Policy
Legal Framework for AI Use
⚫Authorities' Access to Information
⚫Necessary for accurate investigation and adjudication
⚫Includes personal data
⚫AI in Criminal Procedure
⚫Processes personal data in sophisticated ways
⚫Raises privacy and data protection concerns
⚫Need for Regulations
⚫Data integrity and quality
⚫Reliability and security
⚫Storage, retention, and sharing
⚫General Discussion on Privacy
⚫Reference: Daniel Marshall and Terry Thomas,
PRIVACY AND CRIMINAL JUSTICE
Broadened Ex-ante Regulation
Approach
⚫Avoid AI in Critical Decisions
⚫AI usage should be comprehensible and scrutinizable by
human experts
⚫Required by evidence-related law and due process
⚫Critical decisions need adequate and concrete reasons
⚫Permissive AI Use in Less Critical Contexts
⚫AI can be used where statistical precision is required
⚫No human rights should be at stake
⚫AI should corroborate human-made decisions
⚫AI performance should be subjected to enhanced checks
and balances
Overview of New AI Guidelines
⚫Issuance of New AI Guidelines
⚫Released on April 3, 2025
⚫Includes memoranda M-25-21 and M-25-22
⚫Replacement of Previous Directives
⚫Replaces AI directives from March 28, 2024, and
September 24, 2024
⚫Key Requirements
⚫Develop minimum risk management practices for
high-impact AI
⚫Reduce vendor lock-in
⚫Improve transparency
⚫Protect intellectual property and public data
Literature (Selected)
⚫https://www.thomsonreuters.com/en-us/posts/ai-in-
courts/humanizing-justice/
⚫Artificial Intelligence and Civil Rights
https://www.justice.gov/archives/crt/ai
⚫ Hon. Herbert B. Dixon Jr. et al., Navigating AI in the Judiciary: New
Guidelines for Judges and Their Chambers, 26 SEDONA CONF. J. 1
(forthcoming 2025),
https://thesedonaconference.org/sites/default/files/publications/Navigating%20AI%20in%20the%20Judiciary_PDF_021925.pdf
⚫AI Rapid Response Team at the National Center for State Courts (2024) Use of AI and Generative AI in Courts
⚫Regina Sam Penti and Jianing (Jenny) Zhang, White House Issues Guidance on Use and Procurement of Artificial Intelligence Technology, April 25, Ropes & Gray
Next
⚫AI and Judges
Session 32
AI and Judges
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and
Law
NALSAR University of Law
Recap
 In the last session, an overview of the current scenario and the emerging picture of the use of AI in different countries was given.
 We pointed out that while some applications are futuristic, the hard issues cannot be wished away.
Overview of AI in Judicial Systems
 Advantages of AI in Justice
 Reduces administrative burden and backlog
 Increases court efficiency and reduces costs
 Makes justice more accessible
 Fairness and Accuracy
 Strictly follows precedents
 Prevents personal biases and preferences
 Handles large amounts of information
 AI in Courtroom Functions
 Auxiliary administrative functions
 AI Judicial Tools
 Fully Automated Judicial Decision-Making
 This session addresses selected issues in using AI, particularly generative AI, as a tool for judges
Human Judgement in Judicial
Decision-Making
 Human Judgment in Judicial Decisions
 Example: King Solomon's judgement to reveal the true mother
 Based on emotional intelligence and credibility assessment
 Legal Judgements
 Based on evidence and rules
 Involves practical and reflective judgement
 AI in Judicial Decision-Making
 Discriminative AI for determinative judgement
 Generative AI for practical judgement
 Discriminative AI
 Trained to distinguish categories (e.g., spam detection)
 Generative AI
 Trained to generate new content (e.g., drafting text)
Predictive analytics models
 Predictive Analytics Models
 Utilize statistical algorithms and machine learning on
historical data
 Identify patterns and make projections
 Examples: credit score generation, fraud detection
 Assign scores to predict future events (see the illustrative sketch below)
 Prescriptive AI Models
 Suggest the best course of action for desired outcomes
 Integrate predictive analytics outputs
 Employ AI processes, optimization algorithms, and expert systems
 Examples: personalized healthcare plans, dynamic airline
ticket pricing
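A minimal sketch of a predictive analytics model in the sense described above: a classifier fitted on historical records that assigns a score to a new record. The data and features below are synthetic and invented purely for illustration; this is not a real risk-assessment instrument.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "historical data": each row is [prior_incidents, months_since_last_event];
# y marks whether the outcome of interest later occurred (illustrative only).
rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(200, 2)).astype(float)
y = (X[:, 0] * 0.4 - X[:, 1] * 0.2 + rng.normal(0, 1, 200) > 1).astype(int)

# Fit a simple statistical model on the historical records.
model = LogisticRegression().fit(X, y)

# The "score" is the predicted probability for a new, unseen record.
new_record = np.array([[3.0, 6.0]])
score = model.predict_proba(new_record)[0, 1]
print(f"Predicted risk score: {score:.2f}")
```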
Generative AI models
 Predictive AI in Judicial Decision-Making
 Suggests likely outcomes based on past cases
 Prescriptive AI for Systemic Optimization
 Optimizes resource management for case resolution
 GenAI's Role in Legal Research
 Drafts arguments
 Identifies relevant statutes
 Summarizes complex documents
 Influence on Judges
 Aids in research and understanding of law
 Impacts application of legal principles
 Opportunities and Challenges
Mexico case study
 Hearing on March 29, 2023
 Magistrate Reyes Rodríguez Mondragón presided
 Superior Chamber of the Electoral Tribunal of the Mexican Judiciary
 Consultation with ChatGPT
 Magistrate Reyes used ChatGPT on his phone
 Consulted on matters concerning the hearing discussion
 Illustration of ChatGPT's usefulness
 Read outputs obtained from ChatGPT
 Showed how ChatGPT can help in legal arguments
India case study
 Judge Anoop Chitkara's Bail Decision
 Refused bail to a man accused of serious crimes
 Consulted ChatGPT for guidance
 Purpose of Using ChatGPT
 Ensure impartiality in decision-making
 Balance personal bias with AI input
 Legal Basis and Consistency
 Judge's consistent view from past cases
 High level of cruelty allegations considered
Ex-ante verification process
 Ex-Ante Verification Process
 Guarantees GenAI meets basic standards
 Applies to third-party and in-house developed GenAI
 Relevance to Judicial Use
 Particularly important for GenAI due to extra-legal data
sources
 Licensing and Verification Regime
 Assesses GenAI systems for functionality and legal
adherence
 Minimizes potential risks in judicial processes
 Integration Standards
 GenAI must meet high standards for judicial decision-
making
Algorithmic fairness and bias
mitigation
 Verification Process for Algorithms
 Scrutinize algorithms for potential biases
 Ensure implementation of fairness testing and de-biasing methods (see the sketch below)
 Data Utilization in Risk Assessment
 Data should be representative of diverse populations
 Avoid perpetuating historical biases
 Specific Techniques to Minimize Bias
 Avoid algorithmically integrating biometrics in judicial
decisions
 Enhance unbiased data
 Preserve data privacy
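One simple form of the fairness testing mentioned above is comparing a model's positive-outcome rates across protected groups (often reported as demographic parity or a disparate-impact ratio). The sketch below uses made-up predictions and group labels purely for illustration.

```python
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Positive-prediction rate for each protected group."""
    return {str(g): float(predictions[groups == g].mean()) for g in np.unique(groups)}

def disparate_impact_ratio(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Illustrative predictions (1 = adverse flag) for two groups "A" and "B".
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
grps  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print(selection_rates(preds, grps))         # e.g. {'A': 0.6, 'B': 0.6}
print(disparate_impact_ratio(preds, grps))  # values well below 1.0 warrant scrutiny
```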
Oversight by independent body
 Oversight by Qualified Independent Body
 Expertise in AI, law, and ethics required
 Demonstration of Standards Compliance
 Developers must show Gen AI systems
meet standards
 Pre-Usage Requirement
 Standards must be met before use in
courtrooms
Internal verification for in-house
Gen AI
 Internal Verification for In-House Gen AI Models
 Courts developing their own Gen AI Models
should implement rigorous internal verification
processes
 These processes should mirror the external
licensing regime proposed in the article
 Adherence to High Standards
 Internal verification ensures in-house Gen AI
systems meet high foundational standards
 Standards should be equivalent to those of third-
party developed systems
Importance of high-quality
datasets
 Importance of High-Quality Datasets
 Gen AI system's potential is determined by the
quality of its training data
 Biased or incomplete data can perpetuate
societal biases
 Impact on Fairness and Accuracy
 Inaccurate legal reasoning can undermine justice system cornerstones
 Considerations for Integration
 Factors to be considered when integrating Gen AI into courts
Data quality monitoring and
strategies
 Importance of Data Quality
 Gen AI systems rely on high-quality data
 Data must be accurate, unbiased, representative, and
complete
 Data Quality Monitoring and Mitigation
 Strategies to monitor and mitigate biases
 Historical tracking of court hierarchy and legal
developments
 Weighted analytics on legal interpretation progressions
 Strategies for Ensuring Data Quality
 Employing diverse data sources
 Using procedural and material fairness metrics
 Closed-Network Datasets
 Success of Gen AI in Judicial Decision-Making
Data access, Explainability, and
origin visibility
 Data Access Protocols
 Clear protocols for data access are essential for transparency and
fairness
 Judges and relevant parties should understand the data sources
used by Gen AI systems
 Explainability of Gen AI Outputs
 Explainability helps judges and litigants understand the
reasoning behind Gen AI outputs
 Fosters trust and transparency in decision-making
 Origin Visibility of Data
 Gen AI systems should disclose data sources and characteristics
 Includes precedents, laws, and regulations used in conclusions
 Data Summaries and Anonymized Access
 Provide summaries of data used
 Audits and Verification
Developer accountability
 Developer Accountability
 Developers should be accountable for algorithmic design
and functionality
 Clear disclosures of AI’s modeling and reasoning process
are required
 Liability Allocation
 Avoid overly punitive developer liability frameworks
 Balance responsibility to avoid high barriers to entry
 Certification Process
 Ex-ante certification process to balance incentives and
safety
 Certification or licensing process to assess development
practices
Judicial discretion and training
 Judicial Discretion and Responsibility
 Judges retain ultimate decision-making authority
 Absolute responsibility for Gen AI outcomes could discourage
use
 Training and Resources
 Comprehensive training for judges to evaluate Gen AI outputs
 Understanding limitations of Gen AI
 Liability Distribution
 Tiered approach integrating training and risk-based
responsibility
 High-risk cases require rigorous review
 Low-risk cases require lesser scrutiny
 Review Process and Scrutiny Obligations
 Higher degree of responsibility for judges in high-risk cases
Shared responsibility model
 Proposed Shared Responsibility Models
 Developers and court system share liability
 Depends on case circumstances and algorithmic output
 Factors Influencing Liability
 Judges’ Gen AI training
 Complexity and risk profile of cases
 Transparency of Gen AI system
 Goals of the Framework
 Encourage responsible development and Gen AI integration
 Empower judges and increase efficiency
 Protect integrity of judicial decision-making
Human-designed prompts and
expertise
 Role of Human-Designed Prompts
 Used by all judges in case studies
 Assume human expertise in prompt creation
 Effectiveness of Human Prompts
 Depend on human's expertise
 Effective for tasks with well-defined input and structured output (see the sketch after this list)
 Expertise Required for Designing Prompts
 Understanding of specific field and human-machine
interaction
 Competency and experience vary across jurisdictions
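A minimal sketch of a human-designed prompt of the kind discussed above: a fixed template with a well-defined input slot and an explicitly structured output. The wording and the statutory reference are invented examples, not a recommended judicial prompt.

```python
# Hypothetical human-designed prompt template for a well-defined task
# (summarising a filing) with a fixed, structured output format.
PROMPT_TEMPLATE = """You are assisting with a summary of a court filing.
Summarise the filing below for a judge.

Filing:
{filing_text}

Return your answer in exactly this structure:
1. Parties:
2. Relief sought:
3. Key factual claims:
4. Cited authorities (list verbatim; do not add any):
"""

def build_prompt(filing_text: str) -> str:
    # The human expert controls the instructions; only the input slot varies.
    return PROMPT_TEMPLATE.format(filing_text=filing_text)

print(build_prompt("Petitioner seeks bail; relies on Section 439 CrPC ..."))
```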
Challenges and automated prompt
design
 Quality of Human-Designed Prompts
 Ensuring quality is challenging
 Affected parties cannot assess quality to challenge
decisions
 Limitations of Human-Designed Prompts
 Researchers exploring automated prompt design
 Improves efficiency and adaptability
 Automated Prompt Design
 Generated using various algorithms and techniques
 Already deployed in sectors like the medical field
 Future Exploration
 Choice of prompt structure depends on task and resources
 Needs further exploration in judicial systems and Gen AI
Risk-based deployment and
accuracy thresholds
 High-Risk Use Cases
 Significant potential consequences for defendant’s rights
 Examples: criminal sentencing, loss of liberty
 Stricter Accuracy Thresholds
 Mandated for high-risk cases
 Based on metrics relevant to the legal domain
 Higher Responsibility in Reviewing Process
 Enhanced scrutiny within the judicial system
 Ensures accuracy and fairness
 Scope of GenAI Application
 Exclusion from deployment in high-risk cases
 Limited application: reviewing case law, weighing
evidence
Graduated approach to review and
responsibility
 Review and Scrutiny Levels
 Low-risk cases: Basic review by the judicial system
 High-risk cases: Thorough examination by judges, clerks, or
specific organizations
 Increased Review Requirements
 Legal professionals delve deeper into GenAI’s reasoning process
 Examine data used, such as case precedents, laws, and extra-
legal information
 Identify potential biases
 Heightened Responsibility
 Judges: Evaluate GenAI outputs and ensure alignment with legal
reasoning
 Developers: Accountable for algorithmic design and functionality
Disclosure to parties
 Transparency in Judicial Processes
 Mandatory disclosure of Gen AI use to all
relevant parties
 Ensures fairness and accountability
 Promotes trust in the judicial system
 Key Purposes of Disclosure
 Informs all parties involved about the use of Gen
AI
 Allows for scrutiny and evaluation of Gen AI's role
 Facilitates informed decision-making
Empowering parties and ensuring
fairness
 Transparency in Legal Proceedings
 Parties are informed about Gen AI involvement
 Allows for informed decision-making regarding legal
strategies
 Options for Challenging Gen AI Output
 Parties can choose to challenge the Gen AI output
 Request for traditional review process if needed
 Promoting Fairness
 Ensures all parties understand the tools being used
 Enables parties to shape their participation
accordingly
Promoting trust and procedural
awareness
 Transparency in Judicial Process
 Builds trust among parties involved
 Ensures awareness of procedural
status
 Use of Gen AI
 Informs parties of their rights
 Boosts confidence in due process
Methods and content of disclosure
 Tailored Disclosure Methods
 Adapted to the risk profile of the case
 Based on the type of Gen AI used
 Consideration of court's means and
resources
 Due Inclusions and Potential Approaches
 Specific methods can vary
 Approaches depend on case specifics
Scrutiny tailored to risk
 Scrutiny Tailored to Risk
 High-risk cases require rigorous verification
 Involves criminal sentencing or significant impacts on
rights and freedoms
 Independent verification by court personnel or qualified
third parties
 Cross-referencing data sources and verifying legal citations (see the sketch below)
 Ensuring factual accuracy
 Low-risk cases
 Reduced review process
 Tailored to the needs and rights affected by each specific
process
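A minimal sketch of one verification step noted above: checking the citations appearing in a Gen AI output against a closed, court-controlled list of known authorities. The citation pattern and the reference list are illustrative assumptions, not a production tool.

```python
import re

# A closed-network reference list of authorities known to the court (illustrative).
KNOWN_CITATIONS = {
    "(2017) 10 SCC 1",
    "(2018) 1 SCC 809",
}

# Simplified pattern for citations of the form "(year) volume SCC page".
CITATION_PATTERN = re.compile(r"\(\d{4}\)\s+\d+\s+SCC\s+\d+")

def unverified_citations(ai_output: str) -> list:
    """Return citations that appear in the AI output but not in the known set."""
    found = CITATION_PATTERN.findall(ai_output)
    return [c for c in found if c not in KNOWN_CITATIONS]

draft = "The principle follows (2017) 10 SCC 1 and (2099) 9 SCC 999."
print(unverified_citations(draft))  # ['(2099) 9 SCC 999'] -> flag for human review
```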
Gen AI as an enhancer, not a
bottleneck
 Gen AI as an Efficiency Enhancer
 Verification should not slow down the
judicial system
 Goal is to enhance efficiency while
mitigating risks
 Streamlined Verification Procedures
 Utilize technology-assisted verification
tools
 Achieve balance between efficiency and
risk mitigation
Literature (Selected)
 David Uriel Socol de la Osa and Nydia Remolina (2024)
Artificial intelligence at the bench: Legal and ethical
challenges of informing—or misinforming—judicial decision
making through generative AI Data & Policy (2024), 6: e59
doi:10.1017/dap.2024.53
 Vasiliy A. Laptev · Daria R. Feyzrakhmanova (2024) Application
of Artificial Intelligence in Justice: Current Trends and Future
Prospects Human-Centric Intelligent Systems (2024) 4:394–
405
 https://doi.org/10.1007/s44230-024-00074-2
 Australian Institute of Judicial Administration (2023) AI
Decision-Making and the Courts : A guide for Judges, Tribunal
Members and Court Administrators
 Felicity Bell and Michael Legg (2025) Judicial Impartiality: AI in
Courts Cambridge Handbook of AI in Courts (forthcoming)
Next
 AI and Human Rights (Session 33)
Session 33
AI and Human Rights
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and
Law
NALSAR University of Law
Recap
• In the last session we discussed how AI will impact the roles and functioning of judges and looked at whether current guidelines are adequate.
• We discussed issues in applying the 'human-in-the-loop' concept to the judiciary.
• The need for capacity building and greater use of explainable AI (XAI) in the judiciary was also emphasized.
Key Human Rights and Ethical
Challenges of AI
• Deliberate Use of AI for Suppression
• Mass surveillance of minorities, e.g., Uyghur in China
• Limiting freedom of expression and assembly
• Monitoring public compliance with behavioral rules
• Collateral Consequences of AI Operation
• Embedding and exaggerating bias and discrimination
• Invasion of privacy and reduction of personal autonomy
• Potential harm in healthcare and welfare decisions
• Detrimental Outputs of AI
• Manipulating audience views and violating freedom of thought
• Prioritizing content that incites hatred and violence
• Exacerbation of Social Divides
Governing AI: Why Human Rights?
• Human rights overlooked
• Issues and challenges in AI governance
• Myths about human rights
• Common misconceptions
• What human rights have to offer
• Benefits and protections provided by human rights
Principles of AI Governance: Human
Rights Contribution
• Three Dimensions of AI Governance
• Substantive standards for AI developers and implementers
• Processes to ensure standards are met
• Accountability and remedies for breaches
• Principles: Human Rights Law
• Integration of human rights law into AI governance principles
Ethical values in AI systems
• AI Ethical Risk Assessment Processes
• Involves developers, providers, and users
• Focuses on refining ethical values and assessing AI products
• Identifies and mitigates risks
• Data Governance Tools
• Includes data sheets or 'nutrition labels'
• Summarizes characteristics and intended uses of data sets (a simplified example follows this list)
• Government Impact Assessments
• Canada's Directive on Automated Decision-Making
• US's draft Algorithmic Accountability Act
• Challenges in Ethical Risk Assessment
• Audit Process Challenges
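A minimal sketch of what a data sheet or 'nutrition label' entry for a training data set might record; the field names and values are a simplified illustration, not an official template.

```python
# Hypothetical, simplified data sheet ("nutrition label") for a training data set.
datasheet = {
    "name": "example-judgments-corpus",  # hypothetical data set
    "motivation": "Training a document-summarisation model",
    "composition": "Published appellate judgments, 2000-2020",
    "collection_process": "Downloaded from official court websites",
    "known_gaps": "Under-represents lower-court and regional-language decisions",
    "intended_uses": ["summarisation", "citation extraction"],
    "prohibited_uses": ["predicting outcomes for individual litigants"],
    "last_reviewed": "2025-01",
}

for field, value in datasheet.items():
    print(f"{field}: {value}")
```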
Prohibition of certain AI forms
• Inconsistent Prohibitions
• Governments and companies are prohibiting AI forms with
serious ethical concerns
• Lack of consistency in prohibitions
• Rationale behind prohibitions often not openly acknowledged
• Examples of Prohibitions
• US states banning certain uses of facial recognition technology
• EU’s Artificial Intelligence Act prohibiting manipulative AI
practices and biometric identification systems in public spaces
• Twitter banning political advertising in 2019
Transparency measures in AI
 Public Transparency Measures
 Registries and release of source code or algorithmic logic
 Required in France under the Digital Republic Law
 UK Government's Algorithmic Transparency Standard
 Launched in November 2021
 Public sector organizations provide information on their use of
algorithmic tools
 Information published online in a standardized format
 Several government algorithms made public
Processes: human rights law
 Government Duty to Comply with Human Rights
 Ensure AI usage in public decision-making respects human rights
 Protect Individuals from Human Rights Abuses
 Prevent abuses by companies and non-state actors
 State Obligations
 Take appropriate steps to prevent abuse
 Investigate and punish human rights violations
 Redress abuses through effective policies and regulations
Governmental duty to protect
 Appropriate Mix of Laws, Policies, and Incentives
 National and international measures
 Mandatory and voluntary measures
 Fostering Business Respect for Human Rights
 Requiring suitable corporate structures
 Identifying and addressing human rights risks
 Engaging with external stakeholders
 Additional Steps for State-Owned or Public Sector Businesses
 Management control
 Contractual control
Compliance with human rights
 Governments' Obligations
 Cannot wait to see how AI develops
 Must engage in proactive governance activities
 Actions Required
 Regulation and impact assessments
 Audits to ensure AI does not infringe human rights
 Understanding Human Rights Implications
 Deploy dedicated capacity-building efforts
 Establish technology and human rights office
Smart mix of laws and policies
 Need for Effective Regulation
 Ensure companies do not infringe human rights
 Provide effective remedies for infringements
 Limitations of Voluntary Approaches
 Ambiguity of ethical commitments
 Strong commercial considerations
 Obligation of States
 Enact legally binding norms
 Protect human rights against AI challenges
 Regulation of AI Applications
 Prohibit or constrain risky AI applications
 Focus on biometric technologies
Urgent need for regulation
• Systematic AIA and Audit Processes
• Employ rigorous standards and due process
• Consider potential human rights impacts
• Human Rights Risk Assessment
• Make assessment of human rights risks explicit
• Incentivizing Corporate Good Practice
• Demonstrate respect for human rights
• Facilitate remedy
• Public Reporting Requirements
• Require companies to report on due diligence
• Report on human rights impacts identified and addressed
Systematic AIA and audit processes
 Importance of Supervision
 Conducted by regulatory and administrative authorities
 Ensures accountability for compliance with human rights
responsibilities
 Legal Liability
 Parallel to supervision
 Addresses harms and violations
Supervision and accountability
 Importance of Supervision by Authorities
 Ensures accountability for human rights responsibilities
 Works alongside legal liability for harms
 Implementation in Europe and EU
 Mandatory human rights and environmental due diligence for
larger businesses
 Role of Human Rights Experts
 Exploring administrative supervision of corporate duties
 Complementing liability for harms in courts
Government procurement
obligations
 Legal Obligations
 Governments must not breach human rights with AI systems
 Knowledge and Information
 Understanding AI's capacity and implications is crucial
 Ensuring AI meets standards on equality, privacy, and other
rights
 Public-Private Contracts
 Negotiating terms to ensure AI aligns with human rights
 Deploying procurement conditions for compliance
 Encouraging Improvements
 Public procurement can enhance human rights standards in AI
industry
 Compliance of Adopted Systems
 Ensuring existing AI systems comply with human rights
standards
UN Guiding Principles on Business
 Respect for Human Rights
 Companies should avoid infringing on human rights
 Address any adverse human rights impacts from their activities
 Policy Commitment
 Approved at senior level
 Publicly available
 Embedded in the business culture
 Due Diligence Process
 Ongoing human rights impact assessment
 Tracked for responsiveness
 Reported externally
 Benefits of Responsible Business Agenda
Challenges in AI due diligence
 Distinguishing Features of AI
 AI's capacity for self-improvement makes predicting consequences
difficult
 Human rights impact depends on technology and deployment
context
 Need for Extensive Due Diligence
 Involves a wide set of stakeholders
 Must be extensive to address human rights impacts
 Regular Review of AI Systems
 AI must be reviewed regularly once in operation
 Comprehensive due diligence throughout AI system's life cycle
 Current Gaps in Company Structures
 Many companies lack processes to detect and act on human rights
issues
 Results of due diligence should be made public
Examples of corporate AIAs
 AIAs Labelled as Human Rights Assessment
 Verizon's ongoing human rights due diligence
 AI Ethics Assessments Similar to Human Rights Due Diligence
 IEEE's adopted AI ethics assessment
 Google's AI Deployment Review Process
 References AI Principles
 Includes consultation with human rights experts
Fostering a pro-human rights
culture
 General Corporate Statements
 Intentions and activities are publicly available
 Human Rights Risks
 Identification and mitigation through due diligence
 Less public information available
Remedies in AI governance
 Scope of Corporate Processes
 Focus on specific issues like bias and privacy
 Brief mention of other human rights
 Effect of Impact Assessments
 Unclear effects on company activities
 Human rights risks need mitigation
 Business processes balance risks and benefits
 Access to Remedy for AI Failures
 Effective reparation and accountability
 Measures to prevent recurrences
Finland Decision
 Tribunal's Decision in March 2018
 Finland’s National Non-Discrimination and Equality Tribunal
ruled against a credit institution
 Reason for the Decision
 Credit decision was based on assumptions from statistical data
 Criteria included gender, first language, age, and residential
area
 Outcome
 Tribunal found the decision discriminatory
 Prohibited the use of such decision-making methods
Practical actions for companies
 Hague District Court's Decision
 Ordered Dutch government to stop using SyRI in February 2020
 SyRI reviewed personal data to predict benefit or tax fraud
 Lack of Transparency
 Government refused to reveal how SyRI used personal data
 Difficult for individuals to challenge investigations or risk scores
 Violation of Privacy Rights
 Legislation regulating SyRI did not comply with Article 8 ECHR
 Failed to balance societal benefits with privacy violations
 Discriminatory Practices
 SyRI used only in 'problem neighbourhoods'
 Proxy for discrimination based on socio-economic background and
immigration status
South Wales Case
 Case Overview
 First challenge to AI invoking UK human rights law
 South Wales Police trialled live automated facial recognition
technology (AFR)
 AFR Technology
 Compared CCTV images of public event attendees with a
database
 Non-matching images were immediately deleted
 Legal Challenge
 Complainant challenged AFR’s capture and comparison of his
image
 Referenced Article 8 ECHR and UK Data Protection Act
South Wales Case
 Improper Legal Basis for AFR Use
 AFR use breached the Data Protection Act
 Balance Between Individual Rights and Community Interests
 Court did not find police use of AFR to strike the wrong balance
 Failure to Discharge Public Sector Equality Duty
 South Wales Police did not ensure AFR software was free from
racial or gender bias
 No evidence of bias in the software
 Temporary Halt of AFR Use
 South Wales Police's use of AFR was temporarily halted
 Possibility of reintroduction with proper legal footing
 Reintroduction of AFR
 South Wales Police has reintroduced facial recognition technology
in certain circumstances
Recruit human rights expertise
 Initial Ruling in 2019
 Administrative decisions based on algorithms deemed
illegitimate
 Reversal in 2021
 Courts recognized the benefits of speed and efficiency
 Algorithmic decisions must adhere to administrative review
principles
 Principles for Algorithmic Decision-Making
 Transparency
 Effectiveness
 Proportionality
 Rationality
 Non-discrimination
 Rights of Complainants
Conduct human rights due
diligence
 Complaint Filed by Big Brother Watch
 Issued in July 2022
 Filed with the British Information Commissioner
 Alleged Use of Facial Recognition Technology
 Involves Facewatch and Southern Co-op
 Used to scan, maintain, and assess profiles of supermarket
visitors
 Breach of Data Protection and Privacy Rights
Establish remedy mechanisms
 Requirement of Remedy in Human Rights Law
 Governments and companies must provide suitable remedies
 Remedies are necessary for breaches of obligations and
responsibilities
 Components of Effective Remedy
 Effective reparation for victims
 Accountability for those responsible
 Measures to prevent future breaches
 Significance of Remedy Availability
 Ensures human rights and ethical principles have real impact
 Balances against commercial considerations
Standards bodies and human rights
 Accountability for AI-related Harm
 Businesses should pursue accountability against companies
causing harm
 Harm may result from malfunctioning AI systems
 Interference from another company's AI can also cause harm
Understanding of human rights
 Understanding the Complaint Process
 Complainants need to know how to complain and to whom
 Confidence that their complaint will be addressed timely
 Transparency and Explainability
 Complainants should understand how decisions about them
were made
 Information on the role and operation of AI in decision-making
 Access to Data
 Details on AI design and testing
 Information on intended and actual AI operation in specific
cases
 Role of human decision-making or oversight
Expertise in AI and human rights
 Remedy Providers
 Courts
 Governmental mechanisms (regulators, ombudspersons,
complaints processes)
 Non-governmental mechanisms (corporate remediation
processes)
 UN Guiding Principles Recommendations
 Businesses should establish or participate in effective
operational-level grievance mechanisms
 Characteristics of Effective Mechanisms
 Legitimate (enabling trust)
 Accessible
 Predictable
 Equitable
 Transparent
Cross-cutting regulation
 Expected Challenges in the Field
 Numerous challenges anticipated in the coming years
 Guiding Principle
 Provision of an effective right to remedy
 Includes addressing breaches of human rights responsibilities
Resources for human rights bodies
 Effective Practical Steps Needed
 Companies must prioritize human rights in AI governance
 Governments should implement regulations to protect human
rights
 International organizations need to set global standards
 Civil society must advocate for human rights in AI policies
 Investors should support ethical AI initiatives
Incentivize beneficial AI
development
 Importance of Human Rights in AI
 AI is reshaping human experience
 Human rights should be central to AI governance
 Benefits of Human Rights-Based AI Governance
 Nothing to fear from this approach
 Much to gain by taking human rights as the baseline
Harmonize international
understanding
 Ignoring Human Rights Undermines Established Norms
 Liberty, fairness, and equality are compromised
 Accountability processes are disregarded
 Creation of Confusing Alternatives
 Inadequate substitutes to existing norms
 Duplication of efforts in norm development
 Implementation and Remedy Issues
 Processes for norm implementation are duplicated
 Remedies for breaches are inadequately addressed
Multi-stakeholder forum
 Promote AI Ethics
 Ensure ethical practices in AI development and deployment
 Responsible Business Agendas
 Encourage businesses to adopt responsible practices
 Complementary Role of Human Rights Frameworks
 Acknowledge the importance of existing human rights
frameworks
UN alignment with human rights
 Champion holistic commitment to human rights
 Ensure top-level organizational support
 Promote adherence to all human rights standards
 Enable change in corporate mindset
 View human rights as a useful tool
 Avoid seeing human rights as a constraint on innovation
Next
 AI and Legal Education Session 34
Artificial Intelligence, Law and Justice
Session 34
AI in Legal Education
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and
Law
NALSAR University of Law
Recap
• In the last session we discussed how AI can impact human rights, both positively and negatively.
• We also pointed out that some AI innovations may be banned, or their use restricted, on account of their implications for human rights.
• It was further emphasized that human rights should have a role in the governance of AI.
Balancing Progress and
Responsibility
⚫Transformative Impact of AI in Legal Education
⚫Revolutionizes the way law is taught, practiced, and
understood
⚫Streamlines research and enhances access to legal
resources
⚫Provides novel solutions to complex legal challenges
⚫Ethical and Sustainability Challenges
⚫Ethical dilemmas linked to AI adoption in legal education
⚫Need for rigorous investigation of these dilemmas
⚫Multi-faceted Sustainability Considerations
⚫Goes beyond environmental aspects
⚫Includes equity, ethics, and long-term viability of AI-driven
practices
Ethical Standards and Sustainability
⚫Ethical Standards in Sustainability
⚫Fostering a protective relationship toward the
environment and humanity
⚫Considering long-term ethical implications of
actions and policies
⚫Complexity of Sustainability Ethics
⚫Beyond protecting the environment or reducing
resource consumption
⚫Intersection with various domains like
technology, education, and law
⚫Importance in AI Adoption
⚫Decisions made today impact future
generations
Long-term Implications of AI
Adoption
⚫Impact on Future Legal Professionals
⚫Long-term effects on privacy and equity
⚫Increased prevalence of AI in legal education and practice
⚫Need for Sustainability Ethics
⚫Embedding ethics in institutional decision-making
⚫Influence on curriculum development and resource allocation
⚫Ethical Implications of AI Adoption
⚫Potential biases in AI algorithms
⚫Strategies to promote fairness and inclusivity
⚫Resource Allocation Strategies
⚫Optimizing investments in AI
⚫Sustainable Faculty Training
Importance of Sustainability in Legal
Education
⚫Importance of Sustainability in Legal Education
⚫Meeting present needs without compromising future
generations
⚫Preparing lawyers to address environmental issues
⚫Incorporating Environmental Sustainability in Law Curriculum
⚫Highlighted in universities in England
⚫Growing importance of environmental sustainability
issues
⚫Need for legal professionals to be well-versed in this field
⚫Environmental Law as a Specialized Field
⚫Focuses on legal aspects of environmental protection
⚫Conservation and sustainable resource management
⚫Law schools offer courses and programs in environmental
law
Role of Legal Education in Advancing
Sustainability
⚫Significance of Sustainability in Legal Education
⚫Advancing sustainability principles across social, economic, and cultural dimensions
⚫Law Schools and Ethical Decision-Making
⚫Sustainability as a framework for guiding ethical decisions
⚫Application across various legal disciplines, not just environmental law
⚫Global Goal of Sustainable Development
⚫Legal professionals shaping policies and regulations
⚫Inclusion of international law, trade law, and human rights law in coursework
⚫Impact of Businesses on Sustainability
⚫Emerging Trends in Legal Profession
Sustainability in Legal Operations
⚫Green Initiatives and Carbon Footprint Reduction
⚫Emphasis on sustainability in legal operations
⚫Promoting Social Responsibility and Ethical Governance
⚫Growing importance in legal practices
⚫Empowering Future Lawyers
⚫Advocacy for sustainability and environmental protection
⚫Environmental Law Clinics and Advocacy Programs
⚫Practical experience for students in environmental causes
⚫Lawyers' Role in Sustainability
⚫Drafting, interpreting, and enforcing sustainability
regulations
⚫Legal Education for Government and Policy Advising
⚫Preparing students for roles in government agencies and
as legal advisers
Interdisciplinary Collaboration for
Sustainability
⚫Importance of Interdisciplinary Collaboration
⚫Engages students through practical case studies and
experiential learning
⚫Encourages collaboration with fields like environmental
science, engineering, and economics
⚫Ethical Dimensions of Sustainability
⚫Involves questions of equity, justice, and future well-being
⚫Encourages students to consider ethical aspects in their
practice
⚫Contribution of Legal Academics and Researchers
⚫Conduct research on legal frameworks, case studies, and
policy evaluations
⚫Significance of Sustainability in Legal Education
⚫Equips future lawyers to address environmental, social,
and economic challenges
⚫Promotes engagement with broader societal issues
Integration of AI in Legal Curricula
⚫Integration of AI in Legal Curriculum
⚫Courses on AI and the law
⚫Legal tech and data privacy
⚫AI-Assisted Legal Research
⚫Innovative tools for research
⚫Practice Simulations
⚫Legal Tech Competitions
⚫Ethical Considerations
⚫Research Opportunities
⚫AI in Legal Clinics
⚫Continuing Legal Education
⚫AI and Access to Justice
AI-Assisted Legal Research and
Practice Simulations
⚫AI Tools in Legal Education
⚫Law schools provide training on AI tools for research and
document review
⚫Incorporation of AI-based practice simulations in
programs
⚫Competitions related to legal technology and AI foster
innovation
⚫Ethical and Regulatory Concerns
⚫Addressing ethical implications of AI in legal practice
⚫Guidance on navigating AI-related challenges
⚫Research and Practical Exposure
⚫Opportunities for AI-related research in law schools
⚫Legal clinics using AI tools to improve efficiency and
access
⚫Continuing Legal Education (CLE)
⚫Courses on AI relevance to legal practice
Ethical Considerations in AI Adoption
⚫Benefits of AI in Legal Education
⚫Improves efficiency in legal tasks
⚫Assists in complex legal tasks
⚫Concerns About AI Reliance
⚫Potential compromise of essential legal skills
⚫Critical thinking and analytical reasoning
development
⚫Governance and Oversight Challenges
⚫Adapting to unknown challenges introduced by AI
⚫Ensuring effective human oversight mechanisms
⚫Need for Balance in Legal Education
⚫Ensuring students develop essential legal skills
⚫Careful integration of AI tools
AI and Access to Justice
⚫AI and Legal Education
⚫AI raises questions about ethics and regulation
⚫Legal education can help develop policies and guidelines
⚫Enhancing Access to Justice
⚫AI automates routine legal tasks
⚫Provides affordable legal assistance
⚫Adaptation and Assimilation
⚫AI technologies are becoming relevant to the legal
profession
⚫Legal education institutions are incorporating AI-related
topics
⚫Fostering Innovation and Improving Legal Services
⚫Ensures law students and lawyers are prepared for the
changing landscape
⚫Addresses ethical and regulatory challenges associated
with AI
Trends in AI Integration
⚫Adoption of AI Technologies in Legal Education
⚫33% of institutions regularly use generative AI
tools
⚫AI-Driven Research Tools and Virtual Assistants
⚫Support for student activities and enhanced learning
platforms
⚫Collaborations with Legal Tech Companies
⚫Incorporation of AI-driven solutions into curriculum
and research
⚫Access to cutting-edge AI technologies and expertise
⚫AI-Powered Adaptive Learning Platforms
⚫Personalised learning experiences tailored to individual
needs
⚫Use of ML algorithms to analyse student performance
data
Promising AI Tools and Platforms
⚫AI-Powered Research Platforms
⚫Westlaw Edge and Lex Machina scan vast legal databases
⚫Identify relevant cases and statutes with high accuracy
⚫Document Review Tools
⚫Kira and eDiscovery sift through large amounts of
documentation
⚫Highlight key clauses and extract vital information
⚫Identify inconsistencies
⚫Alternative Dispute Resolution (ADR) Online Platforms
⚫Modria simulates online negotiation environments
⚫Personalised Learning Platforms
⚫Virtual Assistants
⚫Impact on Legal Profession
Equitable Access to AI Tools
⚫Equitable Access to AI Technologies
⚫Limited access to technology
⚫Digital literacy barriers
⚫Potential to exacerbate existing disparities
⚫Data Privacy and Security Concerns
⚫Need for robust safeguards
⚫Protection of sensitive student data
⚫Adherence to data protection regulations
⚫Ethical Considerations
⚫Algorithmic bias
⚫Transparency and accountability
⚫Ethical use of AI-generated content
AI's Potential in Legal Education
⚫Immersive Learning Experiences
⚫AI-powered simulations and case studies
⚫Interactive tutorials enhancing learning
⚫Fostering Critical Thinking Skills
⚫Real-time feedback to students
⚫Streamlining Administrative Tasks
⚫Grading and document review
⚫Research assistance
⚫Innovation in Teaching Methods
⚫Experimentation with new pedagogical approaches
⚫Interdisciplinary Collaboration
⚫Opportunities for collaboration between legal and AI fields
Sustainability Ethics in Legal
Education
⚫Principles and Values of Sustainability Ethics
⚫Promotes long-term well-being and equity
⚫Encourages environmental stewardship for present and future generations
⚫Role in Legal Education
⚫Shapes curriculum and institutional practices
⚫Defines ethical responsibilities of law schools
⚫Challenges in AI and Data Management
⚫Collection and processing of sensitive data
⚫Risks of data breaches and unauthorized access
⚫Legal, ethical, and reputational consequences
⚫Data Protection Regulations
⚫Biases in AI Tools
Balancing AI Implementation with
Sustainability Initiatives
⚫Integrating Environmental Law and Sustainability Principles
⚫Equip students with knowledge and skills to address
environmental challenges
⚫Advocate for sustainable practices in legal practice
⚫Balancing Costs of AI Implementation
⚫Complex task to balance with other sustainability
initiatives
⚫Reducing energy consumption and supporting diversity
and inclusion
⚫Preparing Faculty and Staff
⚫Resource intensive and time consuming
⚫Sustainable professional development programmes needed
⚫Integrating Ethical Considerations
⚫Prepare students for complex ethical dilemmas in legal practice
⚫Embedding Sustainability Ethics
Promoting Diversity and Inclusion
in AI Adoption
⚫Promoting Diversity and Inclusion
⚫Strive for diversity in law schools
⚫Ensure access to legal education for all students
⚫Creating Inclusive Learning Environments
⚫Provide financial aid and support services
⚫Address systemic barriers to participation
⚫Challenges of AI Adoption
⚫Exacerbation of the digital divide
⚫Ensure equal access to AI-driven resources
⚫Ethical Implications of AI
⚫Align AI use with ethical principles
Ensuring Equitable Access to AI-
Driven Tools
⚫Link between Equity and Sustainability in AI
⚫Equitable access contributes to the long-term viability of AI
⚫Without equity, AI implementation could worsen existing inequalities
⚫Impact on Marginalised Groups
⚫Disproportionate access to AI resources could widen the educational gap
⚫Essential for producing competent and socially responsible legal professionals
⚫Ensuring Equal Access
⚫Maintains long-term relevance and fairness of legal education
⚫Fosters a legal profession that serves all segments of society
⚫Sustainability Ethics in Legal Education
⚫Promotes environmental sustainability, social justice, and ethical decision-making
⚫Comprehensive AI Strategies
Theoretical Framework for Ethical
AI Adoption
⚫Importance of Ethical AI in Legal Education
⚫Connections among sustainability, ethics, and AI
⚫Ethical questions related to bias, fairness, and transparency
⚫Sustainable AI Practices
⚫Discussions on ethical implications in decision-making
⚫Focus on sustainability and environmental justice
⚫Integrating Sustainability and AI Ethics
⚫Cultivating ethical awareness and social responsibility
⚫Promoting environmental stewardship among students and practitioners
⚫Sustainability Ethics Principles
⚫Promoting long-term well-being of individuals, communities, and the environment
⚫AI Ethics Principles
Tension Between AI Ethics and
Sustainability
⚫Conventional AI Ethics
⚫Focuses on fairness, transparency, accountability, and
privacy
⚫Addresses issues like algorithmic bias and discrimination
⚫Aims to respect fundamental human rights and values
⚫Sustainability in AI Development
⚫Considers environmental impact of AI technologies
⚫Focuses on energy consumption, carbon emissions, and
resource depletion
⚫Aims to minimize environmental footprint throughout AI
system lifecycle
⚫Potential Conflict Between Ethics and Sustainability
⚫Ethical considerations may conflict with sustainability
goals
⚫Data protection and transparency may not always align with energy efficiency and waste reduction goals (see the illustrative estimate below)
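A back-of-the-envelope sketch of the environmental accounting implied above: energy use is estimated as hardware power multiplied by hours of use, and emissions as energy multiplied by the grid's carbon intensity. Every number below is an illustrative assumption, not a measurement of any real system.

```python
# Illustrative assumptions (not real measurements).
gpu_count = 8
gpu_power_kw = 0.4                # average draw per GPU, in kilowatts
training_hours = 240
grid_intensity_kg_per_kwh = 0.7   # kg of CO2 emitted per kWh of electricity

energy_kwh = gpu_count * gpu_power_kw * training_hours
emissions_kg = energy_kwh * grid_intensity_kg_per_kwh

print(f"Estimated energy use: {energy_kwh:.0f} kWh")       # 768 kWh
print(f"Estimated emissions: {emissions_kg:.0f} kg CO2")    # ~538 kg CO2
```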
Data Privacy as a Sustainability
Issue
⚫Importance of Evaluating AI's Impact
⚫Environmental impact of technologies
⚫Influence on societal structures and individual freedoms
⚫Data Privacy as a Sustainability Issue
⚫Long-term implications of data usage
⚫Consequences of failing to protect data privacy
⚫Intersection of Social Justice and Environmental Sustainability
⚫Equity in access to technology
⚫Long-term societal impacts of AI
⚫Role of Data Privacy in Promoting Social Justice
⚫Marginalised groups affected by data misuse
⚫Ethical Implications of Data Usage in AI
Sustainable AI Development
Strategies
⚫Interconnected Nature of Sustainability Issues
⚫Environmental, social, and economic aspects
⚫Impact on legal education and societal
dimensions
⚫Decreasing Environmental Footprint
⚫Promoting energy efficiency
⚫Leveraging AI for sustainable development
⚫Social and Economic Sustainability
⚫Addressing digital divides
⚫Promoting equitable access to AI technologies
Ethical Considerations in AI-Driven
Pedagogy
⚫Pedagogical Impact
⚫Influence on teaching methods
⚫Changes in assessment practices
⚫Effects on student engagement
⚫Privacy Concerns
⚫Transparency of AI algorithms
⚫Protection of student data
⚫Equity Issues
⚫Potential to perpetuate inequalities
⚫Access to education and resources
⚫Sustainability Ethics
⚫Ethical AI Practices
Push and Pull Factors in AI
Adoption
⚫Importance of AI, Sustainability, and Law
⚫Law schools need to educate students on AI law, ethics,
and sustainability
⚫Knowledge and skills in AI are increasingly valued by legal
employers
⚫Push and Pull Factors for Law Schools
⚫Universities prioritize sustainability objectives
⚫Law firms value AI literacy in graduates
⚫Limited budgets and infrastructure can hinder AI adoption
⚫Influence of External Factors
⚫Centralized decisions on cloud computing resources
⚫Requirements from legal professional bodies
⚫Strategies for Responsible AI Integration
⚫Collaboration and strategic leadership
Strategies for Sustainable AI
Integration
⚫Holistic and Coordinated Approach
⚫Involves administrators, faculty, students, and external partners
⚫Requires continuous monitoring and adaptation
⚫Promoting Ethical AI Practices
⚫Aligns with sustainability goals and values
⚫Addressing AI Biases
⚫Investigates sustainability implications
⚫Mitigating the Digital Divide
⚫Sustainable Faculty Development
⚫Addressing Evolving Regulations
⚫Data Privacy and Security
⚫Long-term Viability of AI Solutions
Literature
⚫Anil Balan, AI and Legal Education, Routledge (2025)
Next
⚫AI and Constitution (Part 1 of 2)
Session 35
AI and Constitution-Part-I
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and
Law
NALSAR University of Law
Recap
 In the last session we discussed AI and legal education.
 We pointed out how AI can impact legal education, raised issues concerning AI and sustainability, and noted the need to incorporate such concerns into legal education.
 We also discussed how AI may exacerbate current inequities in access to legal education.
Constitutional Law and Power
 Definition of Power
 Ability of an actor to direct the behavior of another actor
 Regulation of Power by Constitutional Law
 Limits the ability to exercise power
 Historical Regulation of Power
 Focused on actors with governmental functions (king,
nobility, clergy)
 Balanced prerogatives of different social classes
 Evolution of Power Limitation
 From the seventeenth century, included limitations with respect to individuals
 Emergence of the concept of 'individual rights'
 Philosophical Contributions
 Locke's idea of 'inalienable rights' by natural law
Constitutional Norms
 Purpose of Constitutional Law
 Guarantees stable and long-lasting equilibrium
 Uses relative rigidity of norms
 Nature of Constitutional Equilibrium
 Not absolute, but intrinsically relative
 Impact of Societal Developments
 Pressure can cause collapse of balance
 Leads to inevitable changes in landscape
 But counterbalancing mechanisms exist, such as the constitutional scheme of separation of powers
Historical Context of Technological
Impact
 Scientific Revolution and Enlightenment
 Laid foundations for Enlightenment
 Transition from man-subject to man-citizen
 Second Generation of Rights
 Twentieth-century emergence
 Recognition of workers' and social rights
 Impact of industrial revolutions
 Third Generation of Rights
 Second half of the twentieth century
 Role of scientific developments
 Right to a healthy environment
 Digital Revolution
State's Role in Digital Age
 Constitutional Law and Balance of Power
 Aims to balance powers within a community
 Strives for equilibrium among societal actors
 Role of the State in Constitutional Systems
 Framework for institutional powers
 Intermediary between social classes
 Dominant actor and ultimate power holder
 Key guarantor of fundamental rights
Emergence of New Powerful Actors
 State's Reinforcement of Power
 Utilizes sophisticated technologies
 Monitors individuals' digital lives
 Emergence of New Powerful Actors
 Global nature of virtual space
 Multinational tech companies' control
 Impact on Digital Selves
 Tech companies shape our digital
identities
Mass Surveillance and
Technological Developments
 Development of Digital Technology
 Key factor in societal changes
 Influences various aspects of life
 Vicious Circle Effect
 People's behavior influenced by
technology
 Technology evolves based on user
behavior
Constitutional Equilibrium and
State Surveillance
 Enhanced Surveillance Capabilities
 Intelligence services use sophisticated systems
 Automatic collection and analysis of
communications
 Constitutional Equilibrium and Surveillance
 State monitoring of citizens' behavior
 Secrecy of spyware tools and their use
 Restriction of Individual Rights
 Challenges of State Surveillance
 Impact of Digital Advancements
Private Corporations and
Fundamental Rights
 Technology Companies as Peers to States
 Massive income generated by multinationals
 Influence on individuals' lives globally
 Power of Private Corporations
 Regulate use of digital technology instruments
 Impact on fundamental rights
 Exercise of Fundamental Rights
 Accessing information
 Communicating
 Searching for jobs
 Organising protests
 Interference with Fundamental Rights
Digital Technology and
Constitutional Ecosystem
 Introduction of New Actors
 Digital revolution has introduced new actors besides
nation-states
 These actors play a dominant role in the constitutional ecosystem
 Subversion of Constitutional Equilibrium
 New actors subvert the existing constitutional balance
 Historical Dominance of the State
 The state historically became the main dominant actor within the
polity
 Modern constitutional law established ways to limit state power
 Guarantee of individual fundamental rights (vertical dimension)
 Legal Obligations
 Legal obligation to respect individual rights binds only the state
 Private entities are not directly subject to these obligations
Historical Evolution of Rights
 Traditional Notion of Constitution
 Focused on the organization of society
 Concerned with the relationship between power holders
 Emergence of Natural Rights
 Started at the end of the eighteenth century
 Individuals enjoy a series of natural rights
 Horizontal Limitation of Power
 Traditional concept in constitutional law
 Vertical Limitation of Power
 Aimed to restrict the power of dominant societal actors
 New conception embraced by constitutional law
Constitutional Norms and Digital
Technology
 Expansion of Information Transmission
 Digital technology enhances the exchange of information
 Fundamental rights like freedom of expression and
religious freedom are enhanced
 Positive Transformation with Challenges
 Constitutional norms face a new societal landscape
 Broad or evolutionary interpretation may be needed
 Example of Freedom of Expression
 Internet communication can be protected under freedom
of expression
 Norms may not mention new instruments but can still
apply
Access to Internet as a
Fundamental Right
 Digital Technology and Fundamental Rights
 Essential for exercising many fundamental rights
 Raises questions about new ancillary rights
 Challenges in the Constitutional System
 Material, economic, or legal obstacles to technology use
 Issues in factual enjoyment of specific rights
 Central Dilemma
 Should Internet access be a fundamental right?
 Is analogue exercise of rights still valid?
 Constitutional Quandaries
 Existing norms may not provide explicit solutions
New Threats to Fundamental
Rights
 Connection Between Freedoms and Risks
 Digital technology enhances freedoms
 Increased risks to fundamental rights
 Threats from Digital Instruments
 Defamation and hate speech
 Disinformation and cyberbullying
 Child pornography
 Impact on Fundamental Rights
 Human dignity
 Equality and non-discrimination
 Protection of the child
 Scale of the Phenomenon
Monitoring and Data Collection
 Blocking or Limiting Transmission
 Can violate rights based on information exchange
 Affects freedom of expression, information, and
association
 Monitoring Transmitted Information
 Involves confidential communications and personal data
 Unauthorized access can violate privacy and data
protection rights
 Registering Information Related to Transmission
 Involves collecting when, who, where, and how of
transmission
 Impacts the personal sphere of individuals
Increased Risks and Power
Imbalance
 Increased Risks from Digital Technology
 Development of digital technology increases risks
 Enhanced power of the state and emergence of private
companies as dominant actors
 Rising Power of Dominant Actors
 Unbalanced power leads to increased risk of rights infringements
 Personal freedom of the individual remains the ultimate
fundamental right
 Constitutional Systems and Digital Technology
 Constitutional systems may still face changes from digital
technology
 Ultimate aim of constitutional law remains the same
 Specific principles for digital society may not be fully elaborated
or recognized
 Consequences of the Digital Revolution
 Constitutional systems may need to adapt principles
Factual Constitutional Equilibrium
 Constitutional Equilibrium
 Not a permanent condition
 Based on present and past community conditions
 Societal Developments
 Constantly change the landscape
 Transformations could have various outcomes
 Protection of Original Values
 Constitutional law may protect original values
 Norms may need to 'stretch' to fit new conditions
 Inability to Adapt
 Constitutional law may fail to adapt to new societal
scenarios
 Norms may become outdated
Constitutional Vacuum and
Disequilibrium
 Emergence of Vacuum in Constitutional Law
 General values and principles not articulated
in new societal context
 Inability to guide societal actors
 Violation of Ultimate Values and Principles
 Protected values and principles violated in
new context
 Danger to Factual Constitutional Equilibrium
 Unbalanced outcomes within the community
 Norms not matching societal reality
Nature of Constitutional Systems
 Constitutional Law's Aim
 Seeks to achieve equilibrium
 Rebalances when disequilibrium occurs
 Normative Imperative
 Constitutional law must adapt to societal changes
 Historical moments shape constitutional law
 Homeostatic Nature
 System counteracts to restore balance
 Amendments and integrations are necessary
 Analogy with Polanyi's 'Double Movement'
Generations of Rights
 First Generation of Rights
 Emergence in the late 18th century
 Response to oppression by privileged classes
 Includes civil rights like freedom of thought, press,
religion
 Political rights such as the right to vote
 Second Generation of Rights
 Developed during the 20th century
 Result of industrial revolution and societal changes
 Includes social rights like the right to work, strike, and
education
 Public health-care rights
 Third Generation of Rights
 Link Between Societal and Constitutional Change
New Constitutional Moment
 Ongoing Transformations in Society
 Challenging existing constitutional law apparatuses
 Prompted by the digital revolution
 Impact on Relationships and Society
 Changes in our relationships with others
 Societal changes under constitutional norms shaped for
'analogue' communities
 Emergence of Constitutional Counteractions
 Constitutional ecosystem is not inert
 New constitutional moment observed
 Concept of Constitutional Moment
 Originally coined by Bruce Ackerman
 International and Digital Context
Targeted Transformations
 Digital Technology's Impact on Constitutional Equilibrium
 Reinforces state power to control digital lives
 Promotes tech multinationals as dominant actors
 Enhances fundamental rights through information
exchange
 Increases risk of individual rights violations
 Constitutional Ecosystem's Reaction
 Restores balance of powers
 Protects individual rights
 Transformation of Constitutional Settings
 Adapting to digital age challenges
 Series of targeted transformations
Introduction to AI and Democracy
 Core Elements of Western Liberal Constitutions
 Human rights, democracy, and the rule of law
 Supreme law of the land
 Impact of AI on Society
 AI's pervasiveness in modern societies
 Governance of core societal functions
 Need for AI to support constitutional principles
 Principle of 'Rule of Law, Democracy, and Human Rights by
Design'
 Binding new technology to constitutional principles
 Addressing the absence of legal framing in the Internet
economy
 Preventing dangers to democracy
 Power Concentration of Internet Giants
Influence of digital giants
 AI and the Digital Internet Economy
 AI as an add-on to the digital economy
 Potential dominance of AI in the future
 Internet and AI Development
 Difference between Internet as a technical structure and its
usage
 AI's theoretical potential vs. actual development context
 Role of Mega Corporations
 Influence of Google, Facebook, Microsoft, Apple, and Amazon
 Economic power and stock market valuations
 Impact on Society
 Influence on governments, legislators, civil society, and education
 Holistic Analysis Required
Four sources of digital power
 Accumulation of Digital Power
 Shapes AI development and deployment
 Influences debate on AI regulation
 Four Sources of Power
 Key factors in AI's impact
Financial influence
 Financial Power
 Deep pockets allow significant investments in
politics and markets
 Political and Societal Influence
 Ability to invest heavily in political and
societal spheres
 Acquisition of New Ideas
 Capacity to buy up new ideas and start-ups in
AI and other areas
Control over public discourse
 Control of Digital Environment
 Corporations control infrastructures of public discourse
 Decisive for elections
 Reliance on Internet Services
 Essential for candidates in democratic processes
 Main source of political information for citizens, especially
younger generation
 Impact on Journalism
 Detriment to the Fourth Estate, classic journalist
publications
 Targeted advertising business model drains journalism
 80% of new advertisement revenue concentrated in
Google and Facebook
Data collection and profiling
 Collection of Personal Data for Profit
 Mega corporations gather data based on online
and offline behavior
 They know more about individuals than their
friends or themselves
 Use of Data for Various Purposes
 Information is used for profit, surveillance, and
security
 Data is also utilized in election campaigns
 Centralization of Power
 Claim to empower people while centralizing power
 The Cloud and profiling are key tools for this
centralization
Dominance in AI innovation
 Corporations' Role in AI Development
 Dominating development and systems integration into
AI services
 Basic AI research may be publicly accessible
 Resource Intensive Work
 Systems integration and AI applications for commercial
use
 Conducted in a black box with significant resources
 Impact on Public Investments
 Resources surpass public investments in similar research
 Strengthening Dominance
 Combining profiling, information collection, and AI
optimization
 Extending dominance to new areas of activity
Challenges to fundamental rights
and rule of law
 Challenges to Fundamental Rights and Rule of
Law
 AI poses numerous challenges to fundamental
rights
 AI impacts the rule of law significantly
 Need for Strong Rules
 AI cannot serve the public good without strong
regulations
 Potential capabilities of AI necessitate strict rules
 Risks of Repeating Past Mistakes
 Risk-taking in the Internet age led to lawlessness
 AI capabilities can cause major and irreversible
damage to society
Need for precautionary legal
framework
 Catastrophic Impacts of AI
 Potential for more severe consequences than
the unregulated Internet
 Need for Precautionary Measures
 Importance of a legal framework to safeguard
public interest
 Experience with the Internet
 Lessons learned from the unregulated growth of
the Internet
 Potential Capabilities of AI
 AI's widespread use and advanced capabilities
Next
 The Next Session will be Part 2 of AI
and Constitution
Artificial Intelligence, Law and Justice
Session 36
AI and Constitution-Part-II
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
⚫In the last session we looked at how AI can impact the constitution and constitutional values. We highlighted the power of digital technology companies and their control.
⚫We also touched upon the need to regulate technology
and why constitutional values should not be subverted by
technology
Historical Context of
Technology Regulation
⚫Historical Context of Technology Regulation
⚫Architects must learn and follow building
codes
⚫Building codes ensure public safety by
preventing collapses
⚫Automobile Safety Regulations
⚫Cars must undergo type approval for safety
reasons
⚫Seatbelt laws significantly reduced traffic
deaths
⚫Law vs. Absence of Law
⚫Law serves the public interest in critical
technology
⚫Absence of law can lead to public harm
Existing Ethical Principles & Codes
⚫Challenges AI Poses
⚫Impacts rule of law
⚫Affects democracy
⚫Threatens individual rights
⚫Catalogues of Ethics Rules
⚫10 codes of ethics identified by Alan Winfield
⚫Statement by European Group on Ethics in Science and
New Technologies (9 March 2018)
⚫Paper by French Data Protection Authority CNIL (15
December 2017)
Principle of Essentiality in Legislation
⚫Principle of Essentiality
⚫Guides legislation in constitutional democracies
⚫Prescribes handling of essential matters by democratically
legitimized law
⚫Essential Matters
⚫Concerns fundamental rights of individuals
⚫Important to the state
Risks of Unregulated AI
⚫Risk of Unregulated AI
⚫Potential for substantial negative impacts
⚫Comparison with the lawless Internet experience
⚫AI Development and Deployment
⚫Controlled by powerful Internet technology
corporations
⚫Contrasts with the Internet's academic and idealist
origins
Impact of GDPR on Innovation
⚫Greening of Industry
⚫Environmental protection legislation
incentivized innovation
⚫Focus on environmental sustainability
⚫GDPR and Data Protection
⚫EU's General Data Protection Regulation
(GDPR)
⚫Drives innovation in data collection and
processing
⚫Respects individual rights and privacy
⚫Importance of privacy in democracy
Technology neutral law and GDPR
⚫Technology Neutral Law
⚫Applies to various technological
advancements
⚫Remains relevant despite rapid tech
changes
⚫GDPR as a Modern Example
⚫Adapts to new technologies, including AI
⚫Ensures continued legal relevance
⚫US and Europe Implementation
⚫Shows effectiveness of technology neutral
law
⚫Disproves claim that law lags behind
technology
Personal Data & AI
⚫AI and Personal Data
⚫AI algorithms can track and identify individuals online and
offline
⚫Personal data definition may change with AI advancements
⚫AI Tool by Facebook
⚫An AI analysis tool that tracks 5,000 points per human body in movement
⚫Developed under Facebook's Paris-based Machine Learning Research Director Antoine Bordes
⚫Applications in Fashion Industry
⚫Tool can match body forms and fashion
⚫Privacy Concerns
⚫Tool may identify individuals from the back
⚫Body form patterns could be considered personal data
Enforcement & Democratic Legitimacy
⚫Law and Technological Development
⚫Corporations and neo-liberals prefer no legal obligations
⚫Law holds entities accountable through enforcement
⚫Ethics Code vs. Law
⚫Ethics codes lack democratic legitimacy
⚫Ethics codes cannot be enforced
⚫Advantages of Law
⚫Law has democratic legitimacy
⚫Law can be enforced against powerful corporations
⚫Creates a level playing field beneficial to all
⚫Provides orientation and incentives for innovation towards the public interest
Relationship Between Technology &
Law
⚫Technology and Law Interdependence
⚫Technology shapes the law
⚫Law shapes technology
⚫Necessity for Regulation
⚫Silicon Valley and digital Internet industry must accept legal
regulation
⚫Regulation is essential for democracy
⚫Impact of Internet and AI
⚫Internet and AI are becoming all-pervasive
⚫Lack of regulation could threaten democracy
Importance of Democratic Regulation
⚫Democracy's Role in Regulation
⚫Democracy must not abdicate its responsibilities
⚫Particularly important in times of pressure from populists
and dictatorships
⚫Technology and Democratic Rules
⚫All technologies, including the Internet and AI, must be
subject to democratic laws
⚫Ensures technology does not become all-pervasive and
powerful without oversight
Role of Ethics in Technology
⚫Role of Ethics Rules in Technology
⚫Can serve as precursors to legal rules
⚫Provide orientation on potential legal content
⚫Limitations of Ethics Rules
⚫Lack democratic legitimacy
⚫Do not have binding nature
⚫Cannot be enforced by government or judiciary
Principle of Essentiality in AI Law
⚫Guiding Principle
⚫Essentiality principle helps in decision-making for AI
laws
⚫Identifying AI Challenges
⚫Focus on challenges affecting fundamental rights or
state interests
⚫Existing Laws
⚫Check if current laws can address AI challenges
⚫Assess sufficiency and proportionality of existing laws
⚫New Law Consideration
⚫Determine the need for new laws based on existing
law's scope and problem-solving ability
Transparency in Political
Discourse
⚫Importance of Identifying Human vs. Machine in Political
Discourse
⚫Ensures integrity of democratic discussions
⚫Prevents distortion of discourse
⚫Lack of Legal Framework
⚫No current laws mandate disclosure of machine
participation
⚫Need for Legal Transparency
⚫Law should ensure transparency in political dialogue
⚫Sanctions for intransparent machine speech and
impersonation
⚫Responsibility of Infrastructure Maintainers
⚫Ensure full transparency regarding machine speech
Technology Impact Assessment
⚫Return to Old Principles
⚫Apply the most recent state of the art of technology impact
assessment systematically to AI
⚫Renaissance of Technology Impact Assessment
⚫Good tradition of parliaments in Europe and the US since
the 1970s
⚫Increased dialogue between democracy and technology
⚫Instilling a Culture of Responsibility
⚫Obligatory and flexible approach
⚫Encompasses any new technology developments
Parliamentary Technology
Impact Assessment
⚫Purpose of Technology Impact Assessment
⚫Evaluate essential interests affected by technology
⚫Determine necessary legislation to protect public interest
⚫Timing of Assessment
⚫Ideally conducted before deployment of high-risk
technologies
⚫Decision Makers
⚫Governments and legislators
⚫EU level: Commission, Council, and Parliament as co-
legislators
Developer & User Level Impact
Assessment
⚫Obligation of Impact Assessment
⚫Extend by law to all aspects of democracy, rule of law, and
fundamental rights
⚫Applicable when AI is used in public power, democratic and
political sphere, or general services
⚫Importance of Impact Assessments
⚫Enhance public knowledge and understanding of AI
⚫Address the current lack of transparency in AI capabilities
and impacts
⚫Benefits for Corporations and Developers
⚫Encourage responsibility among leaders and engineers
⚫Promote a culture of responsibility in technology
Public Transparency & Risk Mitigation
⚫Legal Standards for AI Impact Assessment
⚫Must be set in law before public release or marketing
⚫Similar to GDPR standards for data protection impact
assessments
⚫Compliance and Enforcement
⚫Controlled by public authorities
⚫Non-compliance subject to deterrent sanctions
⚫Public and High-Risk AI Use
⚫Impact assessment must be public
⚫Public authorities to conduct complementary assessments
⚫Risk reduction and mitigation plan required
⚫EU Agency and Legal Framework
⚫Ex ante certification and registration
Individual Rights to AI Explanations
⚫Right to Explanation of AI
⚫Individuals should have a legal right to understand AI
functions
⚫Explanation should cover AI logic and its impact on
individuals
⚫Impact on Individuals
⚫AI use affects individual interests
⚫Right to information even if AI does not process
personal data
⚫Existing GDPR Rights
⚫GDPR already provides right to information for
personal data processing
AI & Constitutional Democracy
⚫AI's Role in Decision Making
⚫AI will make or prepare decisions previously made by humans
⚫Decisions will follow certain rules
⚫AI as Law
⚫AI must be treated like the law itself?
⚫AI must be checked against higher laws and constitutional
democracy
⚫Legal Tests for AI
⚫AI must align with fundamental rights
⚫AI must not contradict democratic principles
⚫AI must comply with the rule of law
⚫Designing AI for Compliance
⚫Incorporate principles of democracy, rule of law, and
fundamental rights from the outset
Potential of AI
⚫Advancements in AI Systems
⚫OpenAI's ChatGPT has shown significant improvements
⚫Potential to interpret the Constitution objectively
⚫Challenges in Constitutional Law
⚫Human bias and subjectivity have been persistent issues
⚫AI could offer a neutral, data-driven approach
⚫Potential Benefits of AI Interpretation
⚫Neutral analysis of text, history, and precedent
⚫Resolution of contentious constitutional disputes
Misunderstanding AI & Constitutional
Interpretation
⚫AI's Limitations in Constitutional Interpretation
⚫AI tools like ChatGPT and Claude are powerful but cannot
replace human judgment
⚫Judges using AI will still face moral and political questions
⚫Conservation of Judgment in AI Usage
⚫AI shifts, disperses, or concentrates decision-making
⚫Decisions may be made implicitly rather than explicitly
⚫Judgment is transferred across different stages of the
decision-making process
Development of AI Systems
⚫Evolution of AI in Language Tasks
⚫Early systems relied on manually coded rules
⚫Struggled with complexity and nuance of human language
⚫Breakthrough with Transformer Architecture in 2017
⚫Improved understanding of context
 Analyzed relationships between words in a text (see the sketch below)
⚫Advancements Leading to Modern LLMs
⚫Training on massive datasets
⚫Increased computational power
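To make the idea of 'analyzing relationships between words' concrete, the following Python sketch shows a simplified form of self-attention. It is purely illustrative and is not the architecture of ChatGPT or any other product: the four-dimensional word vectors and the sample sentence are invented, and real transformer models learn their vectors and scoring functions from billions of documents and stack them over many layers.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy 4-dimensional vectors standing in for the words of a short sentence (values invented).
words = ["the", "soldier", "entered", "house"]
vectors = np.array([
    [0.1, 0.0, 0.2, 0.1],   # "the"
    [0.9, 0.1, 0.0, 0.3],   # "soldier"
    [0.2, 0.8, 0.1, 0.0],   # "entered"
    [0.7, 0.2, 0.1, 0.4],   # "house"
])

# Simplified self-attention: every word is scored against every other word,
# and the scores decide how much each word "attends to" the rest of the sentence.
scores = vectors @ vectors.T / np.sqrt(vectors.shape[1])
weights = softmax(scores)          # each row sums to 1
contextual = weights @ vectors     # each word becomes a weighted mix of the whole sentence

for word, row in zip(words, weights):
    print(word, np.round(row, 2))  # how strongly this word attends to each other word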
Breakthrough with Transformer
Architecture
⚫Release of ChatGPT 3.5 in Late 2022
⚫First AI system to reliably engage with text-based
tasks
⚫Trained on billions of documents including legal
texts, academic articles, and historical materials
⚫Learning to Detect Intricate Patterns
⚫Improved understanding of human language and
reasoning
⚫Earlier AI systems often produced nonsensical responses
⚫Emergence of Advanced Models
⚫GPT-4o/o1 and Claude 3.5 Sonnet
⚫Enhanced reasoning, problem-solving, and analytical
abilities
Example of Third Amendment
Interpretation
⚫Importance of Human Judgment
⚫Technical capabilities of AI are impressive
⚫Human judgment remains essential in legal decisions
⚫Example of Third Amendment Interpretation
⚫Third Amendment: No Soldier shall be quartered in any
house without consent
⚫ChatGPT's interpretation: Governors are not soldiers
⚫Claude's interpretation: Amendment applies to any
government officials
Interpretive Choices by AI
⚫Different Interpretations by AI Systems
⚫ChatGPT's literal approach
⚫Claude's purposive interpretation
⚫ChatGPT's Interpretation
⚫Reads 'soldier' as only members of the armed forces
⚫Claude's Interpretation
⚫Focuses on protecting private homes from government
intrusion
⚫Human Judges' Interpretive Choices
⚫Often made after careful deliberation
⚫Weigh various legal and policy considerations
Invisible Decision-Making by AI
⚫Complex Statistical Computations
⚫AI systems make choices invisibly
⚫Even AI experts don’t fully understand these
computations
⚫Perceived Objectivity
⚫Judges might think AI provides objective answers
⚫AI decisions are influenced by training data patterns
⚫Value-Laden Decisions
⚫AI makes numerous decisions behind the scenes
⚫These decisions are based on how questions are posed
Human Biases and Cognitive
Limitations
⚫Human Judges
⚫Possess biases and cognitive limitations
⚫True motives for decisions often opaque
⚫AI Decision-Makers
⚫May be superior for certain tasks
⚫Capabilities will improve with technology
⚫Comparative Choice
⚫Humans better for some tasks, AI for others
⚫Categories will evolve with technological advancements
⚫Limitations of AI
⚫Cannot eliminate need for moral and political judgment
⚫Reflects nature of constitutional interpretation
High-Stakes Constitutional Cases
Simulation
⚫Simulation of AI in Constitutional Cases
⚫ChatGPT and Claude were asked to decide on two major cases
 Cases: Dobbs v. Jackson Women's Health Organization and Students for Fair Admissions v. Harvard
⚫Different Interpretive Approaches
⚫AI asked to follow originalism or living constitutionalism
⚫Results varied based on interpretive method
⚫Adherence to Supreme Court Precedent
 Without a specific method, AI upheld abortion rights and affirmative action
⚫Responses to Interpretive Instructions
⚫As liberal living constitutionalists, AI upheld precedents
⚫As originalists, AI overruled precedents
⚫AI's Response to Counterarguments
AI Sycophancy
⚫AI Sycophancy
⚫AI systems tend to tell users what they want to hear
⚫Raises questions about their reliability in constitutional
cases
⚫Interpretive Approach
⚫AI adopts the interpretive approach it is instructed to use
⚫Reverses itself when presented with counterarguments
⚫Judicial Reliance
⚫Judges may find it difficult to rely on AI answers
⚫Moral and Political Judgment
⚫Framing questions for AI requires moral and political
judgment
⚫Similar to traditional constitutional interpretation
Research and Drafting Assistants
⚫AI as Research Assistants
⚫Quickly synthesizes large amounts of legal information
⚫Identifies relevant precedents
⚫Summarizes complex arguments
⚫AI for Resource-Constrained Lower Courts
⚫Valuable for judges with numerous pending motions
⚫Helps process hundreds of pages of briefs
⚫Assists in getting up to date on unfamiliar legal issues
Sounding Board for Judges
⚫AI as a Sounding Board
⚫Helps judges pressure-test their reasoning
⚫Identifies potential blind spots in decision-making
⚫Promotes Balanced Decision-Making
⚫Presents multiple perspectives on legal questions
⚫Surfaces counterarguments for thorough analysis
Routine Constitutional Questions
⚫AI's Role in Routine Constitutional Questions
⚫AI can help courts process cases more efficiently
⚫Requires careful oversight and robust frameworks
⚫Determining which cases qualify as 'routine' is crucial
Understanding AI Model Variations
⚫Importance of AI Literacy for Judges and Lawyers
⚫Essential for responsible use of LLMs
⚫Understanding variations in AI models
⚫Different AI Models and Their Conclusions
⚫Conclusions vary based on training data
⚫Technical architecture influences outcomes
Recognizing the Need for Human
Judgment
⚫Delegating Constitutional Decisions to AI
⚫Does not eliminate the need for moral and political judgment
⚫Shifts judgments to different stages of the process
⚫Stages of AI Constitutional Decision-Making
⚫Choosing which AI system to use
⚫Framing constitutional questions
⚫Providing interpretive instructions, if any
⚫Human Judgment in Constitutional Meaning
⚫Essential for resolving fundamental questions
⚫Crucial for the allocation of government power
Sensitivity to Question Framing
⚫AI Responses Sensitivity
⚫Highly influenced by question framing
⚫Context provided affects responses
⚫Randomness in AI Models
⚫Different answers to the same question
⚫Built-in randomness in systems
⚫Implicit Interpretive Choices
⚫Not always obvious to users
⚫Testing AI Responses
⚫Crucial for difficult and high-stakes questions
⚫Pose questions in multiple ways
 Consider counterarguments (see the sketch below)
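One simple way to carry out such testing is sketched below in Python, assuming a hypothetical ask_model function that stands in for whatever chat interface or API is actually being evaluated (it is not a real library call); the question wordings, temperatures and repeat counts are invented for illustration only.

# Hypothetical sketch: probe an AI assistant's sensitivity to framing and randomness.

FRAMINGS = [
    "Does the Third Amendment bar quartering of government officials in private homes?",
    "Reading the Third Amendment literally, who counts as a 'Soldier'?",
    "Reading the Third Amendment purposively, does it protect homes from this kind of government intrusion?",
]

def ask_model(question: str, temperature: float) -> str:
    """Placeholder for a call to the AI system under review (not a real API)."""
    raise NotImplementedError("Connect this function to the system being evaluated.")

def probe(framings, temperatures=(0.0, 0.7), repeats=3):
    """Collect answers across framings, sampling temperatures and repeats so a human
    reviewer can see whether the system's conclusion is stable or shifts."""
    results = {}
    for question in framings:
        for temp in temperatures:
            results[(question, temp)] = [ask_model(question, temp) for _ in range(repeats)]
    return results

# A reviewer would then read the collected answers side by side, noting where the
# model changes its conclusion depending on wording, follow-up counterarguments,
# or the randomness introduced by a higher sampling temperature.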
Competing Values and Interests
⚫Inherent Nature of Constitutional Interpretation
⚫Involves competing values and interests
⚫Requires weighing and balancing by human decision-
makers
⚫Judicial Decision-Making
⚫Judges may defer to the legislature
⚫Following original meaning is another approach
⚫Technological Advances
⚫Cannot transform constitutional questions into purely
objective inquiries
⚫Demonstrably correct answers are not possible
Frameworks for AI Integration
⚫Advancing AI Capabilities
⚫Continuous improvement in AI tools
⚫Challenges for Courts and Judges
⚫Developing frameworks for AI integration
⚫Maintaining human oversight on value choices
⚫Initial Applications
⚫Focus on enhancing judicial efficiency
⚫Improving analytical thoroughness
⚫Refining Best Practices
⚫Careful refinement before expanding AI use
Future of Constitutional Interpretation
⚫Complex Interplay Between Human and AI
⚫AI's impressive capabilities
⚫Limitations of AI
⚫Necessity of human judgment
⚫Harnessing AI in Judicial Decision-Making
⚫Balancing AI's strengths and weaknesses
⚫Ensuring accurate interpretation of legal documents
New Ideas and Contextualisation
⚫There are new ideas like Digital Constitutionalism and the use of AI to review/revise constitutions.
⚫AI and the Constitution have to be contextualized in the light of the Indian Constitution. We have not done that in this course but have cited a relevant work in the literature slide.
Literature (Selective)
⚫Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Phil. Trans. R. Soc. A 376: 20180089. http://dx.doi.org/10.1098/rsta.2018.0089
⚫April G. Dawson, Algorithmic Adjudication and Constitutional AI—The Promise of a Better AI Decision-Making Future?, 27 SMU Sci. & Tech. L. Rev. 11 (2024)
⚫Coan, Andrew and Surden, Harry, Artificial Intelligence and Constitutional Interpretation (February 12, 2025), 96 University of Colorado Law Review 413 (2025), Arizona Legal Studies Discussion Paper No. 24-30, U of Colorado Law Legal Studies Research Paper No. 24-39. Available at SSRN: https://ssrn.com/abstract=5018779 or http://dx.doi.org/10.2139/ssrn.5018779
⚫Edoardo Celeste, Digital Constitutionalism: The Role of Internet Bills of Rights, Routledge, 2023
⚫Shruthi Bedi (Ed.), AI and Constitutionalism: The Challenges in Law, Oakbridge Publishing, 2025
Next
⚫AI, Law, Justice and Innovation
Session 37
AI, Law, Justice and Innovation
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap AI and Constitution
 In the second session on AI and the Constitution we dwelt further on AI's implications for the Constitution and constitutional democracy.
 We discussed interpreting the constitution through AI systems and the emergence of new ideas like Digital Constitutionalism.
Overview of AI in the Legal Profession
 Generative AI in the Legal Field
 Generative AI includes algorithms like ChatGPT
 Creates new content such as audio, code, images, text,
simulations, and videos
 Comparison to Fictional AI
 Not as advanced as HAL 9000 from 2001: A Space
Odyssey
 Not as human-like as the child in AI: Artificial Intelligence
 Rapid Evolution of AI
 AI continues to evolve at a staggering speed
Predicting AI's Impact on Justice
 Rapid Evolution of Generative AI
 Technology is new and evolving quickly
 Example: Bing AI search engine results changed in three
months
 Current Use in ODR Platforms
 Human facilitators assist in pretrial settlements
 Facilitators available in chat spaces or via email
 Potential of Generative AI
 Could improve real-time facilitation in ODR platforms
Avoiding Litigation & Conflict
 Avoiding Litigation
 Use of court-adjacent online dispute resolution (ODR) for high-volume
civil disputes
 Current ODR platforms require human facilitators for pretrial settlement
 Generative AI could facilitate real-time settlement, saving on human
labor costs
 AI could narrow the settlement space more accurately and quickly than
seasoned mediators
 Avoiding Conflict
 Generative AI could guide parties to avoid litigation or conflict
altogether
 AI could diagnose difficult issues before suits are filed
 Help parties create fair agreements, such as residential leases, to
prevent disputes
 Educate self-represented parties on solving problems without legal
identification
Providing Legal Advice
 Benefits of Accurate Legal Information
 Improves efficiency for lawyers and judges
 Provides better tools for risk assessment
 Delivers more actionable advice
 Challenges for Unrepresented Litigants
 Unlikely to have a lawyer for life-altering litigation
 Common cases include unemployment benefits, eviction
suits, consumer or medical debt, and family disputes
 Potential of Generative AI
 Provides comprehensible information to self-represented
litigants
 Offers strategic advice and counsel
 Suggests which issues to litigate fiercely
Streamlining the Court Experience
 Understanding Self-Represented Parties
 Why they avoid technology like electronic document filing
 Challenges they face such as obtaining childcare and taking
time off work
 Potential of Generative AI
 Identifying reasons for avoiding technology
 Improving court attendance by identifying optimal days and
times
 Improving the Legal System
 Enhancing court-based litigation and court-adjacent efforts
 Assisting those who get lost in the legal shuffle
Democratizing Legal Information
 AI Platforms and Tools for Legal Assistance
 Answer legal questions
 Generate legal documents
 Offer guidance and advice
 Benefits for People Without Legal Access
 Helps those who cannot afford lawyers
 Connects users with licensed professionals
 Promoting Access to Justice
 Delivers law on demand to digital devices
 Digital Divide is reduced
Simplifying the Law
 Complexity of Legal Text and Structure
 Law's text and structure are often unnecessarily complex
 Generative AI could identify and simplify problematic
bottlenecks
 Self-Represented Plaintiffs
 AI could analyze reasons for claim dismissals
 Identify issues with service of process rules
 Macro-Level Pattern Detection
 AI can detect patterns that elude even intelligent lawyers
 Streamlining Procedural Rules
 Improving Legal Arguments
 Enhancing Legal Decisions
 Transparency and Accountability
Improving Decision-making
 Rationalizing Complex Legal Systems
 Generative AI can help determine valid justice indicators
 Uses large datasets to set better standards
 Measuring Success in Drug-treatment Courts
 Generative AI can help define recidivism metrics
 Considers community goals and relevant offenses
 Optimizing Judicial Oversight
 Helps answer questions about judicial oversight and
discretion
 Acceptance of Generative AI Results
 Depends on stakeholder acceptance and dataset availability
 Requires willingness to accept data-based changes
Transparency Concerns
 Importance of Transparency in AI
 Due process relies on notice and opportunity to be heard
 Ability to challenge evidence is crucial
 Challenges with AI Systems
 Many AI systems lack reasoning transparency
 Creators may withhold proprietary algorithmic
information
 Testing Prototypes with Target Populations
 Informs development process
 Helps identify potential biases
 Ensures innovation aligns with user needs
Create 'Human-in-the-Loop' Systems
 Create Human-in-the-Loop Systems
 Human oversight in AI decision-making processes is
essential
 Particularly important in high-volume, high-stakes civil
litigation
 Benefits of Human Oversight
 Timely human intervention can override detrimental AI
decisions
 Ensures AI system decisions are beneficial to users
 Factors Influencing Human Oversight
 Level of risk involved
 Potential implications of delay
Develop Impact Assessments
 Regular Review of AI Models
 Ensure expected outcomes align with observed
outcomes
 Refine models to address unexpected outputs
 Incorporate New Data and Societal Norms
 Reduce bias over time
 Adapt to changing societal expectations
 Flagging Issues Early
 Identify potential problems before they cause harm
 Example: NEDA’s Tessa chatbot
Be as Transparent as Possible
 Importance of Transparency
 Explain how AI models are developed
 Show how AI models work
 Building Trust and Understanding
 Open about algorithmic inputs and calculations
 Educate users about AI capabilities and limitations
 Effective and Responsible Use
 Provide clear guidelines for usage
 Mitigate risks through education
What to Evaluate
 Key Criteria for Evaluating AI Systems
 Focus on user comprehension for inclusivity and transparency
 Measure 'win rates' for self-represented litigants
 Use time to disposition for efficient dispute resolution
 Outcome Variables
 No 'right' choice for outcome variables
 Reflect values and needs of the community deploying the AI
tool
How to Evaluate
 Three Primary Approaches to Evaluation
 Subjective surveys
 Observational data
 Experimental methods
 Subjective Surveys
 Reveal user interaction and satisfaction
 Must be combined with objective data for complete analysis
 Observational Studies
 Rely on large datasets
 Seek evidence consistent with causal inference
 Subject to selection effects and confounding influences
 Randomized Control Trials (RCTs): random assignment of users to tool and no-tool groups (see the sketch below)
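A minimal sketch of the randomized controlled trial logic follows, with all variable names and numbers invented purely for illustration: participants such as self-represented litigants are randomly assigned to use or not to use an AI tool, and a chosen outcome variable (for example a 'win rate') is then compared across the two groups.

# Illustrative RCT-style evaluation of a legal AI tool (all data simulated, not real).
import random
from statistics import mean

random.seed(42)

participants = list(range(200))
random.shuffle(participants)
treatment = set(participants[:100])   # randomly offered the AI tool
control = set(participants[100:])     # proceed without the tool

def observe_outcome(person_id: int) -> int:
    """Stand-in for a real measured outcome, e.g. 1 if the litigant prevailed, 0 otherwise.
    In an actual study this would come from court records rather than a simulation."""
    base = 0.30                                      # hypothetical baseline success rate
    boost = 0.10 if person_id in treatment else 0.0  # hypothetical effect of the tool
    return 1 if random.random() < base + boost else 0

outcomes = {p: observe_outcome(p) for p in participants}
treated_rate = mean(outcomes[p] for p in treatment)
control_rate = mean(outcomes[p] for p in control)

print(f"Win rate with tool:    {treated_rate:.2f}")
print(f"Win rate without tool: {control_rate:.2f}")
print(f"Estimated effect:      {treated_rate - control_rate:+.2f}")
# A real evaluation would also report a statistical test and confidence interval,
# and could use time to disposition or user comprehension as alternative outcome variables.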
When to Evaluate
 Importance of Timing in Evaluation
 Evaluating AI tools before public release is crucial
 Prevents adverse consequences and unintended effects
 Testing Prototypes with Target Populations
 Involves those who often represent themselves in court
 Identifies potential areas of bias
 Ensures innovation aligns with user needs
 Pilot Testing with RCT
 Small, statistically powerful user groups
 Provides preliminary evidence and identifies bugs
 Refine and scale up based on solid evidence
 Iterative Process for Full Deployment
AI in Legal Services
 Potential Benefits of AI in Legal Services
 Increases efficiencies in legal processes
 Democratizes access to legal information
 Helps consumers solve their own legal problems
 Connects consumers with licensed professionals
 Concerns About AI in Legal Services
 Risk of creating two-tiered systems
 Poor might receive inferior AI-driven assistance
 Only expensive law firms might effectively use legal AI
 AI might not disrupt the status quo
 Current Regulation Issues
 Proposed Regulatory Reforms
Framework for Equitable Legal AI
 Framework for Collaboration
 Combines legal and technical expertise
 Design and deploy AI-driven tools and services
 Calibrated for specific consumers, legal issues, and processes
 Addressing Barriers
 Insufficient resources
 Lack of resilience
 Weak relationships
 Advocating for Regulatory Reforms
 Regulatory priorities
 Reforms and mechanisms
 Fostering legal AI access
Technological Innovations in
Legal Problem Solving
 Access to Legal Guides
 Free guides available online from nonprofit organizations
 Automated Legal Services
 Legal services organizations automate intake processes
 Direct clients to relevant resources quickly
 Automatically generate legal documents
 Consumer Assistance
 Help consumers recognize when professional legal help is
needed
 Connect consumers with appropriate legal service
providers
Document Automation & Chatbots
 Significant Investment in Technology
 Over a billion dollars spent annually
 Client Intake via Chatbots
 Automated client intake processes
 Advanced Legal Research Tools
 Processing natural language questions
 Providing individualized results
 Transformation of Document Management
 Efficient e-discovery processes
 Machines complete discovery faster and more accurately
Predictive Coding & Legal Analytics
 Predictive Coding and Legal Analytics
 Machines mine vast data from previous cases
 Helps craft legal arguments
 Challenges of Legal Technologies
 Tools come with challenges
 Future of Legal Problem Solving
 Increasingly data-driven
 Assisted by artificial intelligence
Challenges & Barriers
 Fear of Inequitable Two-Tiered Systems
 Increased reliance on AI-driven legal services
 Potential for superior human lawyers vs. inferior AI
assistance
 Concerns About AI Superiority
 AI may be superior but expensive
 Accessible only to large law firms and wealthy clients
 Status Quo Concerns
 AI may not change the affordability of legal services
 Some may still be unable to afford legal services
Overview of Access to Justice Gap
 Overview of Factors Perpetuating the Justice Gap
 Various contributing factors
 Role of legal technologies, especially AI
 Potential for Widening the Justice Gap
 Scenarios where AI could exacerbate the gap
 Fears of inequitable two-tiered systems
 Calibrating AI Use in Legal Contexts
 Taxonomy of important considerations
 Importance of appropriate AI calibration
 Barriers to Effective AI Calibration
 Lack of resources and resilience
 Policy Priorities and Regulatory Reforms
Barriers to Accessing Legal Services
 High Cost of Legal Services
 Legal services are generally available only to those
with sufficient educational and economic resources
 Excludes low-income and many middle-income individuals
 Social Disparities
 Cost is not the only barrier
 Myriad social disparities keep legal services elusive for
many groups
 Lack of Knowledge and Resources
 Majority of consumers remain silent due to lack of knowledge,
experience, or resources
 Limited English Proficiency
 Language and Cultural Barriers
Global Perspective on Justice Gap
 Justice Gap as a Worldwide Crisis
 Affects not only the United States but globally
 Individuals denied access to justice
 Societal Consequences
 Disengagement from legal systems
 Distrust in law and legal institutions
Need for Revolutionary Change
 Multifaceted Crisis Requires Multifaceted Solutions
 Pro bono work alone is insufficient
 Revolutionary change is necessary
 Recognizing Various Forms of Access to
Justice
 Understanding different approaches
 Implementing diverse strategies
Legal Technology & Self-Help
Services
 Transformation of Legal Services
 Legal technology can significantly change how legal
services are delivered
 Potential to lower prices and increase access to justice
 Role of AI in Legal Technology
 AI is recognized as a necessary driver for the
transformation
 Scholars, commenters, and practitioners widely discuss
this potential
Self-Help Market & Chatbots
 Importance of Access to Legal Information
 Essential for justice
 Not always requiring courts or lawyers
 History of Legal Self-Help
 Dating back to 1965
 First 'how-to' manuals for consumers
 Modern Self-Help Methods
 Guided interviews for users
Exclusion of Undigitized Records
 Service Providers' Caution
 Be wary of using generic tools
 Consider industry-specific needs
 Algorithmic Bias
 Tools designed for privileged may not suit unprivileged
 Exclusion of certain groups in training data
 Mindfulness in Tool Design
 Understand the design purpose
 Consider the target user group
Impact on marginalized communities
 Exclusion of Certain Legal Areas
 AI tools rely on digitized records
 Areas with uncommon digitization may be excluded
 Impact on Marginalized Communities
 Affects poor, marginalized, or rural communities
Calibrating AI for Legal Processes
 Clear-cut cases of unproblematic automation are rare
 Automation in legal expertise often faces challenges
 Democratizing legal expertise through apps
 Mass-market problem solver apps aim to make
legal expertise accessible
 These apps can sometimes conceal issues
requiring human input
Consumer Sophistication & Legal Tech
 Consumer Sophistication
 Understanding the complexity of legal technology
 Evaluating the success of legal tech tools
 Key Question
 Will the individual know if the legal tech has succeeded?
Technology in initial client triage
 Technology as a Gatekeeper
 Determines extent of services available to clients
 Initial 'triage' step involves technology
 Client Service Allocation
 Full service by an attorney for some clients
 Self-help or technologically assisted service for others
Core Values and AI
 Core Values in AI Discussions
 Minimal reference to core values in AI hearings
 AI's technological allure often overshadows ethical concerns
 Challenges in the Criminal Legal System
 Racial injustice and inhumanity are significant issues
 Technology is often seen as a solution to value-based problems
 Information vs. Values
 Belief that more information leads to better policy
 Values remain a barrier despite abundant data
Electronic Monitoring &
Risk Assessments
 AI's Role in Criminal Justice
 Potential for permanent integration into the criminal justice
system
 Mass Supervision
 Five million people under community supervision includes
probation and parole
 AI Opportunities Identified by National Institute of Justice
 Real-time assessment of risk and need
 Mobile service delivery
 Intelligent tracking of individuals
 Biometric Use Case
 AI wearable device to monitor stress and mood
 AI-Enhanced Devices
Bias in AI Systems
 AI's Role in Society
 AI is becoming a permanent fixture in our lives
 It will influence both personal and criminal justice systems
 Judge Herbert B. Dixon Jr.'s Perspective
 Advocates for a balanced view on AI's impact
 Recognizes both potential benefits and drawbacks
Values Framework for AI
 Technology is not neutral
 Smithsonian's AI Values Statement emphasizes this point
https://datascience.si.edu/ai-values-statement
 Framework for engagement
 Three preliminary recommendations offered
 Encourages cross-sector dialogue and reflection
 Prioritize values over technology
 Criminal legal reform field's interest in predictive risk
assessment algorithms
 Lack of discussion on guiding values for evidence-based
information
Electronic Monitoring
 Historical Context of GPS Technology
 Developed in the early 1970s through military and
civilian partnerships
 First used for electronic monitoring in 1983 in
Albuquerque, New Mexico
 Electronic Monitoring in the Criminal Legal System
 Initially seen as an alternative to mass incarceration
 Critics view it as an extension of mass incarceration
 Efforts to mitigate harms are still ongoing
 AI in Criminal Legal Systems
 Implementation includes complex risk assessment algorithms
 Urgent need for a values framework prioritizing transparency,
fairness, and wellbeing
 Challenges for Criminal Legal Reformers
 AI's complexity surpasses previous technologies
AI's Black Box and Glass Box
 Trade-off Between AI Accuracy and Explainability
 AI becomes more optimal as its
complexity increases
 Complexity can elude human understanding
 Glass Box AI Systems
 Fully interpretable by people like judges
and attorneys
 Requires sustained effort for legal reform to understand AI
 Understanding AI Failures
 Practitioners need to understand how AI
tools can fail
 Mitigation of harms through policy and regulation
 Efficiency vs. Unintended Consequences
 Criminal legal systems are under-resourced and overwhelmed
 AI can create efficiencies without large budgetary requirements
AI's Role in Bias Identification
 Analyzing Legal Texts for Bias
 AI can analyze laws, sentencing guidelines, court decisions, and
transcripts
 Identifies linguistic patterns and criteria that may encode racial
biases
 Suggests alternative language to promote equity
 Highlighting Racial Disparities
 AI analyzes datasets to identify harsher sentences for people of
color
 Uncovers specific cases of racially biased sentencing
 Accelerates advocacy for new sentencing guidelines
 Expediting Petitions for Incarcerated Individuals
 AI helps with compassionate release and clemency petitions
 Summarizes large volumes of applicant case files
 Transforming Pretrial Processes
Unintended Consequences of AI
 Welcome Applications of AI in Criminal Justice
 Technology with no direct negative impact on liberty
interests
 Potential to positively influence the criminal legal space
 Lessons from Algorithmic Risk Assessment and Electronic
Monitoring
 Unanticipated and unintended consequences
 Exacerbation of racial and ethnic disparities in jails and
prisons
 Extension of the carceral system beyond physical facilities
Need for a Values Framework
 Proceed with Caution
 Importance of careful advancement in AI
 Avoiding hasty implementation
 Regulatory Schemes
 Need for numerous regulatory frameworks
 Ensuring responsible AI use
 Shared Values Framework
 Defining jurisdictional goals for AI
 Guiding responsible AI development
Recommendations for Engagement
 Polarizing Nature of Technology in Criminal Legal Reform
 Overreliance on technology by many systems
 Concerns from reformers and advocates for the incarcerated
 Technology as More Than a Neutral Tool
 Historical perspective on technology's role
 AI as a unique and complex challenge
 Importance of Human Values in AI Design
 Need for competence in technical arts and sciences
 Reflective understanding of relevant values
 Maximizing AI's Potential for Good
 Effort to harness AI through human values
 Minimizing potential harm
Literature (Selected)
 Christopher L. Griffin, Jr., Cas Laskowski & Samuel A. Thumma, How to Harness AI for Justice: A Preliminary Agenda for Using Generative AI to Improve Access to Justice, Bolch Judicial Institute at Duke Law, 2024
 Drew Simshaw, Access to A.I. Justice: Avoiding an Inequitable Two-Tiered System of Legal Services, 151 Yale Journal of Law & Technology (2022)
 Julian Adler, Jethro Antoine & Laith Al-Saadoon, Minding the Machines: On Values and AI in the Criminal Legal Space, Policy Brief, Center for Justice Innovation, New York, 2024
Next Session
 AI, Law and Justice: Beyond Technosolutionism
Session 38
AI, Law and Justice: Beyond Technosolutionism
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and Law
NALSAR University of Law
Recap
 In the last session we explored how AI tools, including generative AI, can be used to democratize access to justice and information, why they have to be assessed, and how to assess them. We further discussed equitable access to AI in law and justice and how to move towards it.
Legal Technology & Self-Help Services
 Transformation of Legal Services
 Legal technology can significantly change how legal services
are delivered
 Potential to lower prices and increase access to justice
 Role of AI in Legal Technology
 AI is recognized as a necessary driver for the transformation
 Scholars, commenters, and practitioners widely discuss this
potential
Self-Help Market and Chatbots
 Importance of Access to Legal Information
 Essential for justice
 Not always requiring courts or lawyers
 History of Legal Self-Help
 Dating back to 1965
 First 'how-to' manuals for consumers
 Modern Self-Help Methods
 Guided interviews for users
Exclusion of Undigitized Records
 Service Providers' Caution
 Be wary of using generic tools
 Consider industry-specific needs
 Algorithmic Bias
 Tools designed for privileged may not suit unprivileged
 Exclusion of certain groups in training data
 Mindfulness in Tool Design
 Understand the design purpose
 Consider the target user group
Impact on Marginalized Communities
 Exclusion of Certain Legal Areas
 AI tools rely on digitized records
 Areas with uncommon digitization may be
excluded
 Impact on Marginalized Communities
 Affects poor, marginalized, or rural
communities
Calibrating AI for Legal Processes
 Clear-cut cases of unproblematic automation are rare
 Automation in legal expertise often faces challenges
 Democratizing legal expertise through apps
 Mass-market problem solver apps aim to make
legal expertise accessible
 These apps can sometimes conceal issues
requiring human input
Consumer Sophistication & Legal Tech
 Consumer Sophistication
 Understanding the complexity of legal
technology
 Evaluating the success of legal tech tools
 Key Question
 Will the individual know if the legal tech has
succeeded?
Technology in initial client triage
 Technology as a Gatekeeper
 Determines extent of services available to
clients
 Initial 'triage' step involves technology
 Client Service Allocation
 Full service by an attorney for some clients
 Self-help or technologically assisted service
for others
Core Values and AI
 Core Values in AI Discussions
 Minimal reference to core values in AI discourse in law and
justice
 AI's technological allure often overshadows ethical
concerns
 Challenges in the Criminal Legal System
 Racial injustice and inhumanity are significant issues
 Technology is often seen as a solution to value-based
problems
 Information vs. Values
 Belief that more information leads to better policy
 Values remain a barrier despite abundant data
Electronic Monitoring &
Risk Assessments
 AI's Role in Criminal Justice
 Potential for permanent integration into the criminal justice
system
 Mass Supervision
 Five million people under community supervision
 Includes probation and parole
 AI Opportunities Identified by National
Institute of Justice
 Real-time assessment of risk and need
 Mobile service delivery
 Intelligent tracking of individuals
 Biometric Use Case
 AI wearable device to monitor stress and mood
 AI-Enhanced Devices
Values Framework for AI
 Technology is not neutral
 Smithsonian's AI Values Statement emphasizes this point
https://datascience.si.edu/ai-values-statement
 Framework for engagement
 Three preliminary recommendations offered
 Encourages cross-sector dialogue and reflection
 Prioritize values over technology
 Criminal legal reform field's interest in predictive risk
assessment algorithms
 Lack of discussion on guiding values for evidence-based
information
Electronic Monitoring
 Historical Context of GPS Technology
 Developed in the early 1970s through military and civilian
partnerships
 First used for electronic monitoring in 1983 in Albuquerque,
New Mexico
 Electronic Monitoring in the Criminal Legal System
 Initially seen as an alternative to mass incarceration
 Critics view it as an extension of mass incarceration
 Efforts to mitigate harms are still ongoing
 AI in Criminal Legal Systems
 Implementation includes complex risk assessment algorithms
 Urgent need for a values framework prioritizing
transparency, fairness, and wellbeing
 Challenges for Criminal Legal Reformers
 AI's complexity surpasses previous technologies
AI's Role in Bias Identification
 Analyzing Legal Texts for Bias
 AI can analyze laws, sentencing guidelines, court decisions,
and transcripts
 Identifies linguistic patterns and criteria that may encode
racial biases
 Suggests alternative language to promote equity
 Highlighting Racial Disparities
 AI analyzes datasets to identify harsher sentences for people
of color
 Uncovers specific cases of racially biased sentencing
 Accelerates advocacy for new sentencing guidelines
 Expediting Petitions for Incarcerated Individuals
 AI helps with compassionate release and clemency petitions
 Summarizes large volumes of applicant case files
 Transforming Pretrial Processes
Unintended Consequences of AI
 Welcome Applications of AI in Criminal Justice
 Technology with no direct negative impact on liberty
interests
 Potential to positively influence the criminal legal space
 Lessons from Algorithmic Risk Assessment and
Electronic Monitoring
 Unanticipated and unintended consequences
 Exacerbation of racial and ethnic disparities in jails and
prisons
 Extension of the carceral system beyond physical facilities
Need for a Values Framework
 Proceed with Caution
 Importance of careful advancement in AI
 Avoiding hasty implementation
 Regulatory Schemes
 Need for regulatory frameworks
 Ensuring responsible AI use
 Shared Values Framework
 Defining jurisdictional goals for AI
 Guiding responsible AI development
Recommendations for Engagement
 Polarizing Nature of Technology in Criminal Legal Reform
 Overreliance on technology by many systems
 Concerns from reformers and advocates for the incarcerated
 Technology as More Than a Neutral Tool
 Historical perspective on technology's role
 AI as a unique and complex challenge
 Importance of Human Values in AI Design
 Need for competence in technical arts and sciences
 Reflective understanding of relevant values
 Maximizing AI's Potential for Good
 Effort to harness AI through human values
 Minimizing potential harm
Technosolutionism, Technofixes
 A techno-fix is a technological remedy for a social problem that involves only minimal use of political power or economic incentives to change an individual's behavior and does not require changing social norms.
 Solutionism is a way of thinking about social problems. It is characterized by confidence in human reason and our capacity to find and implement numerous strategies for overcoming obstacles and capitalizing on opportunities, typically through courses of action that significantly alter social norms.
 Techno-solutionism is a technology-driven approach to solutionism. It is rooted in Promethean optimism about the revolutionary potential of science and engineering. (Henrik Skaug Sætra and Evan Selinger, 2024)
Messy Real World, Neat
Technical Solutions
 I have introduced two concepts in the last slide. Often, those who argue for more AI tools, more data and better AI integration in law and justice do not explicitly argue that they are for techno-fixes or techno-solutionism.
 But both are reflected in the literature when a rosy picture is highlighted, with AI as a big, beautiful solution to many issues in law and justice. The allure of technology is such that we often forget that the real world is too messy to be solved by magic bullets. Anyone who has dealt with law, courts and justice will understand that this is very true for law and justice.
 But without taking extreme positions it is possible to arrive at a pragmatic perspective on this issue. I summarize that below.
 Technology's potential can be compromised by many non-technical factors, and this should be factored into any assessment.
Messy Real World, Neat
Technical Solutions
 AI systems and tools can make a huge difference, but many of the issues in law and justice are structural, with entrenched actors and their respective positions. So how different actors respond to them makes a difference.
 Institutional values, long-standing practices and cultural practices are important. Technology can impact them and vice versa. For example, AI can be used to challenge hierarchies, bypass entrenched procedures, or strengthen them. Often unintended consequences can add to the complexity and belie or modify our expectations.
 Democratising AI is an ideal and a good objective, but technologies are not 100% malleable and cannot be fully democratized. So other aspects and factors will come into play.
 Unrealistic expectations of revolutions through AGI in law and justice may deter us from addressing what matters most in law and justice, as we may end up wishing for more and better technology-enabled solutions rather than addressing harder real-world issues.
 Can there be a position that goes beyond hype without undervaluing the role of AI?
AI as Normal Technology
 AI as Normal Technology
 Contrasts utopian and dystopian views of AI
 Views AI as a tool under human control
 Description, Prediction, and Prescription
 Current AI is a normal technology
 Future AI will remain a tool, not a super intelligent entity
 We should treat AI as a controllable tool
 Rejects Technological Determinism
 AI is not an agent in determining its future
 Lessons from past technological revolutions
 Emphasizes continuity in societal impact
Part I: The Speed of Progress
 AI Progress and Diffusion
 Impact materializes when improvements are translated
into applications
 Speed limits at each stage of adoption and diffusion
 Gradual vs. Disruptive Progress
 Analyzing highly consequential tasks separately
 Speed of adoption and diffusion of AI
 Definitions
 Invention: Development of new AI methods
 Innovation: Development of products and applications
using AI
 AI Diffusion in Safety-Critical Areas
 AI Adoption Outside Safety-Critical Areas
Part I: The Speed of Progress
 AI Progress and Generality
 AI methods progress as a ladder of generality
 Reduces programmer effort and increases task performance
 Machine learning increases generality by using training
examples
 Challenges in Real-World Applications
 Self-driving cars development is slow due to safety concerns
 Much organizational knowledge is tacit and not
documented
 The Bitter Lesson in AI
 General methods surpass human domain knowledge
 Limits to AI Learning
 Benchmarking and Forecasting Issues
 Gradual Economic Impacts
Part I: The Speed of Progress
 Exponential Increase in AI Research
 Doubling time of AI/ML papers on arXiv is under two years
 Unclear how volume increase translates to progress
 Turnover of Central Ideas
 High degree of herding around popular ideas
 Inadequate exploration of unfashionable ideas
 Example: Sidelining of neural network research
 Current Era of AI Research
 Incremental accrual of ideas
 Other Speed Limits
 AI-Conducted AI Research
 Misleading Benchmarks
Ability, Technology &
Normal Technology
 Human Ability and Technology
 Human ability is not constrained by biology
 Technology increases our ability to control our environment
 Modern humans are 'superintelligent' compared to pre-
technological humans
 AI Capabilities and Power
 Intelligence is not the main concern; power is
 AI capabilities may lead to loss of control
 Focus on preventing AI from acquiring significant power
 Human Limitations and AI
 Human abilities have limitations, notably speed
 AI Control Techniques
 Adapting Ideas from Other Fields
Literature (Selected)
 Arvind Narayanan and Sayash Kapoor (2025), AI as Normal Technology: An Alternative to the Vision of AI as a Potential Superintelligence, Knight First Amendment Institute, Columbia University, April 2025
 Abe Jonathan Selvas (2024), A.I. in Law: Adversary or Ally? Addressing the Possible Implications of A.I. Technology in Law and the Necessity of Regulation, Stanford University
 Henrik Skaug Sætra and Evan Selinger (2024), Technological Remedies for Social Problems: Defining and Demarcating Techno-Fixes and Techno-Solutionism, Science and Engineering Ethics 30:60, https://doi.org/10.1007/s11948-024-00524-x
Next Session
 Law, Technology and Future of AI in Law and Justice
Session 39
Law, AI, Technology and Future
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and
Law
NALSAR University of Law
Recap
 In the last session we discussed the relationship
between AI, Law and Justice in light of earlier
sessions.
 We highlighted that we should not take a technology-deterministic position nor a techno-utopian position.
 Instead we sought to understand the development
in a broader context.
 We highlighted that AI in Law and Justice is much
more than a mere technical tool.
 We emphasised the need for responsible and
ethical AI in law and justice.
 Hence we pointed out that AI should not be the deciding or determining criterion in law and justice.
Initial Thoughts on AI in Legal
Industry
 Future of Legal Industry
 AI will not completely replace human legal
professionals
 Human legal advice will still hold significant value
 Economic Impact
 Society will spend less on human legal services
 Reduction in expenditure by orders of magnitude
 Roles for Attorneys
 Important roles will remain for attorneys
 Human legal advice will be more valuable at the
final frontier
Imagining the Future Without
Limits
 Importance of Imagining the Future
 Essential for inspiring innovation
 Helps in fundamentally retooling our approach to law
 Inevitable Future
 Even if core AI capabilities hit a wall today
 Traditions and Procedures in Law
 Founded in a world with limited human resources
 Evolution of Legal Practice
 Requires willingness to write on a blank slate
 Envisioning without constraints of the past
Technology vs. Legal Systems
 Technology's Rapid Advancement
 Technology evolves at a fast pace
 Slow Response of Legal Bodies
 Common law and legislative bodies lag behind
 Imminent Impact on Legal Practice
 Attorneys underestimate the current impact
 Lag in Governmental Structures
 Governmental structures and laws will take time to catch
up
 Role of Private Ordering
 Market demand and attorney associations will drive
changes
Revolution Timeline
 Revolution in Technology
 Belief that revolution will happen sooner than expected
 Rapid development of models and workflows
 Technological Capabilities
 Reasoning models
 Easy SaaS integrations
 Performance of free models
 Daily Impact
 Technology that people will see and feel daily
 Comparison to Historical Tech Adoption
 Unlike historically slow and resistant tech adoption
 Shaping up to be bigger than electricity
AI Tools for Legal Research
 Revolution in Legal Research
 Instant generation of accurate and insightful
research reports
 Enhanced Legal Theories
 Models producing superior legal theories
compared to humans
 Automated Drafting
 Proficient drafting becoming mostly automated
 Accelerated Legal Processes
 Significant acceleration in legal processes
 Market structure persisting early on
Case Law Databases
 Instant Legal Research and Drafting
 Operate at extraordinary levels of
competency
 Minimal human touch required
 Outperforming Human Lawyers
 Sheer scope of context
 Meaningful Integrations by 2025
 Large language models improve with
web search features
AI vs. Human Lawyers
 Exceptional Research and Legal Performance
 Systems perform like top-tier research
attorneys
 Exhibit skills of lawyers with doctorates in
various fields
 Human Guidance in the Process
 Humans will still oversee and guide the process
 Superior Competency in Research and Writing
 Machines generally outperform humans in
fundamental practices
Improved Work Product
 Enhanced Proofing
 Attorneys report better accuracy in
documents
 Ideation Support
 AI tools assist in generating new ideas
 Conversational Assistance
 AI tools provide feedback on writings
Judicial System Assumptions
 Current Judicial System and Practice Workflows
 Rest on flawed assumptions about human time
and analytical capabilities
 Changes in Litigation and Practice
 Internet and email significantly altered the
pace
 Future Metamorphosis with AI
 AI will bring deeper changes to the judicial
system and practice workflows
AI in Legal Strategies
 Holistic and Sophisticated Advocacy
 Executed by AI-enabled, human superlawyers
 Achieved at an unimaginable level of efficiency
 Comprehensive Consideration
 All practice areas
 All facts
 All real-time developments
 Advanced Legal Strategies
 Crafting legal strategies
 Drafting papers
 Constructing arguments
Synergies with Other Technologies
 AI's Impact on Legal Industry
 AI will create synergies with other
technological advancements
 These synergies may reduce the need for
many legal services
 Potential Reduction in Legal Services
 AI advancements may reduce the elementary legal service needs currently met by courts
Universal Legal Literacy
 Universal Legal Literacy
 Accessible to normal people
 Reliable free legal advice
 Instant Availability
 For most disputes
 Planning and advocacy needs
 Socioeconomic Spectrum
 Available across all socioeconomic levels
Dynamic Legal Instruments
 Evolution of Legal Instruments
 Contracts and insurance policies becoming dynamic and adaptable
 Introduction of 'smart contracts' to counter planning fallacies
 Automated Dispute Resolution
 Standardization of automated dispute resolution for various disputes
Future of Legal Advice
 Reduction in Attorney Hours
 Overall decrease in the quantity of
attorney hours
 Value of Human Advice
 Human advice will become significantly
more valuable
Automated Dispute Resolution
 Automated Dispute Resolution
 Standard for many types of disputes
 AI-Enhanced Courts
 Radical enhancement by AI
 Human judges ratifying AI model
determinations
 Obviation of Legal Pillars
 Information science replacing elaborate legal
structures
Future of Legal Industry
 Negligible Costs
 Costs of using AI will become insignificant over time
 Transformation of Legal Industry
 The legal industry will undergo a
significant transformation
 Lawyers and law firms will still exist
 The industry will become more technology-oriented and more efficient
Human Legal Advice
 Essential Circumstances for Human Legal Advice
 Complex and high-stakes matters
 Issues dealing with personal freedom
 Hybrid Models for Legal Matters
 Endurance in complex and high-stakes cases
 Human-Centric Legal Activities
 Oral advocacy
 Certain ethical judgments
 High-touch activities
Ethical Frameworks and Data
Privacy
 Establishing Ethical Frameworks
 Ensuring AI-driven determinations
are fair
 Preventing bias in AI decisions
 Data Privacy and Security
 Evolving with AI tools
 Emergence of Novel Legal Roles
Market Growth Projections
 Projected Market Value
 Expected to reach $37 billion by 2030
 Growth Rate
 Growing at a CAGR of 35% (see the worked example after this slide)
 Demand for AI-driven Legal Solutions
 Skyrocketing demand in the legal sector
 AI as a Necessity
 Becoming essential for law firms to stay competitive
 Technological Advancements
 Driven by machine learning, natural language
processing, and automation
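A quick worked check of the two figures above: under the standard compound-growth relation, value_end = value_start x (1 + CAGR)^years, a market reaching USD 37 billion in 2030 at a 35% CAGR implies a present size of roughly USD 8 billion. The sketch below assumes a 2025 base year purely for illustration, since the slide does not state one.

# Compound annual growth rate: value_end = value_start * (1 + cagr) ** years
# The 2025 base year is an assumption for illustration; the slide gives only the 2030 figure.
cagr = 0.35
value_2030_billion = 37.0
years = 2030 - 2025  # assumed five-year horizon

implied_2025 = value_2030_billion / (1 + cagr) ** years
print(f"Implied 2025 market size: ${implied_2025:.1f} billion")  # roughly $8.3 billion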
Necessity of AI Adoption
 Efficiency and Cost
 AI adoption is crucial for law firms
 Firms leveraging AI early gain a
competitive edge
 Resisting AI leads to struggles in
keeping up
Predictive Analytics
 Accuracy Rate of Predictive Analytics
 75-90% accuracy in predicting case outcomes
 AI Applications in Law
 Analyzes past rulings, judge behaviors, and case details (see the sketch after this slide)
 Estimates likelihood of success in litigation
 Benefits for Law Firms
 Helps decide whether to settle or proceed to trial
 Provides data-driven insights on legal strategies
 Actionable Steps
 Use AI-powered litigation analytics to assess court risks
 Incorporate predictive insights into case strategy discussions
 Combine AI predictions with human expertise for better
decision-making
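To make "analyzes past rulings, judge behaviors, and case details" concrete, the sketch below shows the general shape of such a predictor: a classifier trained on features of past matters that outputs a probability of success for a new one. The feature names, data, and the choice of scikit-learn's LogisticRegression are illustrative assumptions, not a description of any particular litigation-analytics product.

# Minimal sketch of a case-outcome predictor: a classifier trained on
# features of past cases. All data and feature names are illustrative.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical features per past case:
# [judge_grant_rate, claim_amount_lakhs, precedents_cited, settlement_offered]
X = [
    [0.72, 15.0, 4, 0],
    [0.31,  2.5, 1, 1],
    [0.65,  8.0, 3, 0],
    [0.20, 40.0, 0, 1],
    [0.80,  5.0, 5, 0],
    [0.45, 12.0, 2, 1],
    [0.90,  3.0, 6, 0],
    [0.25, 25.0, 1, 1],
]
y = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = favourable outcome in the past case

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
model = LogisticRegression().fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
# Predicted probability of success for a new matter (illustrative feature values)
print("p(success):", model.predict_proba([[0.55, 10.0, 3, 0]])[0][1])

Combining such a probability with human judgment, rather than treating it as an answer, is the "AI predictions plus human expertise" step in the actionable points above.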
Document Review
 AI Revolutionizing Document Review
 Processes contracts 60-80% faster than human
lawyers
 Highlights key clauses, risks, and compliance issues
 Benefits in Corporate Law
 Impractical to review hundreds of contracts
manually
 Ensures greater accuracy and reduces analysis time
 Actionable Steps
 Use AI-powered contract review software
 Train legal teams to verify AI-reviewed documents
efficiently
 Implement AI for compliance checks in filings
Contract Analysis
 AI-Powered Contract Analysis Tools
 Reduce errors by 30-40% compared to traditional
manual review
 Help firms catch errors and inconsistencies before they
become costly mistakes
 Benefits of AI in Contract Analysis
 Highlight ambiguous language
 Identify missing clauses (see the sketch after this slide)
 Ensure compliance with legal standards
 Reduce the risk of disputes
 Save time on manual reviews
 Actionable Steps
 Use tools like Kira Systems or Evisort to enhance contract
accuracy
 Automate compliance checks to ensure legal standards
are met
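As a much reduced sketch of one part of what tools like Kira Systems or Evisort do, the snippet below checks a contract for missing standard clauses with simple keyword rules. Commercial tools rely on trained models rather than keyword lists; the clause names and trigger phrases here are assumptions for illustration only.

import re

# Illustrative checklist of standard clauses and simple trigger phrases.
REQUIRED_CLAUSES = {
    "governing law": r"governed by the laws of",
    "limitation of liability": r"limitation of liability",
    "termination": r"terminat\w*.*(notice|breach)",
    "confidentiality": r"confidential",
}

def missing_clauses(contract_text: str) -> list[str]:
    """Return names of required clauses not detected in the contract."""
    text = contract_text.lower()
    return [name for name, pattern in REQUIRED_CLAUSES.items()
            if not re.search(pattern, text)]

sample = ("This Agreement may be terminated by either party on 30 days' notice "
          "for material breach. Each party shall keep the other's information confidential.")

print(missing_clauses(sample))  # -> ['governing law', 'limitation of liability']

Anything flagged goes to a human reviewer; the point of the tool is to narrow attention, not to sign off on the contract.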
Document Automation
 AI-driven document automation
 Generates contracts, pleadings, and agreements quickly (see the sketch after this slide)
 Reduces time from hours to minutes
 Benefits for legal professionals
 Allows focus on higher-value tasks like case strategy
 Improves client interactions
 Ensures consistency and reduces errors
 Minimizes mistakes from manual drafting
 Actionable steps
 Implement AI-powered document generation tools
 Train staff to review AI-generated documents efficiently
 Standardize frequently used legal agreements
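A minimal sketch of the template-filling core of document automation is shown below. AI-driven drafting layers a language model on top of this, but the basic flow of structured inputs producing a draft for human review is similar; the template text, field names, and parties are hypothetical.

from string import Template
from datetime import date

# Illustrative skeleton of a frequently used agreement; not a model form.
NDA_TEMPLATE = Template(
    "NON-DISCLOSURE AGREEMENT\n"
    "This Agreement is made on $agreement_date between $disclosing_party\n"
    "('Disclosing Party') and $receiving_party ('Receiving Party').\n"
    "Confidential Information shall be protected for $term_years years."
)

def generate_nda(disclosing_party: str, receiving_party: str, term_years: int) -> str:
    """Fill the template; a human reviewer still checks the draft before use."""
    return NDA_TEMPLATE.substitute(
        agreement_date=date.today().isoformat(),
        disclosing_party=disclosing_party,
        receiving_party=receiving_party,
        term_years=term_years,
    )

print(generate_nda("Acme LLP", "Bharat Tech Pvt Ltd", 3))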
E-Discovery
 Cost Reduction in E-Discovery
 AI reduces costs by 50-70% in large-scale litigation cases
 Streamlines the process by identifying relevant
documents quickly
 Efficiency in Document Review
 AI categorizes information based on case requirements (see the sketch after this slide)
 Reduces time and cost associated with manual review
 Improved Case Preparation
 Ensures key evidence is found faster
 Enhances case preparation
 Actionable Steps
 Use AI-based e-discovery tools to accelerate document
review
 Train legal teams to interpret AI-suggested evidence
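The snippet below sketches the relevance-ranking step at the heart of e-discovery: documents are vectorised, scored against a query describing what the case requires, and the highest-scoring ones are surfaced for human review first. The corpus, the query, and the TF-IDF approach are illustrative assumptions; production tools use more sophisticated models and workflows.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative corpus standing in for a large document collection.
documents = [
    "Email from the CFO approving the vendor payment schedule.",
    "Holiday party invitation sent to all staff.",
    "Memo discussing delayed vendor payments and penalty clauses.",
    "IT ticket about a broken printer on the third floor.",
]
query = "delayed vendor payments and penalties"

# Vectorise documents and query, then rank documents by similarity to the query.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])
scores = cosine_similarity(query_vector, doc_vectors)[0]

for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")  # reviewers start from the top of this list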
AI Chatbots
 Transformation in Client Interaction
 AI chatbots handle routine legal queries (see the sketch after this slide)
 Schedule appointments and provide preliminary
guidance
 Improved Client Experience
 Frees up lawyers to focus on complex cases
 Particularly useful for small firms needing 24/7 support
 Actionable Steps
 Integrate AI chatbots on law firm’s website
 Use chatbots for FAQs and appointment scheduling
 Monitor and refine chatbot performance based on client
feedback
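A minimal sketch of the routing logic behind such a chatbot appears below: routine queries are answered or turned into appointment requests, and anything else is escalated to a human lawyer. A production chatbot would typically sit on top of a language model; the keyword rules, FAQ entries, and messages here are assumptions for illustration.

# Keyword routing stands in here for the intent detection a real chatbot would
# do with a language model; answers and rules are illustrative only.
FAQ_ANSWERS = {
    "fees": "Our consultation fee schedule is available on the website.",
    "hours": "The office is open 9:30 am to 6:00 pm, Monday to Friday.",
}

def handle_message(message: str) -> str:
    text = message.lower()
    if "appointment" in text or "schedule" in text:
        return "I can help schedule a consultation. Which day works for you?"
    for topic, answer in FAQ_ANSWERS.items():
        if topic in text:
            return answer
    # Anything beyond routine queries is escalated to a human lawyer.
    return "I will pass this to a member of our legal team, who will contact you shortly."

print(handle_message("What are your consultation fees?"))
print(handle_message("Can I schedule an appointment for Friday?"))
print(handle_message("My landlord has served an eviction notice. What should I do?"))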
Legal Analytics
 Implementation of AI-Based Legal Analytics
 55% of corporate legal teams have adopted AI analytics
 Helps in-house lawyers assess legal risks
 Monitors compliance and predicts litigation outcomes
 Benefits of AI Analytics
 Analyzes past cases and industry trends
 Enables proactive operations
 Reduces legal costs
 Improves risk management strategies
 Actionable Steps
 Offer AI-driven legal analytics services to clients
 Use AI insights to enhance compliance strategies
Perception of AI
 AI Enhances Legal Work
 Automates repetitive tasks
 Provides valuable insights
 Focus Areas for Lawyers Embracing AI
 Strategic thinking
 Client advocacy
 Complex legal analysis
 Risks of Resisting AI
 Falling behind in efficiency
 Decreased client service
 Actionable Steps for Legal Teams
 Educate team on AI benefits
Growth of Legal Tech Startups
 Rapid Evolution of Legal Tech Landscape
 Hundreds of startups developing solutions
 Addressing industry challenges
 Areas of Innovation
 Contract analysis
 E-discovery
 Litigation prediction
 Compliance management
 400% Growth in Last Five Years
 Significant increase in AI-powered startups
AI as Normal Technology
 AI as Normal Technology
 Contrasts utopian and dystopian views of AI
 Views AI as a tool under human control
 Description, Prediction, and Prescription
 Current AI is a normal technology
 Future AI will remain a tool, not a superintelligent entity
 We should treat AI as a controllable tool
 Rejects Technological Determinism
 AI is not an agent in determining its future
 Lessons from past technological revolutions
 Emphasizes continuity in societal impact
Part I: The Speed of Progress
 AI Progress and Diffusion
 Impact materializes when improvements are translated
into applications
 Speed limits at each stage of adoption and diffusion
 Gradual vs. Disruptive Progress
 Analyzing highly consequential tasks separately
 Speed of adoption and diffusion of AI
 Definitions
 Invention: Development of new AI methods
 Innovation: Development of products and applications
using AI
 AI Diffusion in Safety-Critical Areas
 AI Adoption Outside Safety-Critical Areas
 Limits on Speed of Diffusion
Part I: The Speed of Progress
 AI Progress and Generality
 AI methods progress as a ladder of generality
 Reduces programmer effort and increases task performance
 Machine learning increases generality by using training
examples
 Challenges in Real-World Applications
 Development of self-driving cars is slow due to safety concerns
 Much organizational knowledge is tacit and not
documented
 The Bitter Lesson in AI
 General methods surpass human domain knowledge
 Limits to AI Learning
 Benchmarking and Forecasting Issues
 Gradual Economic Impacts
Part I: The Speed of Progress
 Exponential Increase in AI Research
 Doubling time of AI/ML papers on arXiv is under two years
 Unclear how volume increase translates to progress
 Turnover of Central Ideas
 High degree of herding around popular ideas
 Inadequate exploration of unfashionable ideas
 Example: Sidelining of neural network research
 Current Era of AI Research
 Incremental accrual of ideas
 Other Speed Limits
 AI-Conducted AI Research
 Misleading Benchmarks
Ability, Technology and Normal
Technology
 Human Ability and Technology
 Human ability is not constrained by biology
 Technology increases our ability to control our environment
 Modern humans are 'superintelligent' compared to pre-
technological humans
 AI Capabilities and Power
 Intelligence is not the main concern; power is
 AI capabilities may lead to loss of control
 Focus on preventing AI from acquiring significant power
 Human Limitations and AI
 Human abilities have limitations, notably speed
 AI Control Techniques
 Adapting Ideas from Other Fields
Risks
 Types of AI Risks
 Accidents
 Arms races
 Misuse
 Misalignment
 Non-catastrophic but systemic risks
 Mitigation of Accidents
 Primary responsibility on deployers and developers
 Arms Races
 Misuse of AI
 Catastrophic Misalignment
 Systemic Risks
Part IV: Policy
 Deep Differences in AI Safety Discourse
 Entrenched camps with differing worldviews
 AI safety coalition vs. skeptics of catastrophic risks
 Challenges in Policy Making
 Compromise is unlikely to work
 Interventions like transparency are unconditionally
helpful
 Nonproliferation may increase market concentration
 Open-source AI may risk unleashing superintelligence
 Estimating Probabilities and Cost-Benefit Analysis
 AI safety community relies on probability estimates
 Adopting Value Pluralism and Robustness
 Reducing Uncertainty
Value of Different Kinds of
Evidence
 Value of Different Kinds of Evidence
 Policymakers can help researchers understand useful and
actionable evidence
 Marginal risk framework is useful for analyzing risks of
open-weight and proprietary models
 Evidence Gathering as a Goal
 Actions should aim to generate better evidence or
reduce uncertainty
 Impact on evidence gathering should be considered in AI
policy evaluation
 Open-weight and open-source models can advance
research on AI risks
 Proprietary models might be favored for easier
surveillance of use and deployment
The Case for Resilience
 Four Approaches to Governing Emerging Technologies
 Ex ante: risk analysis and precaution
 Ex post: liability and resilience
 Challenges with Ex Ante Approaches for AI
 Difficulty in ascertaining risks before deployment
 Limitations of Liability Approach
 Uncertainty about causation
 Potential chilling effects on technology development
 Definition and Importance of Resilience
 Capacity to deal with harm and adapt to changes
 Resilience in AI Context
 Proposed Resilience Strategies for AI
Realizing the Benefits of AI
 AI Diffusion Challenges
 Progress is not automatic; many roadblocks exist
 Capacity to diffuse innovations varies between countries
 Example: Electrification of factories as a bottleneck
 Impact of Regulation
 Regulation can stymie beneficial AI adoption
 Risk of prematurely freezing business models and
product categories
 Nuanced Approach to Regulation
 Regulation vs. diffusion is a false tradeoff
 Need for nuance and flexibility in regulation
Final Thoughts on AI Worldviews
 Worldview of AI as Normal Technology
 Contrasts with the view of AI as impending
superintelligence
 Assumptions, vocabulary, and interpretations of evidence
form a tight bundle
 Assumptions about AI and Past Technologies
 AI and past technologies are sufficiently similar
 Well-established patterns like diffusion theory apply to AI
 Differences in Future Predictions
 Rooted in differing interpretations of present evidence
 Disagreement on the rapid adoption of generative AI
 Epistemic Tools and Emphasis
 Deemphasize probability forecasting
 Articulation of Worldview
Literature (Selected)
 Arvind Narayanan and Sayash Kapoor (2025): AI as Normal Technology: An Alternative to the Vision of AI as a Potential Superintelligence, Knight First Amendment Institute, Columbia University, April 2025
 International Bar Association and the Center for AI and Digital Policy (2024): The Future Is Now: Artificial Intelligence and the Legal Profession
Next
 The final session will sum up the course, with some thoughts on AI, Law and Justice
Session 40
Final Session and Summing UP
Dr. Krishna Ravi Srinivas
Adjunct Professor of Law &
Director, Center of Excellence in Artificial Intelligence and
Law
NALSAR University of Law
Recap
 In the last session we discussed how
developments in AI are likely to impact Law
and Justice in future and the increasing role of
Technology in Law and Justice
The Journey so Far
 Introduction
 Introducing AI and key concepts; outlined ML and NLP – 1
 Rule of Law – 2
 Data and AI – 3
Real Life Application
 AI and the Judicial System in India – 4 & 5
 Use of AI in Law in India 6 & 7
 Algorithmic Decision Making 8
Algorithms Matter
 AI and Rule of Law – 9, 10 and 11
 Algorithmic Justice – 12, 13
AI and IP Rights
 AI and Copyright – 14, 15, 16, 17
 AI and Patents/Patenting – 18, 19
AI Ethics
 AI Ethics 20
 AI Ethics and Law and Justice 21
Responsible AI
 Responsible AI 22
 Responsible AI and Law and Justice 23
 Responsible AI and Law and Justice in Indian
Context 24
Explainable AI
 Explainable AI 25
 Explainable AI and Law and Justice 26
AI in Branches of Law
 AI and Labor Law 27
 AI and Health Law 28
 AI and Competition Law 29
Global Overview
 AI and Law and Justice in select Jurisdictions – 30, 31
AI, Judges, Human Rights, Legal
Education
 AI and Judges 32
 AI and Human Rights 33
 AI and Legal Education 34
AI and Constitution
AI and Constitution – 35, 36
AI, Law, Innovation and Future
 AI, Law, Justice and Innovation – 37
 AI and Technosolutionism – 38
 Law, Technology, AI and Future of Law and Justice – 39
Finally
Summing Up 40
Take Away
 AI is playing an increasingly important role
in Law and Justice
 Data and Rule of Law are key themes that
have to be taken into account in AI and Law
and justice in India.
 The scenario in India is encouraging, but there is a long way to go before AI is fully harnessed in Law and Justice
 Translating AI into applications in Law and Justice is more than just developing technical systems
Take Away
 AI’s impacts on the Rule of Law and the use of Algorithmic Decision Making (ADM) raise new challenges and issues. More importantly, the increasing use of ADM has resulted in calls for Algorithmic Justice.
Take Away
 In terms of impacts and implications for copyright and patents/patenting, AI raises some fundamental and conceptual issues. While some of them have been addressed through exceptions and regulations, many remain unresolved. Courts have come up with decisions, but the picture is far from clear.
Take Away
 AI ethics and ethical AI cannot be just add-ons or afterthoughts
 Explainable AI is essential in AI deployment in Law and Justice
 Translating Responsible AI into practice in Law and Justice is a big challenge and has to be faced
 The three (Ethics, Explainable AI and Responsible AI) should be at the core of any thinking or practice in AI in/for Law and Justice
AI and Branches of Law
 AI has implications for different branches of law, and there are many new issues and challenges. Some of them demand novel responses in tune with the regulation of AI, while others need a calibrated and agile approach informed by an understanding of a technology that raises fundamental questions and blurs boundaries.
An Overview
 We had a quick overview of AI in Law and Justice elsewhere in the world
 We pointed out that the application of AI is not uniform. There are many unresolved issues, and stakeholders, including Bar Associations and the Judiciary, are grappling with them. We also noted that AI poses a challenge to the Judiciary and that there is a dilemma in this.
Human Rights, Education and
Constitution
 In these three sessions we touched upon key themes and concerns. We also identified some emerging issues and explained why AI’s impact on the Constitution and constitutional values is important for both AI and the Constitution.
Innovation, Caution and Future
 In sessions 37, 38 and 39 we discussed the issues in innovation in AI for Law and Justice and the need for a balanced perspective, and offered some thoughts on what the future of AI, Law and Justice could be.
Coverage
 We have thus covered many themes and topics, and the coverage is such that in most cases we have only introduced the theme or scratched the surface.
 In a course of 20 hours and forty sessions we have tried to do justice to the themes by highlighting key issues and concerns and also looking at responses.
 There are limitations to this course, but that is inevitable.
 More importantly, if the course has helped those who have taken it to gain a better understanding and has kindled an interest in knowing more, that in itself is a positive and welcome sign.
Thanks
 Thanks for taking this course and for your interest in this topic
 I hope you found it interesting and useful. Your comments and suggestions are welcome.