Liability of Artificial Intelligence
School Of Law
B.A.,LL.B(Hons.)
Semester IX
Project
Submitted by:
Roll Number(s) 18 & 10
In 1997, an IBM supercomputer called Deep Blue beat the then world chess champion Garry Kasparov in an intense game of chess. This was a rematch following Deep Blue's initial defeat in 1996. In what can only be called human nature, Kasparov was perhaps reckless in the last game, where Deep Blue emerged victorious using a seemingly strategic approach. He lost that day, but maybe we didn't.1 Artificial Intelligence ("AI") is not a new concept, especially to readers of science fiction. In recent times, however, it is becoming more science and less fiction. The world of technology is changing rapidly, with computers, and now robots, replacing simple human activities.2
AI, simply put, is the capability of a machine to imitate intelligent behaviour.2
It is an umbrella term that refers to information systems inspired by biological systems, and
encompasses multiple technologies including machine learning, deep learning, computer vision,
natural language processing (“NLP”), machine reasoning, and strong AI.3
In 1950, Alan Turing proposed what has come to be known as the ‘Turing Test’ for calling a machine
“intelligent”: that a machine could be said to “think” if a human could not tell it apart from another
human being in conversation. Roger C. Schank, in a 1987 paper laid down five attributes one would
expect an intelligent entity to have: (1) Communication, (2) Internal knowledge, (3) External
knowledge, (4) Goal-driven behavior, and (5) Creativity.
Young entrepreneurs and big corporations alike have set sail to implement the various applications of AI. The global artificial intelligence market was valued at USD 126.4 billion in 2015 and is forecast to grow at a CAGR of 36.1% from 2016 to 2024, reaching USD 3,061.35 billion in 2024.6 The United States represents the biggest market for AI, while the highest growth potential is expected to be in the Asia-Pacific region.
Artificial Intelligence may be best defined by analyzing the two components of the term, i.e. "artificial" and "intelligence". By consensus, defining "artificial" is the easier task; it is the definition of "intelligence" which has, over the years, proved to be the difficult one. Herein we have delved into the development of the concept of AI to understand its definition and its nexus with
1 Isabelle Boucq, Robots for Business, available at http://www.Atelier-us.com/emergingtechnologies/article/robots-for-business.
2 PR Newswire, Artificial Intelligence Market Forecasts, available at http://www.prnewswire.com/news-releases/artificial-intelligence-market-forecasts-300359550.html.
3 http://dl.acm.org/citation.cfm?id=38300
our understanding of intelligence. It was in the 1940s that Warren McCulloch and Walter Pitts first attempted to understand intelligence in mathematical terms. Although the subject whose intelligence was being mapped was in this case a human, not a machine, their model, even though incapable of encapsulating human intelligence, was a stepping stone for those interested in artificial neural networks in computing, which form the basis of artificial intelligence. Popularly known as the
father of computer science, Alan Turing in his paper “Computing Machinery and Intelligence” argued
that if a machine could pass the Turing test then we would have grounds to say that the computer was
intelligent.4 The Turing test involves a human being (known as the ‘judge’) asking questions via a
computer terminal to two other entities, one of which is a human being and the other of which is a
computer. If the judge regularly failed to correctly distinguish the computer from the human, then the
computer was said to have passed the test. In this paper Turing also considered a number of arguments
for, and objections to, the idea that computers could exhibit intelligence.5 It is from this point forward that human intelligence began to be used as the yardstick to measure and evaluate artificial intelligence.
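The imitation game described above can be sketched as a simple simulation. This is purely an illustrative sketch, not anything proposed by Turing himself: the canned replies, function names and round count are hypothetical choices.

```python
import random

# Illustrative sketch of the imitation game: a judge questions two hidden
# entities over "terminals" A and B and must say which one is the machine.
random.seed(42)

def run_test(judge, human, machine, rounds=1000):
    """Fraction of rounds in which the judge correctly picks out the machine."""
    correct = 0
    for _ in range(rounds):
        entities = [("human", human), ("machine", machine)]
        random.shuffle(entities)                  # hide who sits at A and B
        transcript = {"A": entities[0][1]("How was your day?"),
                      "B": entities[1][1]("How was your day?")}
        guess = judge(transcript)                 # "A" or "B" for the machine
        answer = "A" if entities[0][0] == "machine" else "B"
        correct += guess == answer
    return correct / rounds

# When the replies are indistinguishable, the judge can only guess, so accuracy
# stays near chance (0.5) and the machine "passes" the test.
score = run_test(judge=lambda t: random.choice(["A", "B"]),
                 human=lambda q: "Busy, but pleasant.",
                 machine=lambda q: "Busy, but pleasant.")
print(score)
```

The point of the sketch is Turing's criterion itself: passing is defined not by what the machine is, but by the judge's inability to do better than chance at telling it apart from the human.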
The term "Artificial Intelligence" was used for the first time at the Dartmouth Conference, where John McCarthy defined AI as the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to
the similar task of using computers to understand human intelligence, but AI does not have to confine
itself to methods that are biologically observable.8 According to him there existed no “solid definition
of intelligence that doesn’t depend on relating it to human intelligence” because “we cannot yet
characterize in general what kinds of computational procedures we want to call intelligent.” Further down the road, another definition surfaced, provided by Marvin Minsky in 1968: artificial intelligence is the science of making machines do things that would require intelligence if done by men.
Under the broad ambit of AI, multiple silo technologies have also developed over the years. Below
are a few definitions for the different focus technologies developed over the years and their current
market share:
• Machine Learning (ML) – uses computer algorithms, based on mathematical models using probability, to make assumptions about and predictions from similar data sets.
• Cognitive Computing – builds upon ML, using large data sets with the goal of simulating human thought processes and predictive decisions. Training such systems tends to involve human curation.
4 https://globenewswire.com/news-release/2016/09/27/874854/0/en/Global-Artificial-Intelligence-Market-to-Exhibit-US-3-061-35-Bn-in-2024-Global-Industry-Analysis-Size-Revenue-Growth-Trends-Forecast-2024-TMR.html
5 2017 – The Year Ahead: Artificial Intelligence; the Rise of the Machines; Report by Merrill Lynch – Bank of America, dated 09 December 2016
• Deep Learning – builds on ML, using neural nets to make predictive analyses. The use of neural nets is what currently differentiates Deep Learning from Cognitive Computing. Deep Learning is also helping improve image and speech recognition.
• Predictive application programming interfaces (APIs) – a predictive API uses AI to provide a predictive output (from a standardised set of outputs) when supplied with data sets.
• Natural Language Processing (NLP) – programming computers to understand written and spoken language just as humans do, along with reasoning and context, and finally to produce speech and writing. Many machine learning companies use NLP for training on unstructured data.
• Image Recognition – enabling computers to identify images and objects as humans do, as well as patterns in visually represented data which may not otherwise be apparent.
• Speech Recognition – converting spoken language into data sets that can be processed by NLP.
The global artificial intelligence market size was valued at USD 641.9 million in 2016 on the basis of its direct revenue sources, and at USD 5,970.0 million in 2016 on the basis of enabled revenue and AI-based gross value addition (GVA) prognoses. The market is projected to reach USD 35,870.0 million by 2025 from its direct revenue sources, growing at a CAGR of 57.2% from 2017 to 2025, and is expected to garner around USD 58,975.4 million by 2025 from its enabled revenue arenas. Considerable improvements in the commercial prospects of AI deployment and advancements in dynamic artificial intelligence solutions are driving industry growth.6
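The "machine learning" technology defined above, a mathematical model fitted to observed data that can then predict on similar data sets, can be illustrated with a minimal sketch. The usage and cost figures below are invented purely for illustration.

```python
# Minimal machine-learning sketch: fit a model (ordinary least squares) to
# observed data, then predict on a similar, unseen input.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# "Training data": hours of machine usage vs. observed cost (hypothetical).
hours = [1, 2, 3, 4, 5]
cost = [3, 5, 7, 9, 11]            # exactly cost = 2*hours + 1

a, b = fit_line(hours, cost)
predict = lambda x: a * x + b
print(predict(6))                  # the model generalises to unseen input: 13.0
```

The same fit/predict pattern, with vastly more data and more expressive models, underlies the technologies listed above; a predictive API simply exposes the `predict` step as a service.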
AI might just be the single largest technology revolution of our lifetimes, with the potential to disrupt almost all aspects of human existence. Andrew Ng, co-founder of Coursera and formerly head of Baidu AI Group / Google Brain, compares the transformational impact of AI to that of electricity 100 years ago. With many industries aggressively investing in cognitive and AI solutions, global
investments are forecast to achieve a compound annual growth rate (CAGR) of 50.1% to reach
USD57.6 billion in 2021.7
AI is not a new phenomenon, with much of its theoretical and technological underpinning developed
over the past 70 years by computer scientists such as Alan Turing, Marvin Minsky and John
McCarthy. AI has already existed to some degree in many industries and governments. Now, thanks
to virtually unlimited computing power and the decreasing costs of data storage, we are on the cusp
of the exponential age of AI as organisations learn to unlock the value trapped in vast volumes of
data.
6 https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market, last accessed on May 2, 2018.
7 Worldwide Semi-annual Cognitive Artificial Intelligence Systems Spending Guide from International Data Corp. (IDC), 2017
AI is a constellation of technologies that enable machines to act with higher levels of intelligence and emulate the human capabilities of sensing, comprehending and acting. Thus, computer vision and audio processing allow AI systems to actively perceive the world around them by acquiring and processing images, sound and speech, while natural language processing and inference engines enable AI systems to analyse and understand the information collected. An AI system can also take action through technologies such as expert systems and inference engines, or undertake actions in the physical world. These human
capabilities are augmented by the ability to learn from experience and keep adapting over time. AI
systems are finding ever-wider application to supplement these capabilities across enterprises as they
grow in sophistication.
Irrespective of the type of AI being used, however, every application begins with large amounts of
training data. In the past, this kind of performance was driven by rules-based data analytics programs,
statistical regressions, and early “expert systems.” But the explosion of powerful deep neural
networks now gives AI something a mere program doesn’t have: the ability to do the unexpected.
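The contrast drawn above can be illustrated with a toy sketch: a hand-written rule encodes only what its author foresaw, while a tiny neural network learns the non-linear XOR function from examples alone. The architecture, random seed and learning rate below are arbitrary illustrative choices, not a description of any production system.

```python
import math
import random

# A tiny neural network that learns XOR purely from examples, illustrating
# why learned models can go beyond fixed, rules-based programs.
random.seed(0)
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR truth table

sig = lambda z: 1 / (1 + math.exp(-z))
H = 4                                            # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(H)] for _ in range(2)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sig(x[0] * w1[0][j] + x[1] * w1[1][j] + b1[j]) for j in range(H)]
    return h, sig(sum(h[j] * w2[j] for j in range(H)) + b2)

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in DATA) / len(DATA)

initial = loss()
for _ in range(5000):                            # plain stochastic gradient descent
    for x, y in DATA:
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)      # backpropagated error signal
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= 2.0 * d_out * h[j]
            w1[0][j] -= 2.0 * d_h * x[0]
            w1[1][j] -= 2.0 * d_h * x[1]
            b1[j] -= 2.0 * d_h
        b2 -= 2.0 * d_out
print(initial, loss())                           # training error falls as the net adapts
```

No rule was written for XOR anywhere in this program; the mapping emerges from the training data, which is the sense in which such systems can "do the unexpected".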
AI is categorised in different ways, and it may be useful to understand the various categories, their rationale and their implications.
a) Weak AI vs. Strong AI: Weak AI describes "simulated" thinking. That is, a system which appears
to behave intelligently, but doesn't have any kind of consciousness about what it's doing. For example,
a chatbot might appear to hold a natural conversation, but it has no sense of who it is or why it's
talking to you. Strong AI describes "actual" thinking. That is, behaving intelligently, thinking as a human does, with a conscious, subjective mind. For example, when two humans converse, they most likely know exactly who they are, what they're doing, and why.
b) Narrow AI vs. General AI: Narrow AI describes an AI that is limited to a single task or a set
number of tasks. For example, the capabilities of IBM's Deep Blue, the chess playing computer that
beat world champion Garry Kasparov in 1997, were limited to playing chess. It wouldn't have been
able to win a game of tic-tac-toe - or even know how to play. General AI describes an AI which can
be used to complete a wide range of tasks in a wide range of environments. As such, it's much closer
to human intelligence.
c) Superintelligence: The term "superintelligence" is often used to refer to general and strong AI at
the point at which it surpasses human intelligence, if it ever does.
While big strides have been made in Artificial Narrow Intelligence (algorithms that can process documents, drive vehicles or beat champion chess players), no one has yet claimed the first production or development of General AI. The weight of expert opinion is that we are a long way from the emergence of General AI.
India has ambitions to fire up its artificial intelligence capabilities — but experts say that it's unlikely
to catch up with the U.S. and China, which are fiercely competing to be the world leader in the field.
An Indian government-appointed task force has released a comprehensive plan with
recommendations to boost the AI sector in the country for at least the next five years — from
developing AI technologies and infrastructure, to data usage and research. The task force, appointed
by India's Ministry of Commerce and Industry, proposes that the government work with the private
sector to develop technologies, with a focus on smart cities and the country's power and water
infrastructure.
As a transformative technology, AI has the potential to challenge any number of legal assumptions in the short, medium, and long term. Precisely how law and policy will adapt to advances in AI, and how AI will adapt to values reflected in law and policy, depends on a variety of social, cultural, economic, and other factors, and is likely to vary by jurisdiction.8
I. Legal Personality of AI
Legal personhood is invariably linked to individual autonomy, but it has not been granted exclusively to human beings. The law has extended this status to non-human entities as well, such as corporations, ships, and other artificial legal persons.9 No law currently in force in India recognizes artificially intelligent entities as legal persons, which has prompted the question of whether the need for such recognition has now arisen. Whether legal personhood can be conferred on an artificially intelligent entity boils down to whether the entity can and should be made the subject of legal rights and duties. The essence of legal personhood lies in whether such an entity has the right to own property and the capacity to sue and be sued.10
There are a few arguments against granting AI legal personhood.
The Responsibility Objection: that AIs, by their nature, would not be responsible. This objection focuses on the capability of an AI to fulfill its responsibilities and duties, as well as the consequent liability for breach of trust, and holds that AI entities cannot be trusted to make the judgment calls that humans are faced with in their work. This argument follows from the moral dilemma of empowering AI to make decisions which are moral and subjective in nature.
Perhaps the discomfort with exploring an expansion of legal personhood, or even going beyond the theory of legal personhood which allows corporations to be held liable, stems from the uneasiness that surrounds the relationship between our concept of legal personhood and our concept of humanity. Questions of legal personhood are thus neither easy nor settled, but technological development that brings with it the sentient robot, or the conscious machine, will warrant answers to tougher questions soon.
Corporations are a prime example of an artificial person. The legal fiction created for corporates
8 STANFORD UNIVERSITY, One Hundred Year Study on Artificial Intelligence (AI100), Policy and Legal Considerations, https://ai100.stanford.edu/2016-report/section-iii-prospects-and-recommendations-public-policy/ai-policy-now-and-future/policy
9 MIGLE LAUKYTE, 'Artificial and Autonomous: A Person?' (2012) Social Computing, Social Cognition, Social Networks and Multiagent Systems Social Turn, available at http://events.cs.bham.ac.uk/turing12/proceedings/11.pdf.
10 L. B. Solum, Legal Personhood for Artificial Intelligences, 70 North Carolina Law Review 1231–1287 (1992).
serves as a good precedent for the argument for granting the same to AI. However, there exists an important distinction between corporations and AI. Corporations are fictitiously autonomous: their actions are decided by their stakeholders. AI, however, may be actually autonomous; its users or even creators may not be in control of its actions. The status of AI needs to be examined further, and a simple analogy with corporations will not suffice. On the other hand, AI
cannot be treated on par with natural persons as AI lacks (i) a soul, (ii) intentionality, (iii)
consciousness, (iv) feelings, (v) interests, and (vi) free will.11 However, with Sophia, a social humanoid robot developed by "Hanson Robotics", a Hong Kong based company, launched in April 2015, being granted citizenship by Saudi Arabia in 2017,12 it has become the need of the hour for legal systems across the world to address issues pertaining to the legal standing of AI at the earliest.
The prominence of this need is highlighted by the recent accident caused by an autonomous / self-driving car being tested by Uber, wherein an individual died13 and there was no certainty as to whether Uber Technologies Inc should be held responsible or whether the AI running the autonomous car should be held responsible on its own. In order to find a middle ground, Migle Laukyte ("Laukyte"), in her paper 'Artificial and Autonomous: A Person?',9 suggests the possibility of granting AI a hybrid personhood: a quasi-legal person that would be recognized as having a bundle of rights and duties selected from those currently ascribed to natural and legal persons.
In 1996, Tom Allen and Robin Widdison noted that "soon, our autonomous computers will be programmed to roam the Internet, seeking out new trading partners - whether human or machine".14 A rising concern is that contract law, as it stands, cannot keep up with the rise in technology. While the United Nations Convention on the Use of Electronic Communications in International Contracts recognized contracts formed by the interaction of an automated system and a natural person to be valid and enforceable, a need for more comprehensive legislation on the subject remains. An explanatory note by the UNCITRAL Secretariat on the matter clarifies that messages from such automated systems
11 https://www.cnbc.com/2016/03/16/could-you-fall-in-love-with-this-robot.html, last accessed on March 21, 2018.
12 https://techcrunch.com/2017/10/26/saudi-arabia-robot-citizen-sophia/, last accessed on March 21, 2018.
13 https://www.bloomberg.com/amp/news/articles/2018-03-19/uber-crash-is-nightmare-the-driverless-world-feared-but-expected, last accessed on March 21, 2018.
14 TOM ALLEN, ROBIN WIDDISON, 'Can computers make contracts?' (1996) 9(1) Harvard Journal of Law & Technology.
should be regarded as ‘originating’ from the legal entity on behalf of which the message system or
computer is operated. This circles back to the debate of giving AI entities a legal personality.
The primary objective behind the growth and development in AI and robotics systems
is the demand for automation across a wide variety of industries and sectors. With the ultimate
objective of reducing man hours and increasing efficiency, several prominent companies across the
world have actively subscribed to the practice of utilizing AI systems as a replacement for the human
workforce. This wave of automation, driven by AI is creating a gap between the current employment
related legislation in force and the new laws / employment framework that is required to be brought
into place to deal with the emerging automation via the use of AI and robotics systems in the
workplace. As employers incorporate AI and robotics systems into the workplace, it is pertinent that they simultaneously adapt their compliance systems accordingly. A synergy is therefore required between members of the industry and the regulators to arrive at a reasonable and technologically relevant employment framework to address such issues.
The Constitution of India is the basic legal framework which allocates rights and obligations to
persons or citizens. Unfortunately, Courts are yet to adjudicate upon the legal status of AI machines,
the determination of which would clear up the existing debate of the applicability of existing laws to
AI machines. However, the Ministry of Industry and Commerce, recognizing the relevance of AI to the nation as a whole, and intending to highlight and address the challenges and concerns around AI-based technologies and systems while facilitating their growth and development in India, constituted an 18-member task force in August 2017, titled "Task force on AI for India's Economic Transformation". The task force comprises experts, academics, researchers and industry leaders, along with the active participation of governmental bodies / ministries such as NITI Aayog, the Ministry of Electronics and Information Technology, the Department of Science & Technology, UIDAI and DRDO, and is chaired by V. Kamakoti, a professor at IIT Madras, with a mandate to explore possibilities to leverage AI for development across various fields. The task force has
recently published its report, wherein it has provided detailed recommendations along with next steps,
to the Ministry of Commerce with regard to the formulation of a detailed policy on AI in India.
1. The report has identified ten specific domains that are relevant to India from the perspective of the development of AI-based technologies, including (i) Manufacturing; (ii) Fin-tech; (iii) Health; (iv) Agriculture; (vii) Environment; and (x) Education.
2. The report has identified the following major challenges in deploying AI systems on a large scale in India:
(i) Encouraging data collection, archiving and availability with adequate safeguards,
possibly via data marketplaces / exchanges;
(ii) Ensuring data security, protection, privacy and ethical use via regulatory and technological frameworks;
(iii) Digitization of systems and processes with IOT systems whilst providing adequate
protection from cyber-attacks; and
(iv) Deployment of autonomous products whilst ensuring that the impact on employment and
safety is mitigated.
3. The task force has laid down the following specific recommendations to the Department of Industrial Policy and Promotion ("DIPP") in the report:
a. Set up and fund an "Inter-Ministerial National Artificial Intelligence Mission", for a period of 5 years, with funding of around INR 1200 crores, to act as a nodal agency to co-ordinate all AI-related activities in India. The mission should engage itself in three broad areas, namely,
(i) Core Activities – bring together relevant industry players and academicians to set up a
repository of research for AI related activities and to fund national level studies and campaigns
to identify AI based projects to be undertaken in each of the domains identified in the report and
to spread awareness amongst the society on AI systems;
(ii) Co-ordination – co-ordination amongst the relevant ministries / bodies of the government to
implement national level projects to expand the use of AI systems in India;
(iii) Centers of Excellence – set up interdisciplinary centers of research to facilitate deeper understanding of AI systems, establish a universal and generic mechanism / procedure for testing the performance of AI systems, such as regulatory sandboxes for technology relevant to India, and fund an interdisciplinary data integration center to develop an autonomous AI machine that can work on multiple data streams and provide information to the public across all the domains identified in the report.
b. Data Banks, Exchanges and Ombudsman: Set up digital data banks, marketplaces and exchanges to empower the availability of cross-industry data and information.
The report goes on to clarify that regulations should be enacted in relation to the sharing and security of such data. The Ministry of Electronics and Information Technology ("MeitY") may be the nodal
agency for setting up of such centers, whilst the DIPP can drive the formulation and implementation
of the regulations related to data ownership, sharing and security / privacy.
In addition, the report states that the Ministry of Commerce and Industry should create a data ombudsman, similar to those in the banking and insurance industries, to quickly address data-related issues and grievances.
The report proposes that the Bureau of Indian Standards (“BIS”) should take the lead in ensuring that
India proactively participates in and implements the standards and norms being discussed
internationally with regard to AI systems. The task force has recommended that policies be enacted that foster the development of AI systems, and has stated that two specific policies be enacted at the earliest, namely,
(i) Policy dealing with data, which deals with ownership, sharing rights and usage of data –
The report suggests that MeitY and the DIPP drive the effort to bring about this policy;
and
(ii) Tax incentives for income from AI technologies and applications – The report suggests that MeitY and the Finance Ministry collaborate to drive this policy and fix incentives for socially relevant projects utilizing AI systems / technology.
II. Intellectual Property
With the remarkable extent of creativity and knowledge exhibited by AI clearly visible, concerns pertaining to IP protection ought to arise in the minds of those enforcing the rights associated with intellectual property. A wide variety of intellectual property legislations would impact / affect the functioning of AI in India. These legislations are discussed in detail below.
A. Copyright
In some countries, we can see a conspicuous requirement of creativity when it comes to the ownership of copyright works. Indian copyright law, too, requires that in order for a 'work' to qualify for copyright protection, it must first meet the 'modicum of creativity' standard laid down in "Eastern Book Company and Ors. v. D.B. Modak and Anr".15 In this case, the Court held that a 'minimal degree of creativity' was required, and that 'there must be some substantive variation and not merely a trivial variation'. From a reading of the test laid down in the aforementioned judgment, however, no definitive conclusion may be arrived at that an AI cannot meet the 'modicum of creativity' required. In addition to the above, the second requirement to be satisfied by an AI when it comes to the ownership of copyrighted works is that it fall under the aegis of an 'author' as defined under the Copyright Act, 1957.
This would be problematic as an AI has generally been regarded to not have a legal personality.
Under Section 2 (d) of the Copyright Act, 1957, “(d) “author’ means, - “(vi) in relation to any
literary, dramatic, musical or artistic work which is computer-generated, the person who causes
the work to be created;” The first issue under the above mentioned definition is its usage of the
terms ‘the person who causes the work to be created’.
Determining who ‘causes’ a work to be created is a question of the proximity of a natural or legal
person to the creation of the ‘expression’ in the content in question – the more closely or directly
a person is involved in creating the ‘expression’, the more he or she contributes to it, and the
more likely he or she is to qualify as a person ‘who causes the work to be created’.
As a result of the above, the current legal framework under the Copyright Act, 1957 may not
effectively deal with / prescribe for creation of works where the actual creator or a contributor of
the ‘expression’ is not a human or a legal person. Thus, when it comes to works that are created
15 Appeal (Civil) 6472 of 2004.
by AI, their authorship would be contentious under Indian copyright law. There is no doubt that a human's involvement is required in kick-starting the AI's creative undertaking; however, the process of determining who the author / owner is when the AI steps in to play a pivotal role in the creation of the work continues to remain a grey area.
B. Patents
Section 6 of the Indian Patents Act, 1970 states that an application for a patent for any invention can be made only by the true and first inventor of the invention or the persons assigned by such person.16
Whereas, Section 2 (y) of the Act confines the definition of “true and first inventor” to the extent
of excluding the first importer of an invention into India, or a person to whom an invention is
first communicated outside India, and nothing further.17 These provisions do not expressly
impose the requirement of an inventor to be a natural person. Therefore, from a bare reading of
these provisions, it may be interpreted that an AI may fall under the definition of an inventor as
provided in Section 2(y) of the Indian Patents Act, 1970. However, in practice the “true and first
inventor” is always assumed to be a natural person.
Thus, it will be interesting to track the jurisprudence on this front, especially the stand taken by the patent office when the "true and first inventor" on a patent application form is not a natural person.18 However, AI will certainly play an important role in the evolution of patent law itself.
Sophisticated use of natural language processing has been adopted in generating variants of
existing patent claims so as to enlarge the invention’s scope. The publication of these patent
claims using such technology would help preclude obvious and easily derived ideas from being
patented as they will form the corpus of the prior art that is available in public domain. If the
trend of using such services gains a foothold in the industry, it will substantially increase the
uncertainty associated with the enforceability of a patent as the risk of not discovering prior art
that invalidates the patent would increase. As a result, it may be anticipated that AI will be developed to assist in the discovery of prior art, which would correspondingly increase the demand for AI (from a patent law perspective) in this sector.
C. Industrial Designs
With the progress of artificial intelligence advancements like Watson, Siri, and Alexa, it can be observed that many companies are at present working on different forms of smart intelligent machines that could aid in their overall and inclusive development. In the process of creating industrial designs, where numerous components come together at an effective level to reach the final stage, Computer Aided Design and Drafting (CAD) systems
16 Section 6 of the Indian Patents Act, 1970.
17 Section 2(y) of the Indian Patents Act, 1970.
18 Erica Fraser, "Computers as Inventors – Legal and Policy Implications of Artificial Intelligence on Patent Law", (2016) 13:3 SCRIPTed 305, https://script-ed.org/?p=3195.
have their own limitations, confining themselves to geometric models and representations. On the other hand, the recent headway in generative techniques, where an AI is associated in the process, could provide a more creative and systematic way of arriving at mechanical solutions, thereby undergirding the industrial design process.
Section 2(j)(iii) of the Designs Act, 2000 interestingly defines the "proprietor of a new or original design" as the author of the design, and also any other person to whom the design has devolved from the original proprietor. So, how do we successfully determine rightful authorship if
an artificial entity such as an AI is behind the original design? Also, what are the odds of an AI
acknowledging the authorship of a design? In addition to that, what is the possibility of authorship of
the design being devolved from the AI to a human being, when the AI itself does not have the
elementary cognizance as to what a proprietorship/authorship would mean in its strict legal sense?
These questions remain unanswered but it is hoped that jurisprudence on the same shall soon evolve.
III. Data Protection: Technology is permeating society at an ever-increasing pace. Every day, more and more devices are being connected to the internet, paving the way for the regime of the Internet of Things. It is only a matter of time before advances in AI, combined with the use of smart devices, lead to profiling more intrusive than ever before. Furthermore, with AI systems being increasingly involved in functions such as data analytics, healthcare, education, employment, the internet of things, transportation, etc., AI is able to access a vast repository of Personally Identifiable Information ("PII"). The ability of AI systems such as Siri, Cortana and FBLearner Flow to use such PII to identify the behavioral patterns of individuals, and accordingly put forward targeted advertising tailored to the concerned individual, showcases the extent of the impact that AI systems may have through the use of PII. However, it must be noted that such data, while invaluable for generating the incisive analytics specified above, also raises larger questions pertaining to privacy, and it is therefore important to have a framework in force that adequately addresses such concerns.
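The profiling mechanism described above can be illustrated with a toy sketch: a few pieces of browsing data tied to an individual are aggregated into a "behavioral pattern" used to select advertisements. The data, function names, and rules here are entirely invented for illustration and do not describe any real system.

```python
# Toy illustration of behavioral profiling from PII: count the topics a user
# visits, and treat the most frequent topics as the "interest profile" that
# drives targeted advertising. All data and names are hypothetical.

from collections import Counter

browsing_log = [
    {"user": "u1", "page": "running-shoes"},
    {"user": "u1", "page": "marathon-training"},
    {"user": "u1", "page": "news"},
]

def infer_interests(log, user):
    """Count page topics per user; the most frequent topics become the
    behavioral pattern used to select ads."""
    topics = Counter(entry["page"] for entry in log if entry["user"] == user)
    return [topic for topic, _ in topics.most_common(2)]

print(infer_interests(browsing_log, "u1"))
# ['running-shoes', 'marathon-training']
```

Even this trivial aggregation shows why PII in bulk is sensitive: the inference step, not any single record, is what reveals the individual's habits.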
Such privacy concerns have become more prominent in light of the recent judgment of the Supreme Court in K.S. Puttaswamy & Anr. v. Union of India & Ors.,19 wherein the right to privacy was held to be a fundamental right under the Constitution of India. The Supreme Court also went on to state that there is an immediate need for a comprehensive data protection framework to be enacted, one that is technology neutral and deals with prominent issues such as the growing use of AI in India. We have provided a short primer on the data protection framework currently in force in India, to crystallize the reasons for the spurt of privacy concerns in India and to identify why the Supreme Court required the formulation of a more comprehensive data protection framework. Section 43-A of the IT Act, 2000 mandates the following of 'reasonable security practices and procedures', which are elaborated in the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011 ("SPDI Rules"), enacted on 13 April 2011.

19. Writ Petition (Civil) No. 494 of 2012.
Section 43-A primarily concentrates on compensation for negligence in implementing and maintaining 'reasonable security practices and procedures' in relation to 'sensitive personal data or information' ("SPDI"). The criteria as to what constitutes sensitive personal data or information of a person are provided under Rule 3; information that is freely available or accessible in the public domain, or furnished under the RTI Act, cannot be categorized as such.20 Under the Rules, a body corporate may collect such information only for a lawful purpose and is required to obtain the information provider's prior consent regarding the purpose for which the information is collected. The body corporate is also mandated to take reasonable steps to ensure that the information provider has knowledge of the collection of information, the purpose of collection, the intended recipients, and the name and address of the agency collecting and retaining the information.

The body corporate has to allow the information provider to review or amend the SPDI, and to give the information provider the option to withdraw consent at any time in relation to the information so provided. In case of withdrawal of consent, the body corporate has the option not to provide the goods or services for which the concerned information was sought.
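The consent lifecycle that the SPDI Rules describe (disclosure of purpose and recipients, prior consent, review, and withdrawal at any time) can be sketched as a simple data structure. This is an illustrative model only; every class and field name below is an assumption of ours, not terminology from the Rules.

```python
# Illustrative sketch of the SPDI consent lifecycle described above:
# a body corporate discloses the purpose and recipients, obtains consent,
# and lets the provider withdraw it at any time. Names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    provider: str                 # the individual supplying the information
    purpose: str                  # disclosed purpose of collection
    recipients: list = field(default_factory=list)  # intended recipients
    agency_name: str = ""         # agency collecting/retaining the data
    withdrawn: bool = False

    def withdraw(self) -> None:
        # The information provider may retract consent at any time.
        self.withdrawn = True

    def may_process(self) -> bool:
        # Processing is permissible only while a lawful purpose was
        # disclosed and consent has not been withdrawn.
        return bool(self.purpose) and not self.withdrawn

record = ConsentRecord(provider="user-1", purpose="service delivery",
                       recipients=["analytics vendor"], agency_name="ACME Ltd.")
print(record.may_process())   # True
record.withdraw()
print(record.may_process())   # False
```

The point of the sketch is that withdrawal must flip processing off for the already-collected information, which is precisely the obligation the Rules place on the body corporate.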
However, several questions have recently arisen with regard to the effectiveness of the SPDI Rules, because the compliances they set out are restricted to certain kinds of information, and there is no protection as such for information that does not fall within the definition of SPDI. In addition to being highlighted in the above-mentioned judgment, similar privacy concerns were brought to the forefront in Karmanya Singh Sareen & Anr. v. Union of India & Ors., wherein the manner in which WhatsApp obtained consent for the collection and sharing of sensitive consumer data with Facebook was challenged as being in violation of Articles 19(1) and 21 of the Constitution of India.
In light of the Supreme Court judgment in K.S. Puttaswamy & Anr. v. Union of India & Ors., which enumerated the need to formulate a comprehensive data protection framework, MeitY constituted a committee of experts in July 2017, under the chairmanship of Justice B.N. Srikrishna, to identify key data protection issues in India, to recommend methods of addressing such issues, and to prepare a draft data protection bill that may be introduced in the Indian Parliament.21 The committee has brought out a white paper with its recommendations on the new data protection framework to be implemented in India, wherein it has put forward questions pertaining to the collection and utilization of data by autonomous entities and has requested inputs from individuals and companies in India on whether data protection and security obligations should be imposed on AI and other automated decision-making entities under the new framework. The draft bill to be issued by the committee will have to be examined to definitively determine its impact on AI in India.

20. Rule 3 of the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011.
IV. E-Contracts: The validity of contracts formed through electronic means in India can be derived from Section 10A of the IT Act. Electronic contracts are treated like ordinary paper contracts, provided they satisfy all the essential conditions of a valid contract, such as offer, acceptance, consideration, etc. The IT Act also recognizes "digital signatures" or "electronic signatures" and validates the authentication of electronic records by means of such digital/electronic signatures. The contents of electronic records can also be proved in evidence by the parties in accordance with the provisions of the Indian Evidence Act, 1872. With the advent of smart contracts, i.e. contracts capable of enforcing their terms on their own, an additional debate has arisen with regard to enforceability against an AI, and it remains to be determined how this issue will be resolved. It will not always be possible for such contracts to capture all the relevant information from the real world needed to adequately assess a situation. The contract will enforce its terms on the basis of its programming, which may be inadequate and may cause harm or damage to a party. In such an instance, an aggrieved party may face practical difficulties in enforcing its rights, particularly in a different country.
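The self-enforcement described above can be sketched in a few lines: the "contract" holds funds and releases them purely on the basis of a programmed condition, with no human discretion. This is a minimal illustrative sketch, not a real blockchain contract; all names and conditions are assumptions made for the example.

```python
# Hypothetical sketch of a smart contract as described above: the terms are
# enforced automatically by code, based only on the inputs the code can see.
# If those inputs are wrong or incomplete, the "wrong" outcome is still
# enforced; the contract has no way to assess the real-world situation.

from dataclasses import dataclass

@dataclass
class DeliveryEscrow:
    """Holds the buyer's funds and releases them only when the programmed
    condition (a reported, confirmed delivery) is satisfied."""
    amount: float
    released: bool = False

    def report_delivery(self, confirmed: bool) -> str:
        if confirmed and not self.released:
            self.released = True
            return f"Released {self.amount} to seller"
        return "Funds withheld"

escrow = DeliveryEscrow(amount=500.0)
print(escrow.report_delivery(confirmed=False))  # Funds withheld
print(escrow.report_delivery(confirmed=True))   # Released 500.0 to seller
```

Note that the code acts on `confirmed` alone: whether the delivery actually occurred, or occurred defectively, is invisible to it. This is precisely the gap between programmed enforcement and real-world assessment that the paragraph above identifies.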
In addition, with the growth and development of AI and robotics, the possibility of an AI entering into a contract of its own volition has become more prominent. To assess whether such a contract may be considered valid in India, reference has to be made to the Indian Contract Act, 1872, to determine whether an AI would be regarded as a person competent to contract, and whether the specific essentials of a valid contract, such as offer, acceptance, consideration, etc., are satisfied. As the Indian Contract Act, 1872 envisages that only a "legal person" may be competent to enter into a valid contract, and as the general rule thus far has been that robots or machines cannot qualify as natural or legal persons, a contract entered into by an AI of its own volition may not be regarded as a valid contract under applicable law in India. Practical concerns will also arise, such as a court's ability to understand terms expressed in programming languages with which it may not be acquainted. The courts will also need to assess whether the terms that have been agreed to were properly instructed to the AI. Another major concern with regard to AI is its lack of a conscience: a contract to kill could be enforced by a smart contract in which funds are released to the shooter once he feeds in proof of death via some biotechnology-based contraption. It needs to be ensured that technology standards are developed and put in place that prevent the enforcement of such contracts.

21. http://meity.gov.in/writereaddata/files/white_paper_on_data_protection_in_india_18122017_final_v2.1.pdf, last accessed on March 21, 2018.
V. Duty / Standard of Care: A pertinent issue that arises in the interplay between AI and law is the duty or standard of care expected from an AI, and the implications when such standards are not met and damage or harm is caused as a result. The determination of the duty or standard of care expected from an AI becomes additionally relevant from the perspective of imputing responsibility or liability upon an AI for a supposedly negligent action. Currently, the law treats machines as if they were all created equal, as simple consumer products. In most cases, when an accident occurs, the standards of strict product liability law apply. In other words, unless a consumer uses a product in an outrageous way or grossly ignores safety warnings, the manufacturer (and those associated with the product) are usually considered at fault. However, when computers cause the same injuries, it has to be evaluated whether the standards of strict liability can be applied at all times; this distinction has significant financial consequences and a corresponding impact on the rate of technology adoption. The essentials of negligence are as provided below.
A. Duty to Take Care: One of the essential conditions of liability for negligence is that the defendant owed a legal duty towards the plaintiff and that the defendant committed a breach of that duty, or failed to perform it.
B. Duty Must Be Specifically Towards the Plaintiff: It is not sufficient that the defendant owed a general duty to take care; it must be established that the defendant owed a duty of care towards the plaintiff specifically.
C. Consequent Damage / Harm to the Plaintiff: The last essential requisite for the tort of negligence is that the damage caused to the plaintiff was the result of the breach of the duty. The harm may fall into the following classes:
• harm to reputation;
• harm to property, i.e. land and buildings and rights and interests pertaining thereto, and goods;
Specifically with regard to India, with the advent and growth of AI, more clarity needs to be brought to the law pertaining to 'negligence' and the 'reasonable standard / duty of care'. At present, there is a lack of legal jurisprudence in India on the 'standard / duty of care' applicable to AI systems, as well as on 'product liability' and 'the common law tort of wrongful death'. As questions pertaining to the liability of AI systems for negligent actions have been addressed in most jurisdictions across the world under the aegis of the principle of 'strict product liability', it is expected that any guidance or observations by the courts in India with regard to the attribution of negligence to AI systems may be addressed along the same lines.
However, even if the lacunae in the law regarding AI are addressed along the lines of 'strict product liability', issues would still persist in determining the actual manufacturer or owner of the AI, given the extent of automation involved, and in imputing and enforcing liability against the AI, as discussed below.
VI. Enforcement against / Liability of AI: With rampant development in the field of AI, wherein self-driven cars and almost fully automated machines and robots are starting to enter into use, pertinent legal considerations arise in the form of attributing liability in cases of damage. As discussed above, the assignment of liability is a crucial aspect of granting artificially intelligent entities a legal personality as well. The general rule thus far has been that since robots or machines cannot qualify as natural or legal persons, they cannot be held liable in their own capacity. As one court observed, "robots cannot be sued," even though "they can cause devastating damage." The introduction of highly intelligent, autonomous machines may prompt reconsideration of that rule.

In view of such practice, the question of liability arises in the context of the legal relationship between an AI and its developer. Legal norms provide that damages caused by the unlawful actions of another person must be compensated.
A. Civil Liability
As Paulius Cerka et al. note,22 damage is one of the main conditions of civil liability, which must be proven in order to obtain redress. Arguments are put forth that if an AI were fully autonomous (such as a super-intelligence), it must be aware of its actions; and if it is aware of its actions, it must be liable for them. An AI's autonomy in the eye of the law means that the AI has rights and a corresponding set of duties. In law, rights and duties are attributed to legal persons, both natural (such as humans) and artificial (such as corporations). Therefore, if we seek for AI to be liable for its actions, there is an argument to be made as to whether legal personality should be attributable to it. Even if AI were given independent autonomy, however, the challenge that would remain is the enforcement of rights and obligations against the AI. At this point in time, there are no straitjacket answers, but jurisprudence on the same will certainly evolve with the passage of time. With the recent accidents involving autonomous / self-driving cars being tested or used by Tesla Inc.23 and Uber Technologies Inc.,24 questions pertaining to the imposition of civil liability on AI systems and/or their developers have become more prominent.

22. Paulius Cerka et al., Liability for Damages Caused by Artificial Intelligence, available at http://fulltext.study/download/467680.pdf.
B. Criminal Liability
AIs have become an integral part of modern human life, functioning in a far more sophisticated manner than other daily tools.25 However, the question that now follows is whether they could be a threat to our lives. In his science fiction work 'I, Robot', Isaac Asimov laid down three fundamental laws of robotics: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Later, Asimov added a fourth, or zeroth, law that preceded the others in priority: (0) A robot may not harm humanity, or, by inaction, allow humanity to come to harm. While these laws, laid down in 1942, have become quite mainstream both in science fiction and in robotics, a large section of the field argues that they are now obsolete.26 In 2015, over 1,000 AI and robotics researchers, including Stephen Hawking and Elon Musk, issued a warning of the destruction that AI warfare, or autonomous weaponry, would cause.27 The main question, as Gabriel Hallevy notes, is what kind of laws or ethics are to govern the situation, and who is to decide? He observes that people's fear of AI entities is, in most cases, based on the fact that AI entities are not considered to be subject to the law.28 Importantly, he contrasts this fear with the similar unease that was once felt towards corporations and their power to commit a spectrum of crimes. However, with corporations now being subject to criminal and corporate law, this fear appears to have significantly reduced.

23. https://www.nytimes.com/2016/07/01/business/self-drivingtesla-fatal-crash-investigation.html, last accessed on October 21, 2018.
24. Chris Capps, "Thinking" Supercomputer Now Conscious as a Cat, http://www.unexplainable.net/artman/publish/article_14423.shtml.
25. George Dvorsky, Why Asimov's Three Laws of Robotics Can't Protect Us, available at http://io9.gizmodo.com/why-asimovsthree-laws-of robotics-cant-protect-us-1553665410.
26. Lucas Matney, Hawking, Musk Warn Of 'Virtually Inevitable' AI Arms Race, available at https://techcrunch.com/2015/07/27/artificially-assured-destruction/#.woknrl:EnLr.
Bearing in mind the basic requisites for bringing an entity under criminal law, namely criminal conduct (actus reus) and the internal or mental element (mens rea), Hallevy proposed three models for bringing AI under criminal liability:
a. The Perpetrator-via-Another Liability Model: This model does not consider AI entities to possess any human attributes, and instead recognizes the entity's capabilities as a perpetrator of an offence. However, it limits the entity's capabilities to those of an 'innocent agent', i.e. a mentally limited person such as a child, one who is mentally incompetent, or one who lacks a criminal state of mind. Hallevy notes that in such cases, the person orchestrating the offence is to be seen as the real perpetrator. Therefore, for an AI entity, the perpetrator-via-another would be either the programmer of the AI software or the end user.
b. The Natural-Probable-Consequence Liability Model: This second model assumes deep involvement of the programmers or users in the AI entity's daily activity, but without any intention of committing an offence via the entity. An example would be the entity committing an offence during the execution of its daily tasks. The important distinction in these cases is that there is no criminal intent on the part of the programmer or user. This model assigns liability to the programmer or user in the capacity of a negligent mental state: it assumes that the programmers or users should have known about the probability of the forthcoming commission of the specific offence, and hence holds them criminally liable.
27. Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal Social Control.
28. John C. Coffee, Jr., "No Soul to Damn: No Body to Kick": An Unscandalised Inquiry into the Problem of Corporate Punishment, 79 MICH. L. REV. 386 (1981).
c. The Direct Liability Model: This third model does not assume any dependence of the AI entity on a specific programmer or user, but focuses on the AI entity itself. It states that should the actus reus as well as the mens rea of an offence be fulfilled, the AI entity would be liable as if it were a human or a corporation. The challenge, as Hallevy notes, is the attribution of specific intent, as the external element of a crime would be comparatively easy to prove. Criminal liability of an AI does not replace the liability that might fall, if at all, on the programmers or users; instead, the AI would be held liable along with them. The three models described above are to be considered together, not separately, and determined in the specific context of the AI's involvement.
Punishment Considerations: The biggest issue that the assignment of liability faces is how to penalize the entity for its wrongdoing. A number of questions arise: if the offence under which the entity is convicted prescribes punishment, how would the entity be made to serve such a sentence? How would capital punishment, probation, or even a fine be imposed on an AI entity? When AI entities do not have bank accounts, is it really practical to impose a fine upon them? Similar problems were faced when the criminal liability of corporations was debated, and it is suggested that just as the law adjusted for corporations, it will for AI entities as well. Hallevy suggests that there are certain parallels to be drawn between existing penalties for contravention of the law and what an AI may be subjected to:
a. Capital Punishment: If the offence involves capital punishment, perhaps the deletion of the AI software controlling the AI entity would incapacitate the entity, achieving the same end as capital punishment.

b. Imprisonment: Incarceration is one of the most common sentences, and its purpose is to deprive the prisoner of human liberty and impose severe limitations on freedom of movement. Hallevy notes that the 'liberty' or 'freedom' of an AI entity includes the freedom to act as an AI in its relevant area. He therefore suggests that putting the AI out of use in its field of work for a determinate period could curtail its freedom and liberty in much the same manner.

c. Community Service: Should the sentence be community service, the AI entity could be put to work in an area of choice so as to benefit society.

d. Fines: The imposition of a fine on an AI entity would be wholly dependent on whether the entity possesses its own property or money. In the event that it does not, a fine imposed upon an AI entity could possibly be collected through the provision of labor for the benefit of the community.

While the above are only ideas and propositions, one can always argue that they are in no manner similar to the criminal sanctions imposed on a natural person. Thus, the question remains whether AI can be given an independent autonomous status under which it can be held responsible for its own acts.
CONCLUSION
The penetration of self-driven cars, robots and fully-automated machines, which are currently being used in various economies around the world, is only expected to increase with the passage of time. As a result, the dependency of entities and individuals on AI systems is also expected to increase proportionately. This may be evidenced from the fact that AI is expected to bolster economic growth by an average of 1.7% across various industries by 2035.29 However, in order to safeguard the development and integration of AI systems with the industrial and social sectors, it is important to ensure that the current concerns regarding AI systems are appropriately addressed. The most prevalent issues are (i) the imputation of liability, in other words the issue of holding an AI responsible for its actions; and (ii) the relationship between ethics, the law, and AI and robotics systems.
Whilst addressing the aforementioned, it would be imperative that regulators strike a reasonable balance between the protection of the rights of citizens and the need to encourage technological growth. Failure to do so may either weaken the protection of rights or, on the other hand, adversely impact creativity and innovation. In addition, regulators should also take steps to provide guidance and clarity as to the rights and obligations of programmers and creators of AI systems, in order to crystallize the broad ethical standards by which they are required to abide whilst programming and creating AI and robotics systems. Due to the lack of legal jurisprudence on this subject, it is hoped that in the near future legal and tax principles will be established which will not only foster the development of AI but also ensure that the necessary safeguards are in place.
BIBLIOGRAPHY
1. https://www.linkedin.com/pulse/fixing-liability-failures-intelligent-machines-rajesh-vellakkat
2. https://arxiv.org/pdf/1802.07782.pdf
3. https://www.foxmandal.in/wp-content/uploads/2017/04/Legalroundup_ITLaw_April.pdf
29. https://www.forbes.com/sites/louiscolumbus/2017/06/22/artificial-intelligence-will-enable-38-profit-gains-by-2035/#2f7f-30da1969, last accessed on September 26, 2017.
4. http://www.nishithdesai.com/fileadmin/user_upload/pdfs/Research_Papers/Artificial_Intelligence_and_Robotics.pdf
5. http://www.mondaq.com/india/x/712308/new+technology/Can+Artificial+Intelligence+Be+Given+Legal+Rights+And+Duties
6. https://indianexpress.com/article/technology/opinion-technology/why-we-need-to-have-regulation-and-legislation-on-artificial-intelligence-quick-5151401/
7. https://www.paymentlawadvisor.com/files/2018/08/NBA-2018-AI-and-Machine-Learning-Presentation.pdf
8. https://blog.ipleaders.in/ai-in-legal-profession/
9. https://cis-india.org/internet-governance/blog/ai-in-india-a-policy-agenda
10. https://www.livelaw.in/indias-unregulated-tryst-with-artificial-intelligence-looking-into-future-without-a-law/
11. https://www.cnbc.com/2018/05/11/artificial-intelligence-india-wants-to-fire-up-its-a-i-industry.html
12. http://niiconsulting.com/checkmate/2014/06/it-act-2000-penalties-offences-with-case-studies/
13. https://www.nytimes.com/2016/07/01/business/self-drivingtesla-fatal-crash-investigation.html
14. https://www.techemergence.com/artificial-intelligence-in-india/