Artificial Intelligence Legal Research
I. Personhood
A critical question concerning Artificial Intelligence is whether a legal system should view AI as a legal person. Students in previous years identified that the Supreme Court, in a series of abortion cases, determined that a fetus is not a person with rights under the Constitution until it reaches the stage of viability, that is, the stage at which the fetus may survive outside the womb. http://www.daviddfriedman.com/Academic/Course_Pages/21st_century_issues/21st_century_law/ai_law_alive_04.htm; http://en.wikipedia.org/wiki/Fetus#Viability
This standard, however, presumes that a
person with protectable rights is a natural person. Issues arise where AI develops to the
point where it may be considered conscious. Constitutional jurisprudence evolves
with societal norms and standards.
For example, there was a point in our country's history when statutes banning interracial marriage may have been acceptable. After Loving v. Virginia, however, that is no longer the case. Consider also the abolition of "separate but equal" in Brown v. Board of Education. Prior to that case, racially segregated public facilities were deemed constitutional.
Bert-Jaap Koops confronted the issue of whether computer agents would qualify for a claim to what he deemed posthuman rights and liberties, that is, human rights like privacy, due process, and bodily integrity claimed by and attributed to non-human agents such as non-biological machines, cyborgs, or synthetic biological entities. Koops perceives three objections. First, only natural persons should qualify for constitutional rights of personhood. Second, AIs lack some critical aspect of personhood. Third, since AI is a human creation, it can never be more than human property. Bert-Jaap Koops, Bridging the Accountability Gap: Rights for New Entities in the Information Society?, 11 Minn. J.L. Sci. & Tech. 497 (2010).
Under the natural persons objection, only natural persons, and thus not AI, may qualify for constitutional rights. Importantly, specific constitutional rights already apply to non-human legal persons such as corporations. Indeed, corporations also have a right to freedom of expression. The objection, however, maintains that, in the case of corporations, the corporation simply acts as a place-holder for the rights of natural persons, specifically the executives, the shareholders, and the board of directors. This objection weakens where AI is perceived as achieving consciousness, and therefore as capable of holding moral viewpoints and basing its decisions on them. Id.
Under the "missing something" objection, AI may not qualify for the constitutional rights natural persons enjoy because it is missing some critical characteristic natural persons possess. This argument is based on an intuitive sense that AI is missing a "soul" or "feelings." Id. While this objection seems vague and based more on fear of the unknown, in Future Imperfect, Professor Friedman argues that one of the critical limitations of AI is that it is likely not capable of initiative. That is, AI is merely reacting to its environment based on a set of rules, albeit a complex and evolved set of rules, just as Searle's Chinese box (see below) demonstrates. If this is true, that "missing something" is initiative, and though an AI may appear conscious and thus worthy of constitutional protection, in reality it is merely a complex program reacting to its environment. It follows that such an AI is likely incapable of formulating intent and thus cannot be held criminally liable for its actions.
II. Criminal Law
Applicability of criminal laws to artificial intelligence creates an interesting problem whose answer likely depends on a given society's perception of artificial intelligence. Where the criminal law perceives an AI as only a computer program – a product – the criminal law would likely view the AI as a tool used to perpetrate a crime and, in the courtroom, as evidence. Typically, when a criminal act is committed by an innocent agent, such as when a person causes a child, a mentally incompetent person, or a person otherwise incapable of formulating the requisite intent to commit the act, the person who caused the agent to commit the act is criminally liable as a perpetrator-by-another. The law perceives the intermediary, in this case the AI, as an instrument, and the party orchestrating the offense as the real perpetrator, a principal in the first degree. Where an AI is not perceived as capable of formulating intent, whoever instructs the program to perpetrate an unlawful deed would be held liable. Gabriel Hallevy, "I, Robot - I, Criminal": When Science Fiction Becomes Reality: Legal Liability of AI Robots Committing Criminal Offenses, 22 Syracuse Sci. & Tech. L. Rep. 1.
This set of rules, however, likely does not apply where the AI that perpetrated an unlawful act may be conscious. A person cannot be tried or sentenced while mentally incompetent. California Penal Code § 1367(a); Godinez v. Moran, 509 U.S. 389 (1993). Thus, a threshold question arises: is this AI conscious and thus competent to stand trial?
a. Liability
Generally, to prove an alleged perpetrator committed a criminal act, the State must prove both the actus reus and the mens rea. The actus reus is the factual element, i.e., the criminal conduct; the mens rea is the mental element, i.e., knowledge or general intent in relation to the conduct element. The actus reus requirement is expressed mainly by acts or omissions. Sometimes, other factual elements are required in addition to conduct, such as the specific results of that conduct and the specific circumstances underlying the conduct. The mens rea requirement has various levels of mental elements. The highest level is expressed by knowledge, which is sometimes accompanied by a requirement of intent or specific intention. Lower levels are expressed by negligence (a reasonable person should have known) or by strict liability offenses. Hallevy, supra note 1.
i. Actus Reus
In the case of a natural person allegedly committing a crime, the actus reus is relatively simple to prove: show that the criminal act occurred, and show that the person committed that act. A multitude of means exist to prove the act occurred, and this should not be any different when applied to AI. Indeed, the American legal system permits entities that are not natural persons – corporations – to be held criminally liable for their acts. Thus, just as the State can show that a corporation committed an act, so too can it prove an AI did. Id.
ii. Mens Rea
Proving the mental element of a specific, or even general, intent crime may prove considerably more difficult. Indeed, even where technology exists to create AI at or above human capacity for intelligence, that does not mean that all, or any, AI possess the capacity to formulate intent. Thus, should an AI allegedly commit a criminal act, courts will likely struggle with proof of intent. Before an AI may be held criminally liable, courts will likely require proof that the AI possesses sufficient consciousness to formulate the requisite intent for any crime. Once the State proves the ability to formulate intent, it must prove intent to commit the crime.
1. Turing Test
Courts may apply the Turing test to determine whether an AI possesses consciousness sufficient to formulate intent in its actions. The Turing test is relatively simple: a game of imitation with a human opponent. A questioner, unaware of which competitor is which, questions the human and the potentially conscious AI by typed messages, on any subject whatsoever. After a series of questions, the questioner guesses which is human. Essentially, Turing proposes an indirect means of ascertaining whether an AI is conscious by determining whether it can fool a series of questioners. If the AI can convince the questioner that it is the human at least as often as the human competitor can, it passes Turing's test. Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231 (1992).
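For illustration only, the following is a minimal sketch of how such an imitation game might be scored; the ask_human, ask_ai, and judge_guess functions are hypothetical stand-ins for the human competitor, the AI, and the questioner, not part of any court-adopted procedure.

    import random

    def imitation_game(questions, ask_human, ask_ai, judge_guess, rounds=10):
        """Return the fraction of rounds in which the judge mistakes the AI for the human."""
        fooled = 0
        for _ in range(rounds):
            # Randomly assign the hidden labels "A" and "B" each round.
            respondents = {"A": ask_human, "B": ask_ai}
            if random.random() < 0.5:
                respondents = {"A": ask_ai, "B": ask_human}
            # Each respondent answers every typed question; the judge sees only labels.
            transcript = {label: [fn(q) for q in questions]
                          for label, fn in respondents.items()}
            guess = judge_guess(transcript)   # judge names the label believed to be human
            if respondents[guess] is ask_ai:  # judge picked the AI as the human
                fooled += 1
        return fooled / rounds

Under this sketch, an AI that is misidentified as the human about as often as the human is correctly identified would pass the test in the sense Solum describes.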
Courts applying the Turing test may find it cumbersome, especially in a world in which a large AI population exists and a court is regularly confronted with the issue of whether a given AI is, indeed, conscious and thus able to formulate criminal intent. Thus, efficiency concerns may prompt courts to consider other tests. For example, regulatory agencies regularly maintain lists of conforming products, which the agency has tested and deemed compliant with its particular standards. In a world in which AI is common, a government could require that any company releasing AI to the public first subject the AI to a number of tests, one of which could ascertain whether that particular AI program is capable of formulating intent. Thus, where a State seeks to prosecute a criminal act perpetrated by an AI, it need only look to the agency list to determine whether the particular AI qualifies as capable of formulating criminal intent. If so, the AI itself is charged and prosecuted. If not, the person who instructed the AI to commit the act is.
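A minimal sketch of how such an agency list might be consulted at charging time appears below; the registry entries, model names, and the single "intent-capable" flag are assumptions invented for the illustration.

    # Hypothetical agency registry of AI models certified as capable of
    # formulating intent; entries and field values are invented for this sketch.
    INTENT_CAPABLE_REGISTRY = {
        "ExampleAI v2.1": True,          # tested and certified as intent-capable
        "SimpleExpertSystem v1.0": False,
    }

    def charging_target(ai_model: str, operator: str) -> str:
        """Return who is charged: the AI itself or the person who instructed it."""
        if INTENT_CAPABLE_REGISTRY.get(ai_model, False):
            return ai_model    # AI deemed capable of intent; charge the AI
        return operator        # otherwise charge the person who directed the AI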
a. Searle's Chinese Box Thought Experiment
John Searle offers a clever criticism of the Turing test in the form of "the Chinese box" thought experiment. Imagine that you are locked in a room that periodically receives batches of Chinese writing you must decipher, but you don't know Chinese. Persons outside the room are playing Turing's game. You are given a rule book in which you can look up the Chinese symbols by their shape. Outside the room, the people are convinced that whatever is in the room understands Chinese. But you don't; you are following a set of instructions (a program) based on the shape of the Chinese symbols. Searle believes that this thought experiment demonstrates that neither you nor the instruction book (the program) understands Chinese, even though you and the program can simulate such understanding. Thus, Searle argues that thinking cannot be attributed to a computer on the basis of its running a program that manipulates symbols in a way that simulates human intelligence.
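The purely syntactic rule-following Searle describes can be sketched as a simple lookup; the table below is a made-up stand-in for the rule book, and nothing in it models understanding.

    # Illustrative sketch of Searle's point: the "rule book" is a lookup table
    # matching input symbols to output symbols purely by shape.
    RULE_BOOK = {
        "你好吗": "我很好",
        "你会下棋吗": "会一点",
    }

    def room_reply(symbols: str) -> str:
        # The occupant (or program) matches the shapes it receives and copies out
        # the prescribed response without understanding either string.
        return RULE_BOOK.get(symbols, "请再说一遍")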
A clear shortcoming of the Chinese box hypothetical, however, is that the "program" assisting the person in deciphering Chinese serves only that purpose. That is, the Chinese manual does not help the person, or even a potentially conscious AI, formulate its response to the question; it only serves to translate. Thus, a human or potentially conscious AI must still formulate its response to questions to convince the questioner of its consciousness, regardless of the language in which the question is posed. Considering the breadth of the questions to be posed – any subject whatsoever – the Turing test seems like a legitimate test, at least regarding intelligence.
As discussed above, Professor Friedman argues that one of the critical limitations of AI is that it is likely not capable of initiative. Searle's Chinese box demonstrates this notion. If this is true, an AI is likely incapable of formulating intent and thus cannot be held criminally liable for its actions.
b. Punishment
How AI may be punished is critical to a discussion of criminal liability for AI. Can an AI be punished in the same manner as its natural person counterparts? There are certainly issues where corporations cannot be held liable in the same manner as their natural person counterparts. The death penalty is legal for humans, but only for the most heinous of crimes. Can an AI be put down for a misdemeanor? Courts are permitted to require, as a condition of parole or probation, that people take prescribed medication; does the same rule apply to fixing "bugs" in an AI's program? Can a court order this type of fix?
i. Punishment for Criminally Liable AI
Hallevy argues that most common punishments are applicable to AI robots. The imposition of specific penalties on AI robots does not negate the nature of these penalties in comparison with their imposition on humans. Of course, some general punishment adjustment considerations are necessary in order to apply these penalties, but the nature of these penalties remains the same relative to humans and to AI robots. Hallevy, supra note 1.
Hallevy fails to address a significant issue, namely, that an AI likely does not have a life span. Sending an AI to prison, then, may not serve the same goals as it would for human beings. Indeed, when a lifetime may last forever, a couple of years incapacitated by incarceration does not seem nearly as harsh as when a lifespan is relatively short. A court, however, may legally require that a convict take medically necessary medication to ensure he is not a danger to himself or others. This likely also applies to AI, where the "medication" may be a software fix and the "medical condition" is some defect in its programming causing it to commit criminal acts.
Isaac Asimov set down three fundamental laws of robotics in "I, Robot": (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and (3) a robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws. If a country adopts such a standard as necessary in any AI, is it enough to simply "remind" a culpable robot of these laws?
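For illustration, the three laws might be encoded as an ordered check on any proposed action, as in the sketch below; the attributes on the action object are invented for the example and are not drawn from Asimov or from any existing system.

    # Illustrative only: Asimov's three laws rendered as an ordered veto check.
    def permitted(action) -> bool:
        # First Law: no injury to a human, by act or by inaction.
        if action.harms_human or action.endangers_human_by_inaction:
            return False
        # Second Law: obey human orders unless they conflict with the First Law.
        if action.is_human_order:
            return True
        # Third Law: self-preservation, subordinate to the first two laws.
        return not action.endangers_self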
III. Tort Law
a. Liability
i. Products Liability
Restatement (Second) of Torts § 402A (1965)
(1) One who sells any product in a defective condition unreasonably dangerous to the user or consumer or to his property is subject to liability for physical harm thereby caused to the ultimate user or consumer, or to his property, if (a) the seller is engaged in the business of selling such a product, and (b) it is expected to and does reach the user or consumer without substantial change in the condition in which it is sold.
(2) The rule stated in Subsection (1) applies although (a) the seller has exercised all possible care in the preparation and sale of his product, and (b) the user or consumer has not bought the product from or entered into any contractual relation with the seller.
See Michael D. Scott, Tort Liability for Vendors of
Insecure Software: Has the Time Finally Come?, 67 Md. L. Rev. 425, 457
(2008)
Scott argues that strict tort liability should be extended to insecure software. Courts have been reluctant to do so, largely because there is often no physical damage, only economic damages, such as employee time spent recovering lost data, that can be remedied by contract law. Courts, however, are increasingly recognizing data as property, and where damage occurs, software vendors may be liable, if not strictly, for products that damage consumers through lost data and other issues.
With the emergence of AI in many industries, including finance and other industries where it may make critical decisions regarding processes and transactions, courts may extend strict liability to products not specifically designed for a single customer. Additionally, considering that AI may be present in robots, the damage could be much more than economic. An AI with faulty software could physically harm people or property. It is likely that, if AI advances to this point, courts will recognize manufacturers' responsibility and apply strict liability principles.
As with criminal liability, however, the question of whether an AI is conscious and culpable for its actions is central to tort liability as well. Additionally, considerations of whether an AI could pay any judgment against it must be addressed. If the
ii. Professional Malpractice Liability for AI Creators
Presently, courts are reluctant to extend
professional malpractice standards to software engineers. This is true despite the fact that
software engineers often undergo extensive education and training, and many
companies require certifications. Michael
D. Scott, Tort Liability for Vendors of
Insecure Software: Has the Time Finally Come?, 67 Md. L. Rev. 425, 472
(2008). For example, in Hosp. Computer Sys., Inc. v. Staten Island
Hosp., a federal district court opined that a profession differs from mere
business through:
"the requirement of extensive formal training and learning, admission to practice by a qualifying licensure, a code of ethics imposing standards qualitatively and extensively beyond those that prevail or are tolerated in the marketplace, a system for discipline of its members for violation of the code of ethics, a duty to subordinate financial reward to social responsibility, and, notably, an obligation on its members, even in non-professional matters, to conduct themselves as members of a learned, disciplined, and honorable occupation."
Hosp. Computer Sys., Inc. v. Staten Island Hosp., 788 F. Supp. 1351, 1361 (D.N.J. 1992)
While a software engineer designing today's programs may not practice a "profession" for purposes of malpractice, the law may evolve in the future to accommodate advanced AI. In the future, software engineers may act as creators of consciousness and free will in the programs they construct. They will set the initial parameters of that consciousness, presumably including the basic morals and rules the AI will apply to its experiences. It is likely the law will require that those creating such consciousness do so responsibly, that training will be extensive, and that licensing boards will be assembled to apply a code of ethics to professionals under their supervision, with disciplinary consequences for violations. Should these professional creators violate these professional duties, presumably the law will permit professional malpractice suits by those damaged by their negligence or malice.
IV. Trusts and Other Financial Instruments
a. Trusts
In his article, Legal Personhood for Artificial Intelligences, Lawrence Solum explores the possibility of AI acting as trustee for a trust. The program would likely be similar to the one Kurzweil described, which determines buy and sell decisions for investment funds. Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231, 1240 (1992). An AI may serve as a more reliable trustee since a programmer could easily instruct it not to embezzle or steal funds (alternatively, a greedy programmer may install a program similar to that in the movie Office Space, http://en.wikipedia.org/wiki/Office_Space, to steal undetected from the income of a trust by taking fractions of a penny from each transaction). Solum argues that the inclusion of corporations and governments as trustees establishes that a trustee need not be a natural person, but he points out two critical objections to AI acting as a trustee.
First, the responsibility objection centers on the idea that an AI could not be held responsible if it breached one of its duties as trustee. As noted above, a software designer may be held liable for a defective product, but Solum explores how an AI itself may be held liable for breach of its duty as trustee. A possible solution is to require that the AI maintain insurance against such a breach. An insured AI, Solum argues, would serve the ends of liability: compensating the victim.
Second, the judgment objection doubts an AI trustee's competency to make complex judgments. The thrust of the judgment objection is that an expert system consists only of a complex system of rules, which leaves no room for the system to make judgments in the sense of exercising discretion. The objection is played out in three versions. First, it is argued that an AI cannot cope with a change of legally relevant circumstances; second, it is argued that an AI cannot make the moral choices it may encounter; and third, it is argued that an AI cannot make some of the legal choices it will face. In all three versions, the problem is that, even in the case of parallel distributed algorithms, an expert system cannot do anything but follow rules. As to the first argument, expert systems seem to lack the kind of common sense needed to solve unexpected problems. As to the second argument, expert systems seem to lack the sense of fairness that is warranted when unexpected circumstances require overruling the letter of a rule in order to serve its purpose. As to the third argument, expert systems seem to lack the ability to take the necessary action if called to account in a court of law.
Solum concludes that AIs presently do not have the capacity to perform the duties of a trustee, especially in the case of unexpected circumstances affecting the trust. He raises the question whether a more limited form of legal personhood could be designed, allowing an AI to serve as a limited purpose trustee and/or for simple trusts whose operation can be fully automatic. In that case, the terms of the trust will need to specify a human take-over whenever unanticipated circumstances rule out automatic behavior. We note that Solum seems to restrict himself here to automatic devices. Where autonomic computing is concerned, it seems that responsiveness to changed circumstances is part of its definition: even if the system cannot but follow rules, it is supposed to be capable of adjusting the rules that determine its performance. The first objection may thus fail in the case of autonomic devices. As to the third objection, this also applies to corporations and funds to which legal personhood has been attributed. This leaves the second objection as the only real objection with regard to autonomic computer agents. Bert-Jaap Koops, Mireille Hildebrandt & David-Olivier Jaquet-Chiffelle, Bridging the Accountability Gap: Rights for New Entities in the Information Society?, 11 Minn. J.L. Sci. & Tech. 497, 500 (2010).
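A minimal sketch of the kind of purely rule-following, limited-purpose trustee Solum and Koops describe appears below; the rule set and event fields are assumptions for illustration, and anything the rules do not anticipate is referred to a human, as the human take-over clause would require.

    # Illustrative only: a limited-purpose, rule-following AI trustee.
    def administer(event: dict) -> str:
        if event.get("type") == "income_received":
            return "distribute income to the beneficiary per the trust terms"
        if event.get("type") == "beneficiary_reaches_distribution_age":
            return "distribute principal per the trust terms"
        # Unanticipated circumstance: the trust instrument specifies human take-over.
        return "refer the matter to the designated human trustee"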
V. Warfare
Where a nation has a significant population of robots operating on AI, the implications of warfare targeted at that population are interesting. Consider a scenario where a nation attacks another's AI population, infecting it with a virus instructing the AI population to turn against its nation and to disregard Asimov's basic laws. If the virus is effective, the country releasing it essentially has a standing AI army ready to blindly follow orders. The AI population is used as a weapon against its host population.
Where the AI robots are considered conscious, is this biological warfare? Do international conventions exist that address this type of warfare? Where the robot population is not considered conscious, but rather is property, is this covered under hacking statutes?
a. AI as Property/Infrastructure
In 2009, the US government acknowledged that the computer network controlling its national power grid had been hacked. Allegedly, both Russian and Chinese agents hacked the system in an attempt to locate weaknesses in it. The intruders refrained from damaging the system but left behind hidden software capable of bringing the system down. http://digitaljournal.com/article/303531#ixzz1s2R7rHDb
This type of hacking is a serious concern for any computer network, and it is particularly disconcerting for a country with a large AI population presumably vulnerable to viral and hacking attacks. A viral or hacking attack on a country with a sizeable AI population could be devastating. Suppose another country released a contagious virus on a country's AI population, causing the AI to turn against the human residents of the country. Presently, there are no international conventions preventing hacking; however, in 2010, the Munich Conference on Security Policy addressed the issue. http://www.securityconference.de/TOP-NEWS.425+M581b9350a37.0.html?&L=1&no_cache=1&sword_list[0]=cyberwar The conference recognized the growing concern of hacking in international conflict and acknowledged the importance of digital infrastructure to a country's continued well-being. In particular, one commentator noted that any successful hacking policy would require the harmonization of hacking laws in the international community and strong interaction between the private and public sectors. Id.
In the United States, the federal computer fraud and abuse statute, 18 U.S.C. § 1030, outlaws conduct that victimizes computer systems. It is a cyber security law. It protects federal computers, bank computers, and computers connected to the Internet. It shields them from trespassing, threats, damage, espionage, and from being corruptly used as instruments of fraud. http://www.fas.org/sgp/crs/misc/97-1025.pdf Hacking laws are relevant not only in international law but in the criminal law as well. Where a person installs an unwanted program on an AI and uses that AI to commit an unlawful act, the person could be held liable, as discussed above.
b. AI as Beings
i. Biological warfare?
If a country releases a viral attack on another country's AI population, whether it be in robot form or otherwise, this may be considered biological warfare, especially where AI is perceived as conscious and retains the rights of a natural person. The Biological Weapons Convention (BWC) effectively prohibits the development, production, acquisition, transfer, stockpiling, and use of biological and toxin weapons and is a key element in the international community's efforts to address the proliferation of weapons of mass destruction. The BWC opened for signature in 1972 and came into force in 1975. Presently, the BWC has 165 States Parties and 12 signatories. There are 19 states which have neither signed nor ratified the Convention. http://www.unog.ch/80256EE600585943/%28httpPages%29/04FBBDD6315AC720C1257180004B1B2F?OpenDocument ; http://www.unog.ch/80256EDD006B8954/%28httpAssets%29/699B3CA8C061D490C1257188003B9FEE/$file/BWC-Background_Inf.pdf Thus, most countries have agreed that biological warfare will not be tolerated. Could this same convention be applied to an AI population?
It is likely that if a country, or, more importantly, a consensus of world states, perceives AI as worthy of protections similar to those of natural persons, then the BWC may be extended to AI, or a similar convention may be drafted specifically for AI.