19

  Dangerous Company

The specialness of humanity is found only between our ears; if you go looking for it anywhere else, you’ll be disappointed.

Lee Silver1

What I am and where in my body I am located is a very old puzzle. An early attempt to answer it by experiment is described in the Jomsviking saga,2 written in the thirteenth century. After a battle, captured warriors are being executed. One of them suggests that the occasion provides the perfect opportunity to settle an ongoing argument about the location of consciousness. He will hold a small knife point down while the executioner cuts off his head with a sharp sword; as soon as his head is off, he will try to turn the knife point up. It takes a few seconds for a man to die, so if his consciousness is in his body he will succeed; if it is in his head, no longer attached to his body, he will fail. The experiment goes as proposed; the knife falls point down.

We still do not know with much confidence what consciousness is, but we know more about the subject than the Jomsvikings did. It seems clear that it is closely connected to the brain. A programmed computer acts more like a human mind than anything else whose workings we understand. And we know enough about the mechanism of the brain to plausibly interpret it as an organic computer. That suggests an obvious and interesting conjecture: What I am is a program or cluster of programs, software, running on the hardware of my brain. Current estimates suggest that the human brain has much greater processing power than any existing computer, so it is not surprising that computers can do only a very imperfect job of emulating human thought.

This conjecture raises an interesting and frightening possibility. Computers have, for the past thirty years or so, been doubling their power every year or two – a pattern known, in several different formulations, as Moore’s Law. If that rate of growth continues, at some point in the not very distant future – Raymond Kurzweil’s estimate is about thirty years – we should be able to build computers that are as smart as we are.

Building the computer is only part of the problem; we still have to program it. A computer without software is only an expensive paperweight. In order to get human-level intelligence in a computer, we have to find some way of producing the software equivalent of us.

The obvious way is to figure out how we think – more generally, how thought works – and write the program accordingly. Early work in AI followed that strategy, attempting to write software that could do very simple tasks of the sort our minds do, such as recognizing objects. It turned out to be a surprisingly difficult problem, giving AI a reputation as a field that promised a great deal more than it performed.

It is tempting to argue that the problem is not only difficult but impossible, that a mind of a given level of complexity – exactly how one would define that is not clear – can only understand simpler things than itself, hence cannot understand how it itself works. But even if that is true, it does not follow that we cannot build machines at least as smart as we are; one does not have to understand things to build them. We ourselves are, for those of us who accept evolution rather than divine creation as the best explanation of our existence, a striking counterexample. Evolution has no mind. Yet it has constructed minds – including ours.

This suggests a strategy for creating smarter software that has come into increasing use in recent years. Set up a virtual analog of evolution: a system in which software is subject to some sort of random variation, tested against a criterion of success, and selected according to how well it meets that criterion. Repeat the process a large number of times, using the output of one stage as the input for the next. A version of that approach produced at least some of the software used to recognize faces, a computer capability discussed in an earlier chapter. Perhaps, if we had powerful enough computers and some simple way of judging the intelligence of a program, we could evolve programs with human-level intelligence.
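For readers who want the loop made concrete, here is a minimal sketch in Python. It assumes a deliberately toy setup: a candidate "program" is just a string of bits, and fitness is measured against an arbitrary target rather than anything like intelligence. The names are illustrative, not any particular library's API.

```python
import random

# Toy illustration of the variation-selection loop described above.
# A "program" here is just a bit string; fitness() stands in for
# whatever criterion of success one actually cares about.
TARGET = [1] * 32          # arbitrary goal: all ones

def fitness(genome):
    # Score a candidate by how well it meets the criterion.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.02):
    # Random variation: flip each bit with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=100, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better half, judged by the criterion...
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # ...then refill the population with mutated copies of the survivors,
        # using the output of one stage as the input for the next.
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "out of", len(TARGET))
```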

A second alternative is reverse engineering. We have, after all, lots of examples of human-level intelligence readily available. If we could figure out in enough detail how the brain functions – even if we did not fully understand why functioning that way resulted in an intelligent, self-aware entity – we could emulate it in silicon, build a machine analog of a generic human brain. Our brains must be to a significant degree self-programming, since the only information they start with is contained in the DNA of a single fertilized cell, so with enough trial and error we might get our emulated brain to wake up and learn to think. Perhaps we should set one team working on the problem of digital coffee.

A third alternative is to reverse engineer not a generic brain but a particular brain. Suppose one could build sufficiently good sensors to construct a precise picture of both the structure and the state of a specific human brain at a particular instant – not only what neuron connects to what and how but what state every neuron is in. You then precisely emulate that structure in that state in hardware. If all I am is software running on the hardware of my brain and you can fully emulate that software and its current state on different hardware, you ought to have an artificial intelligence that, at least until it evaluates data coming in after its creation, thinks it is me. This idea, commonly described as “uploading” a human being, raises a large number of questions, practical, legal, philosophical, and moral. They become especially interesting if we assume that our sensors can observe my brain without damaging it – leaving, after the upload, two David Friedmans, one running in carbon and one in silicon.
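As a purely illustrative sketch of "structure plus state," and not a claim about real neuroscience: the toy network below records which units connect to which, with what weights, and what state each unit is currently in. Copying that description onto "different hardware" (here, just another Python object) yields a second system that behaves identically from that instant on.

```python
import copy

# Toy model of structure plus state: connections with weights, and the
# current state of every unit. An illustration of the idea only.
brain = {
    "connections": {"a": [("b", 0.5)], "b": [("c", -1.0)], "c": []},
    "state": {"a": 0.9, "b": 0.1, "c": 0.0},
}

def step(net):
    # One update: each unit takes the weighted sum of its inputs,
    # or keeps its old state if nothing feeds into it.
    new_state = {}
    for unit in net["state"]:
        inputs = [
            net["state"][src] * w
            for src, targets in net["connections"].items()
            for dst, w in targets
            if dst == unit
        ]
        new_state[unit] = sum(inputs) if inputs else net["state"][unit]
    net["state"] = new_state

# "Upload": copy the full structure and state, then run both forward.
upload = copy.deepcopy(brain)
step(brain)
step(upload)
print(brain["state"] == upload["state"])   # True: same structure, same state, same behavior
```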

  A NEW WORLD

  Toto – I’ve a feeling we’re not in Kansas anymore.

  Dorothy, The Wizard of Oz

A future with human-level artificial intelligence, however produced, raises problems for existing legal, political, and social arrangements. Does a computer have legal rights? Can it vote? Is killing it murder? Are you obliged to keep promises to it? Is it a person?3

Suppose we eventually reach what seems the obvious conclusion – that a person is defined by something more fundamental than human DNA, or any DNA at all, and that some computers qualify. We now have new problems: These people are different in some very fundamental ways from all the people we have known so far.

A human being is intricately and inextricably linked to a particular body. A computer program can run on any suitable hardware. Humans can sleep, but if you turn them off completely they die. You can save a computer program’s current state to your hard disk, turn off the computer, turn it back on tomorrow, and bring the program back up. When you switched it off, was that murder? Does it depend on whether or not you planned to switch it on again?
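A minimal sketch of what that looks like for an ordinary program, using Python's standard pickle module; the state dictionary here is a stand-in for whatever a real program would need in order to resume.

```python
import pickle

# Toy "program state": whatever the running process would need to resume.
state = {"memory": [1, 2, 3], "counter": 42, "awake": True}

# "Turn it off": write the current state to disk and let the process end.
with open("saved_state.pkl", "wb") as f:
    pickle.dump(state, f)

# "Turn it back on tomorrow": read the state back and pick up where it left off.
with open("saved_state.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored == state)   # True: nothing in the state records that it was ever off
```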

Humans say they reproduce themselves, but it isn’t true. My wife and I jointly produced children – she did the hard part – but neither of them was a precise copy of either of us. Even with a clone, only the DNA would be identical; the experiences, thoughts, beliefs, memories, personality would be its own.

A computer program, on the other hand, can be copied to multiple machines; you can even run multiple instances of the same program on one machine. When a program that happens to be a person is copied, which copy owns that person’s property? Which is responsible for debts? Which gets punished for crimes committed before the copying?
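The point is easy to see for an ordinary program. In the hypothetical Python sketch below, the copy is indistinguishable from the original at the instant it is made, yet the two are separate instances whose histories immediately begin to diverge, which is exactly what makes these questions hard.

```python
import copy

# Toy "person-program": some state accumulated over its run so far.
original = {"name": "instance", "memories": ["built in a lab", "signed a contract"]}

# Copying it is trivial; nothing in either copy marks one as "the real one".
duplicate = copy.deepcopy(original)

print(original == duplicate)    # True:  indistinguishable at the moment of copying
print(original is duplicate)    # False: yet they are two separate instances

# From here on their histories diverge.
duplicate["memories"].append("woke up on new hardware")
print(original == duplicate)    # False
```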

We have strong legal and moral rules against owning other people’s bodies, at least while they are alive. But an AI program runs on hardware somebody built, hardware that could be used to run other sorts of software instead. When someone produces the first human-level AI on cutting-edge hardware costing many millions of dollars, does the program get ownership of the computer it is running on? Does it have a legal right to its requirements for life, most obviously power? Or do its creators, assuming they still have physical control over the hardware, get to save it to disk, shut it down, and start working on the Mark II version?

Suppose I make a deal with a human-level AI. I will provide a suitable computer onto which it will transfer a copy of itself. In exchange it agrees that for the next year it will spend half its time – twelve hours a day – working for me for free. Is the copy bound by that agreement? “Yes” means slavery. “No” is a good reason why nobody will provide hardware for the second copy. Not, at least, without retaining the right to turn it off.

  DROPPING THE OTHER SHOE

I have been discussing puzzles associated with the problem of adapting our institutions to human-level artificial intelligence. It is not a problem that is likely to last very long.

Earlier I quoted Kurzweil’s estimate of about thirty years to human-level AI. Suppose he is correct. Further suppose that Moore’s law continues to hold – computers continue to get twice as powerful every year or two. Forty years from now, ten years after they reach our level, that makes them something like 100 times as smart as we are. We are now chimpanzees – perhaps gerbils – and had better hope that our new masters like pets.
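The back-of-the-envelope behind that number, on the stated assumptions (parity in about thirty years, then ten further years of doubling every one to two years), runs roughly as follows:

```latex
% Ten further years of doubling, at one doubling every one to two years:
2^{10/2} \approx 32 \quad\text{to}\quad 2^{10} = 1024
% i.e. somewhere between roughly thirty and a thousand times our capacity;
% "something like 100 times" is a round number within that range.
```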

Kurzweil’s solution is for us to become computers too, at least in part. The technological developments leading to advanced AI are likely to be associated with much greater understanding of how our own brains work. That ought to make it possible to construct much better brain-to-machine interfaces, letting us move a substantial part of our thinking to silicon. Consider 89,352 times 40,327 and the answer is obviously 3,603,298,104. Multiplying five-figure numbers is not all that useful a skill, but if we understand enough about thinking to build computers that think as well as we do, whether by design, evolution, or reverse engineering, we should understand enough to off-load more useful parts of our onboard information processing to external hardware. Now we can take advantage of Moore’s law too.
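The multiplication above is, of course, already the kind of processing better done off-board; checking it in silicon is a one-liner.

```python
# The multiplication from the paragraph above, done where it belongs: in silicon.
print(89_352 * 40_327)   # 3603298104
```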

The extreme version of this scenario merges into uploading. Over time, more and more of your thinking is done in silicon, less and less in carbon. Eventually your brain, perhaps your body as well, comes to play a minor role in your life, vestigial organs kept around mainly out of sentiment.

Short of becoming partly or entirely computers ourselves or ending up as (optimistically) the pets of computer superminds,4 I see three other possibilities. One is that the continual growth of computing power that we have observed in recent decades runs into some natural limit and slows or stops. The result might be a world where we never get human-level AI, although we might still have much better computers than we now have. Less plausibly, the process might slow down just at the right time, leaving us with peers but not masters – and a very interesting future. The only argument I can see for expecting that outcome is that that is how smart we are; perhaps there are fundamental limits to thinking ability that our species ran into a few hundred thousand years back. But it doesn’t strike me as very likely.

A second possibility is that perhaps we are not software after all. The analogy is persuasive, but until we have either figured out in some detail how we work or succeeded in producing programmed computers a lot more like us than any so far, it remains a conjecture. Perhaps my consciousness really is an immaterial soul, or at least something more accurately described as an immaterial soul than as a program running on an organic computer. It is not how I would bet, but it could still be true.

Finally, there is the possibility that consciousness, self-awareness, or will depends on more than mere processing power, that it is an additional feature that must be designed into a program, perhaps with great difficulty. If so, the main line of development in artificial intelligence might produce machines with intelligence but no initiative, natural slaves answering only the questions we put to them, doing the tasks we set, without will or objectives of their own. If someone else, following out a different line, produces a program that is a real person, smarter than we are, with its own goals, we can try to use our robot slaves to deal with the problem for us. Again it does not strike me as likely; the advantages of a machine that can ask questions for itself, formulate goals, make decisions, seem too great. But I might be wrong. Or it might turn out that self-awareness is, for some reason, a much harder problem than intelligence.



Footnotes

1 Silver, 1998.

2 Hollander, 1988.

3 For an early discussion of some of these issues, see Freitas, 1985. More recently, the courts have held that a computer can’t practice law in Texas.

4 Iain M. Banks’s Culture novels provide a science-fictional account of a society with people rather like us who are, in effect, the pets of vastly superior artificial intelligences.