10

Intermission: What’s a Meta Phor?

I am typing these words into a metaphorical document in a metaphorical window on a metaphorical desktop; the document is contained in a metaphorical file folder represented by a miniature picture of a real file folder. I know the desktop is metaphorical because it is vertical; if it were a real desktop, everything would slide to the bottom.

All this is familiar to anyone whose computer employs a graphical user interface (GUI). We use that collection of layered metaphors for the same reason we call unauthorized access to a computer a break-in and a machine language program burned into a computer chip, unreadable by the human eye, a writing. The metaphor lets us transport a bundle of concepts from one thing, around which that bundle first collected, to something else to which we think most of the bundle is appropriate. Metaphors reduce the difficulty of learning to think about new things. Well-chosen metaphors do it at a minimal cost in wrong conclusions.

Consider the metaphor that underlies modern biology: evolution as intent. Evolution is not a person and does not have a purpose. Your genes are not people either and also do not have purposes. Yet the logic of Darwinian evolution implies that each organism tends to have those characteristics that it would have if it had been designed for reproductive success. Evolution produces the result we would get if each gene had a purpose – increasing its frequency in future generations – and acted to achieve that purpose by controlling the characteristics of the bodies it built.

Everything stated about evolution in the language of purpose can be restated in terms of variation and selection, Darwin’s original argument. But since we have dealt with purposive beings for much longer than we have dealt with the logic of Darwinian evolution, the restated version is further from our intuitions; putting the analysis that way makes it harder to understand, clumsier. That is why biologists1 routinely speak in the language of purpose, as when Dawkins titled his brilliant exposition of evolutionary biology “The Selfish Gene.”

For a final example, consider computer programming. When you write your first program, the approach seems obvious: Give the computer a complete set of instructions telling it what to do. By the time you have gotten much beyond telling the computer to type “Hello World,” you begin to realize that a complete set of instructions for a complicated set of alternatives is a bigger and more intricate web than you can hold in your mind at one time.

People who design computer languages deal with that problem through metaphors. Currently the most popular are the metaphors of object-oriented languages such as Java and C++. A programmer builds classes of objects. None of these objects are physical things in the real world; each exists only as a metaphorical description of a chunk of code. Yet the metaphor – independent objects, each owning control over its own internal information, interacting by sending and receiving messages – turns out to be an extraordinarily powerful tool for writing and maintaining programs, programs more complicated than even a very talented programmer could keep track of if he tried to conceptualize each as a single interacting set of commands.
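
To see how the metaphor cashes out in code, here is a minimal sketch in Python; the class and names are invented for illustration, not taken from any real program. Each object owns its internal information, and the rest of the program interacts with it only by sending messages, that is, by calling its methods.

    class Account:
        # A metaphorical "object": it owns its balance and answers messages.
        def __init__(self, owner):
            self.owner = owner
            self._balance = 0            # internal information the object controls

        def deposit(self, amount):       # a message other code can send
            self._balance += amount

        def report(self):                # another message; the object decides what to reveal
            return self.owner + ": " + str(self._balance)

    # The rest of the program never touches _balance directly; it only sends messages.
    alice = Account("Alice")
    alice.deposit(100)
    print(alice.report())                # Alice: 100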

METAPHORICAL CRIMES

From time to time I read a news story about an intruder breaking into a computer, searching through the contents, and leaving with some of them – but I don’t believe it. Looking at the computer sitting on my desk, it is obvious that intrusion is impractical for anything much bigger than a small cat. There isn’t room. And if one of my cats wants to get into my computer, it doesn’t have to break anything – just hook its claws into the plastic loop on the side (current Macs are designed to be easily upgradeable) and pull.

“Computer break-in” is a metaphor. So are the fingerprints and watermarks of Chapter 8. Computer programmers have fingers and occasionally leave fingerprints on the floppy disks or CDs that contain their work, but copying the program does not copy the prints.

New technologies make it possible to do things that were not possible, sometimes not imagined, fifty years ago. Metaphors are a way of fitting those things into our existing pattern of ideas, instantiated in laws, norms, language. We already know how to think about people breaking into other people’s houses and what to do about it. By analogizing unauthorized access to a computer to breaking into a house we fit it into our existing system of laws and norms.

The choice of metaphor matters. What actually happens when someone “breaks into” a computer over the internet is that he sends the computer messages, the computer responds to those messages, and something happens that the owner of the computer does not want to happen. Perhaps the computer sends out what was supposed to be confidential information. Perhaps it erases its hard disk. Perhaps it becomes one out of thousands of unwitting accessories to a distributed denial of service attack, sending thousands of requests to read someone else’s web page – with the result that the overloaded server cannot deal with them all and the page temporarily vanishes from the web. Perhaps it spews out spam at the command of its new master.

The computer is doing what the cracker2 wants instead of what its owner wants. One can imagine the cracker as an intruder, a virtual person traveling through the net, making his way to the inside of the computer, reading information, deleting information, giving commands. That is how we are thinking of it when we call the event a break-in.

To see how arbitrary the choice of metaphor is, consider a lower tech equivalent. I want to serve legal papers on you. In order to do so, my process servers have to find you. I call your home number. If you do not answer, I tell the servers to look somewhere else. If you do answer, I hang up and send them in.

Nobody is likely to call what I have just described a break-in. Yet it fits almost precisely the earlier description. Your telephone is a machine that you have bought and connected to the phone network for a purpose. I am using your machine without your permission for a different purpose, one you disapprove of – finding out whether you are home, something you do not want me to know. With only a little effort, you can imagine a virtual me running down the phone line, breaking into your phone, peeking out to see if you are in, and reporting back. An early definition of cyberspace was “where a telephone conversation happens.”

We now have two metaphors for unauthorized access to a computer – housebreaking and an unwanted phone call. They have very different legal and moral implications.

Consider a third – what crackers refer to as “human engineering,” tricking people into giving them the secret information needed to access a computer. It might take the form of a phone call to a secretary from a company executive outside the office who needs immediate access to the company’s computer. The secretary, whose job includes helping company executives with their problems, responds with the required passwords. The name of the executive may not be immediately familiar, but would you, if you were the secretary, want to expose your ignorance of the names of the top people in the firm you work for?

Human engineering is both a means and a metaphor for unauthorized access. What the cracker is going to do to the computer is what he has just done to the secretary – call it up, pretend to be someone authorized to get the information it holds, and trick it into giving that information. If we analogize a computer not to a house or a phone but to a person, unauthorized access is not housebreaking but fraud – against the computer.

We now have three quite different ways of fitting the same act into our laws, language, and moral intuitions – as housebreaking, fraud, or an unwanted phone call. The first is criminal, the second often tortious, the third legally innocuous.

In the early computer crime cases, courts were uncertain what the appropriate metaphor was. Much the same problem arose in the early computer copyright cases. Courts were uncertain whether a machine language program burned into the ROMs of a computer was properly analogized to a writing (protectable), a fancy cam (unprotectable, at least by copyright), or (the closest equivalent for which they had a ruling by a previous court) the paper tape controlling a player piano.3

In both cases, the legal uncertainty was ended by legislatures – Congress when it revised the copyright act to explicitly include computer programs, state legislatures when they passed computer crime laws that made unauthorized intrusion a felony. The copyright decision was correct, at least as applied to literal copying, for reasons I have discussed at some length elsewhere.4 The verdict on the intrusion case is less clear.

Choosing a Metaphor

We have three different metaphors for fitting unauthorized use of a computer into our legal system. One suggests that it should be a felony, one a tort, one a legal if annoying act. To choose among them, we consider how the law will treat the acts in each case and why one treatment or the other might be preferable.

The first step is to briefly sketch the difference between a crime and a tort. A crime is a wrong treated by the legal system as an offense against the state. A criminal case has the form “The State of California v. D. Friedman.” So far as the law is concerned, the state of California is the victim – the person whose computer was broken into is merely a witness. Whether to prosecute, how to prosecute, whether and on what terms to settle (an out of court settlement in a criminal case is called a plea bargain) are decided by employees of the state of California. The cost of prosecution is paid by the state and the fine, if any, paid to the state. The punishment has no necessary connection to the damage done by the wrong, since the offense is not “causing a certain amount of damage” but “breaking the law.”

A tort is a wrong treated by the legal system as an offense against the victim; a civil case has the form “A. Smith v. D. Friedman.” The victim decides whether to sue, hires and pays for the attorney, controls the decision of whether to settle out of court, and collects the damages awarded by the court. In most cases, the damage payment awarded is supposed to equal the damage done to the victim by the wrong – enough to “make whole” the victim.

An extensive discussion of why and whether it makes sense to have both kinds of law and why it makes sense to treat some kinds of offenses as torts and some as crimes is matter for another book; interested readers can find it in Chapter 18 of my book Law’s Order.5 For our purposes it will be sufficient to note some of the legal rules associated with the two systems, some of their advantages and disadvantages, and how they might apply to a computer intrusion.

One difference we may start with is that, as a general rule, criminal conviction does, and tort does not, require intent – although the definition of intent is occasionally stretched pretty far. On the face of it, unauthorized access clearly meets that requirement. Or perhaps not. Consider three stories – two of them true.

The Boundaries of Intent

The year is 1975. The computer is an expensive multi-user machine located in a dedicated facility. An employee asks it for a list of everyone currently using it. One of the sets of initials he gets belongs to his supervisor – who is standing next to him, obviously not using the computer.6

The computer was privately owned but used by the Federal Energy Administration, so they called in the FBI. The FBI succeeded in tracing the access to Bertram Seidlitz, who had left six months earlier after helping to set up the computer’s security system. When they searched his office, they found forty rolls of computer printout paper containing source code for WYLBUR, a text-editing program.

The case raised a number of questions about how existing law fit the new technology. Did secretly recording the “conversation” between Seidlitz and the computer violate the law requiring that recordings of phone conversations be made only with the consent of one of the parties (or a court order, which they did not have)? Was the other party the computer; if so could it consent? Did using someone else’s code to access a computer count as obtaining property by means of false or fraudulent pretenses, representations, or promises – the language of the statute? Could you commit fraud against a machine? Was downloading trade secret information, which WYLBUR was, a taking of property? The court found that it could, you could, and it was; Seidlitz was convicted.

One further question remains: Was he guilty? Clearly he used someone else’s access codes to download and print out the source code to a computer program. The question is why.

Seidlitz’s answer was quite simple. He believed the security system for the computer was seriously inadequate. He was demonstrating that fact by accessing the computer without authorization, downloading stuff from inside the computer, and printing it out. When he was finished, he planned to send all forty rolls of source code to the people now in charge of the computer as a demonstration of how weak their defenses were. One may suspect – although he did not say – that he also planned to send them a proposal to redo the security system for them. If he was telling the truth, his access, although unauthorized, was not in violation of the law he was convicted under – or any then existing law that I can think of.

The strongest evidence in favor of his story was forty rolls of printer output. In order to make use of source code, you have to compile it – which means that you first have to get it into a form readable by a computer. In 1975, optical character recognition, the technology by which a computer turns a picture of a printed page back into machine-readable text, was not yet generally available; even today it is not entirely reliable. If Seidlitz was planning to sell the source code to someone who would actually use it, he was also planning at some point to have someone type all forty rolls back into a computer – making no mistakes, since a mistake might introduce a bug into the program. It would have been far easier, instead of printing the source code, to download it to a tape cassette or floppy disk. Floppy disks capable of being written to had come out in 1973, with a capacity of about 250K; a single 8” floppy could store about 100 pages’ worth of text. Forty rolls of printout would be harder to produce and a lot less useful than a few floppy disks. On the other hand, the printout would provide a more striking demonstration of the weakness of the computer’s security, especially for executives who did not know very much about computers.

One problem with using law to deal with problems raised by a new technology is that the legal system may not be up to the job. It is likely enough that the judge in U.S. v. Seidlitz (1978) had never actually touched a computer and more likely still that he had little idea what source code was or how it was used.

Seidlitz had clearly done something wrong. But deciding whether it was a prank or a felony required some understanding of both the technology and the surrounding culture and customs – which a random judge was unlikely to have. In another unauthorized access case,7 decided a year earlier, the state of Virginia had charged a graduate student at Virginia Polytechnic Institute with fraudulently stealing more than $5,000. His crime was accessing a computer that he was supposed to access in order to do the work he was there to do – using other students’ passwords and keys to access it, because nobody had gotten around to allocating computer time to him and he was embarrassed to ask for it. He was convicted and sentenced to two years in the state penitentiary. The sentence was suspended, he appealed, and on appeal he was acquitted – on the grounds that what he had stolen was services, not property. Only property counted for purposes of the Virginia statute, and the scrap value of the computer cards and printouts was less than the $100 that the statute required. While charges of grand larceny were still pending against him, VPI gave him his degree, demonstrating what they thought of the seriousness of his offense.

When I tell my students the sad case of Bertram Seidlitz, I like to illustrate the point with another story, involving more familiar access technologies. This time I am the hero, or perhaps villain.

The scene is the front door of the University of Chicago Law School. I am standing there because, during a visit to Chicago, it occurred to me that I needed to check something in an article in the Journal of Legal Studies before emailing off the final draft of an article of my own. The University of Chicago Law School not only carries the JLS, it produces the JLS; the library is sure to have the relevant volume. While checking the article, perhaps I can drop in on some of my ex-colleagues and see how they are doing.

Unfortunately, it is a Sunday during Christmas break; nobody is in sight inside and the door is locked. The solution is in my pocket. When I left the Law School last year to take up my present position in California I forgot to give back my keys. I take out my key ring, find the relevant key, and open the front door of the law school.

In the library another problem arises. The volume I want is missing from the shelf, presumably because someone else is using it. It occurs to me that one of the friends I was hoping to see is both a leading scholar in the field and the editor of the JLS. He will almost certainly have his own set in his office – as I have in my office in California.

I knock on his door; no answer. The door is locked. But at the University of Chicago Law School – a very friendly place – the same key opens all faculty offices. Mine is in my pocket. I open his door, go in, and there is the Journal of Legal Studies on his office shelf. I take it down, check the article, and go.

The next day, on the plane home, I open my backpack and discover that, as usual, I was running on autopilot; instead of putting the volume back on the shelf I took it with me. When I get home, I mail the volume back to my friend with an apologetic note of explanation.

Let us now translate this story into a more objective account and see where I stand, legally speaking.

Using keys I had no legal right to possess, I entered a locked building I had no legal right to enter, went into a locked room I had no legal right to enter, and left with an item of someone else’s property that I had no authorization to take. Luckily for me, the value of one volume of the Journal of Legal Studies is considerably less than $5,000, so although I may possibly be guilty of burglary under Illinois law, I am not covered by the federal law against interstate transportation of stolen property. Aside from the fact that the Federal government has no special interest in the University of Chicago Law School,8 the facts of my crime were nearly identical to the facts of Seidlitz’s. Mine was just the low-tech version.

As it happens, this story is almost entirely fiction – inspired by the fact that I really did forget to give back my keys until a year or so after I left Chicago, so could have gotten into both the building and a faculty office if I had wanted to. But even if it were true, I would have been at no serious risk of anything worse than embarrassment.9 Everyone involved in my putative prosecution would have understood the relevant facts – that not giving keys back is the sort of thing absent-minded academics do, that using those keys in the same way you have been using them for most of the past eight years, even if technically illegal, is perfectly normal and requires no criminal intent, that looking at a colleague’s copy of a journal without his permission when he isn’t there to give it is also perfectly normal, and that absent-minded people sometimes walk off with things instead of putting them back where they belong. Seidlitz – assuming he really was innocent – was not so lucky.

My third story, like my first, is true.10 The scene this time is a building in Oregon belonging to Intel. The year is 1993. The speaker is an Intel employee named Mark Morrissey.

On Thursday, October 28, at 12:30 in the afternoon, I noticed an unusual process running on a Sun computer which I administer. Further checking convinced me that this was a program designed to break, or crack, passwords. I was able to determine that the user “merlyn” was running the program. The username “merlyn” is assigned to Randal Schwartz, an independent contractor. The password-cracking program had been running since October 21st. I investigated the directory from which the program was running and found the program to be Crack 4.1, a powerful password cracking program. There were many files located there, including passwd.ssd and passwd.ora. Based on my knowledge of the user, I guessed that these were password files for the Intel SSD organization and also an external company called O’Reilly and Associates. I then contacted Rich Cower in Intel security.

Intel security called in the local police. Randy Schwartz was interrogated at length; the police had a tape recorder but did not use it. Their later account of what he said was surprisingly detailed, given that it dealt with subjects the interrogating officers knew little about, and strikingly different from his account of what he said. The main facts, however, are reasonably clear.

Randy Schwartz was a well-known computer professional, the author of two books on Perl, a language used in building things on the web. He had a reputation as the sort of person who would rather apologize afterwards than ask permission in advance. One reason Morrissey was checking the computer Thursday afternoon was to make sure Schwartz wasn’t running any jobs on it that might interfere with its intended function. As he put it in his statement, “Randal has a habit of using as much CPU power as he can find.”

Schwartz worked for Intel as an independent contractor running parts of their computer system. He accessed the system from his home using a gateway through the Intel firewall that he had created on instructions from Intel for the use of a group working offsite but retained for his own use. In response to orders from Intel he had first tightened its security and later shut it down completely – then quietly recreated it on a different machine and continued to use it.

How to Break Into Computers11

The computer system at Intel, like many others, used passwords to control access. This raises an obvious design problem. In order for the computer to know if you typed in the right password, it needs a list of passwords to check yours against. But if there is a list of passwords somewhere in the computer’s memory, anyone who can get access to that memory may be able to find the list.

You can solve this problem by creating a public key/private key pair and throwing away the private key – more generally, by creating some procedure that encrypts but does not decrypt.12 Every time a new password is created, encrypt it and add it to the computer’s list of encrypted passwords. When a user types in a password, encrypt that and see if what you get matches one of the encrypted passwords on the list. Someone with access to the computer’s memory can copy the list of encrypted passwords, can copy the procedure for encrypting them, but cannot copy a procedure for decrypting them because it is not there. So he has no way of getting from the encrypted version of the password in the computer’s memory to the original password that he has to type to get the desired level of access to (and control over) the computer.
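
Here is a minimal sketch of that scheme in Python, using an ordinary one-way hash in place of the discarded private key; the user name and password are illustrative, not anyone's actual setup.

    import hashlib

    def scramble(password):
        # One-way: easy to compute, but no procedure for reversing it is stored anywhere.
        return hashlib.sha256(password.encode()).hexdigest()

    # What the computer keeps: only the scrambled versions of the passwords.
    stored = {"merlyn": scramble("V7g9H47ax")}

    def check(user, attempt):
        # Scramble the attempt and compare; the original password is never on the list.
        return stored.get(user) == scramble(attempt)

    print(check("merlyn", "wrong guess"))   # False
    print(check("merlyn", "V7g9H47ax"))     # True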

A program such as Crack solves that problem by guessing passwords, encrypting the guesses, and comparing the result to the list of encrypted passwords. If it had to guess at random, the process would take a very long time. But despite the instructions of the people running the system, people who create passwords frequently insist on using their wife’s name, or their date of birth, or something else easier to remember than V7g9H47ax. It does not take all that long for a computer program to run through a dictionary of first names and every date in the past seventy years, encrypt each, and check it against the list. One of the passwords Randy Schwartz cracked belonged to an Intel vice president. It was the word PRE$IDENT.
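
A program like Crack amounts to something like the following, sketched in Python under the same one-way hashing assumption as above; the user name and word list are invented, apart from the PRE$IDENT anecdote.

    import hashlib

    def scramble(password):
        return hashlib.sha256(password.encode()).hexdigest()

    # The attacker has copied the list of scrambled passwords; the originals are not there.
    stolen = {"vice_president": scramble("PRE$IDENT")}

    # First names, birth dates, dictionary words: the guesses people actually choose.
    wordlist = ["alice", "password", "01011960", "PRE$IDENT"]

    for guess in wordlist:
        for user, scrambled in stolen.items():
            if scramble(guess) == scrambled:
                print(user + "'s password is " + guess)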

Randy Schwartz’s defense was the same as Bertram Seidlitz’s. He was responsible for parts of Intel’s computer system. He suspected that its security was inadequate. The obvious way to test that suspicion was to see whether he could break into it. Breaking down doors is not the usual way of testing locks, but breaking into a computer does not, by itself, do any damage.

By correctly guessing one password, using that to get at a file of encrypted passwords, and using Crack to guess a considerable number of them, Randy Schwartz demonstrated the vulnerability of Intel’s system. I suspect, knowing computer people although not that particular computer person, that he was also entertaining himself by solving the puzzle of how to get through Intel’s barriers while proving how much cleverer he was than the people who set up the system he was cracking – including one particularly careless Intel vice president. He was simultaneously (but less successfully) running Crack against a password file from a computer belonging to O’Reilly and Associates, the company that publishes his books.

Since Intel’s computer system contains a lot of valuable intellectual property protected (or not) by passwords, demonstrating its vulnerability might be considered a valuable service. Intel did not see it that way. They actively aided the state of Oregon in prosecuting Randy Schwartz for violating Oregon’s computer crime law. He ended up convicted of two felonies and a misdemeanor – unauthorized access to, alteration of, and copying information from a computer system.

Two facts lead me to suspect that Randy Schwartz may have been the victim, not the criminal. The first is that Intel produced no evidence that he had stolen any information from them other than the passwords themselves. The other is that, when Crack was detected running, it was being run by “merlyn” – Randy Schwartz’s username at Intel. The Crack program was in a directory named “merlyn.” So were the files for the gate through which the program was being run. I find it hard to believe that a highly skilled computer network professional attempting to steal valuable intellectual property from one of the world’s richest and most sophisticated high-tech firms would do it under his own name. If I correctly interpret the evidence, what actually happened was that Intel used Oregon’s computer crime law to enforce its internal regulations against a subcontractor in the habit of breaking them. Terminating the offender’s contract is a more conventional, and more reasonable, response.

In fairness to Intel, I should add that almost all my information about the case comes from an extensive web site set up by supporters of Randy Schwartz – extensive enough to include the full transcript of the trial. Neither Intel nor its supporters have been willing to post a reply on the web. I have, however, corresponded with an acquaintance who was in a position to know something about the case. He believed that Schwartz was guilty but was unwilling to offer any evidence.

Perhaps he was guilty; Intel might have reasons for keeping quiet other than a bad conscience. Perhaps Seidlitz was guilty. It is hard, looking back at a case with very imperfect information, to be sure my verdict on its verdict is correct. But I think both cases, along with my own fictitious burglary, show problems in applying criminal law to something as ambiguously criminal as unauthorized access to a computer, hence provide at least a limited argument for rejecting the break-in metaphor in favor of one of the alternatives.

Is Copying Stealing? The Bell South Case

One problem with trying to squeeze unauthorized access into existing criminal law is that intent may be ambiguous. Another is that it does not fit very well. The problem is illustrated by U.S. v. Neidorf, entertainingly chronicled in The Hacker Crackdown, Bruce Sterling’s account of an early and badly bungled campaign against computer crime.

The story starts in 1988 when Robert Riggs, a college student, succeeded in accessing a computer belonging to Bell South and downloading a document about the 911 emergency system. He had no use for the information in the document, which dealt with bureaucratic organization – who was responsible for what to whom – not technology. But written at the top was “WARNING: NOT FOR USE OR DISCLOSURE OUTSIDE BELLSOUTH OR ANY OF ITS SUBSIDIARIES EXCEPT UNDER WRITTEN AGREEMENT,” which made getting it an accomplishment and the document a trophy. He accordingly sent a copy to Craig Neidorf, who edited a virtual magazine – distributed from one computer to another – called Phrack. Neidorf cut out about half of the document and included what was left in Phrack.

Eventually someone at Bell South discovered that their secret document was circulating in the computer underground – and ignored it. Somewhat later, federal law enforcement agents involved in a large-scale crackdown on computer crime descended on Riggs. He and Neidorf were charged with interstate transportation of stolen property valued at more than $5,000 – a Federal offense. Riggs agreed to a guilty plea; Neidorf refused and went to trial.

Bell South asserted that the twelve-page document had cost $79,449 to produce – well over the $5,000 required for the offense. It eventually turned out that they had calculated that number by adding to the actual production costs – mostly the wages of the employees who created the document – the full value of the computer it was written on, the printer it was printed on, and the computer’s software. The figure was accepted by the federal prosecutors without question. Under defense questioning, it was scaled back to a mere $24,639.05. The case collapsed when the defense established two facts: that the warning on the 911 document was on every document that Bell South produced for internal use, however important or unimportant, and that the information it contained was routinely provided to anyone who asked for it. One document, containing a more extensive version of the information published in Phrack, information Bell South had claimed to value at just under $80,000, was sold by Bell South for $13.

In the ancient days of single-sex college dormitories there was a social institution called a panty raid. A group of male students would access, without authorization, a dormitory of female students and exit with intimate articles of apparel. The objective was not acquiring underwear but defying the authority of the college administration. Robert Riggs engaged in a virtual panty raid – and ended up pleading guilty to a felony. Craig Neidorf received the booty from a virtual panty raid and displayed it in his virtual window. For that act, the federal government attempted to convict him of offenses that could have led to a prison term of over sixty years.

Part of the problem, again, was that the technology was new, hence unfamiliar to many of the people – cops, lawyers, judges – involved in the case. Dealing with a world they did not understand, they were unable to distinguish between a panty raid and a bank robbery.

Another part of the problem was that the law the case was prosecuted under was designed to deal with the theft and transportation of physical objects. It was natural to ask the questions appropriate to that law – including how much the stolen object cost to produce. But what was labeled theft was in fact copying; after Neidorf copied the document, Bell South still had it. The real measure of the damage was not what it cost to produce the document but the cost to Bell South of other people having the information. Bell South demonstrated, by its willingness to sell the same information at a low price, that it regarded that cost as negligible. Robert Riggs was prosecuted under a metaphor. On the evidence of that case, it was the wrong metaphor.

Crime or Tort?

Bell South’s original figure for the cost of creating the 911 document was one that no honest person could have produced. If you disagree, ask yourself how Bell South would have responded to an employee who, sending in his travel expenses for a 100-mile trip, included the full purchase price of his car – the precise equivalent of what Bell South did in calculating the cost of the document. Bell’s testimony about the importance and secrecy of the information contained in the document was also false, but not necessarily dishonest; the Bell South employee who gave it may not have known that the firm provided the same information to anyone who asked for it. Those two false statements played a major role in a criminal prosecution that could have put Craig Neidorf in prison and did cost him, his family, and his supporters hundreds of thousands of dollars in legal expenses.

Knowingly making false statements that cost other people money is usually actionable. But the testimony of a witness in a trial is privileged – even if deliberately false, the witness is not liable for the damage done. He can be prosecuted for perjury – but that decision is made not by the injured party but by the state.

Suppose the same case had occurred under tort law. Bell South sues Riggs for $79,449. In the course of the trial it is established that the figure was wildly inflated by the plaintiff, that in any case the plaintiff still has the property, so has a claim only for damage done by the information getting out, and that that damage is zero since the information was already publicly available from the plaintiff. Not only does Bell South lose its case, it is at risk of being sued for malicious prosecution, which is not privileged. In addition, of course, Bell South, rather than the federal government, would have been paying the costs of prosecution. Putting such cases under tort law would have given Bell South an incentive to check its facts and figure out whether it had really been injured before, not after, it initiated the case – saving everyone concerned a good deal of time, money, and unpleasantness.

One advantage of tort law is that the plaintiff might have been liable for the damage it did by claims that it knew were false. Another is that it would have focused attention on the relevant issue – not the cost of producing the document but the injury to the plaintiff from having it copied. That is a familiar issue in the context of trade secret law, which comes considerably closer than criminal law to fitting the actual facts of the case.

A further problem with criminalizing such acts is illustrated by the fate of Robert Riggs. Unlike Craig Neidorf, he accepted a plea bargain and could have spent a substantial amount of time in prison – although in fact his sentence was cancelled after the trial made it clear that he had done nothing seriously wrong. One reason for agreeing to a guilty plea, presumably, was the threat of a much longer jail term if the case went to trial and he lost. Criminal law, by providing the prosecution with the threat of very severe punishments, poses the risk that innocent defendants may agree to plead guilty to a lesser offense. If the case had been a tort prosecution by the victim, the effective upper bound on damages would have been everything that Riggs owned.

There is, however, another side to that argument. Under tort law, the plaintiff pays for the prosecution. If winning the case is likely to be expensive and the defendant does not have the money to pay large damages, it may not be worth suing in the first place – in which case there is no punishment and no incentive not to commit the tort. That problem – providing an adequate incentive to prosecute when prosecution is private – is one we already touched on in Chapter 5 and will return to in Chapter 12.



Footnotes

1 And evolutionary psychologists. See Jerome H. Barkow, Leda Cosmides, and John Tooby, eds., The Adapted Mind: Evolutionary Psychology and the Generation of Culture.

2 “Hacker” has come to be applied to people who do illegal things with computers, apparently as a result of a mistaken folk etymology: noncomputer people saw the term, guessed what it meant, and guessed wrong. In the computer culture a hacker is someone who does ingenious things in tricky and unconventional ways, a programmer who modifies a videogame program to run twice as fast via a trick – a hack – that may stop working the next time the operating system gets upgraded. I like to imagine a programmer observing an elephant for the first time: “It picks up things how? What a brilliant hack.” Which is why I am using “cracker” instead.

3 Found unprotectable in White-Smith Music Publishing Company v. Apollo Company, 209 U.S. 1 (1908). The cam metaphor is due to John Hersey.

4 Friedman 2000, Chapter 11.

5 Friedman 2000, Chapter 18.

6 The case is United States v. Seidlitz, 589 F.2d 152 (4th Cir. 1978); my summary is here.

7 Lund v. Virginia, Supreme Court of Virginia, 217 Va. 688, 232 S.E.2d 745 (1977). My summary.

8 Or perhaps it should. A Dean of the University of Chicago Law School, noting that three of the federal appeals court judges in the Seventh Circuit had previously been members of his faculty, suggested that it might raise constitutional issues if the government had delegated to him the power to select judges.

9 A judge of my acquaintance who read my fictional story expressed the opinion that, if I had acted as described, I would not have been guilty of burglary. Which is a relief – since although I didn't, I easily could have.

10 A general discussion of the case and the initial report.

11 A discussion of these technologies by someone who knows much more about them than I do.

12 Still more generally, by creating some form of one-way encryption or hashing – a way of scrambling information that does not require you to have the information necessary to unscramble it.