1. For the history of cryptography, see David Kahn, The Codebreakers (1967).

2. The idea of public key cryptography is due to Martin Hellman and Whitfield Diffie. The first successful implementation is due to Ronald Rivest, Adi Shamir, and Leonard Adleman, who patented their algorithms and formed a company, RSA Data Security. See W. Diffie, "The First Ten Years of Public-Key Cryptography," Proceedings of the IEEE, 76:560-577, 1988. A good summary for the lay reader is Paul Fahn, "Answers to Frequently Asked Questions About Today's Cryptography." The current version is available by anonymous FTP from RSA.com. Information is also available from RSA's home page on the World Wide Web at http://www.rsa.com/.

3. A prime is an integer divisible only by itself and 1. Seven is a prime; 15 is not, since its prime factors are 3 and 5.
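To make the definition concrete, here is a minimal sketch in Python (an illustration, not part of the original text) that factors a number by trial division; a number is prime exactly when the only factor returned is the number itself.

    # Minimal trial-division factorization, for illustration only.
    def prime_factors(n):
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:      # divide out each prime factor completely
                factors.append(d)
                n //= d
            d += 1
        if n > 1:                  # whatever remains is itself prime
            factors.append(n)
        return factors

    print(prime_factors(7))    # [7]     -- 7 is prime
    print(prime_factors(15))   # [3, 5]  -- 15 is not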

4. Modern encryption involves very large numbers. A 128-bit key, for example, would be represented by about a thirty-eight-digit number--say:

27,602,185,284,285,729,509,372,650,983,276,043,748
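The arithmetic behind that figure can be checked directly. The short Python sketch below (an illustration, not from the original text) counts the digits of 2 to the 128th power, the number of distinct values a 128-bit key can take.

    # 2**128 is the number of possible 128-bit keys.
    n_keys = 2 ** 128
    print(n_keys)             # 340282366920938463463374607431768211456
    print(len(str(n_keys)))   # 39 -- a random 128-bit key runs to 38 or 39 digits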

5. This statement, like other statements about the security of encryption schemes, depends in part on the length of the key. The longer the string of numbers making up the key, the more difficult it is to break the encryption.
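A rough sketch of why length matters: each added bit doubles the number of keys an exhaustive search must try. The attack speed below is a purely illustrative assumption, not a figure from the text.

    # Each added bit doubles the keyspace; the attack speed is an assumed,
    # illustrative figure.
    trials_per_second = 10 ** 9
    seconds_per_year = 3.15e7
    for bits in (40, 56, 128):
        keys = 2 ** bits
        years = keys / trials_per_second / seconds_per_year
        print(f"{bits:>3}-bit key: {keys:.2e} keys, ~{years:.2e} years to search")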

6. I am somewhat oversimplifying the working of actual public key systems in order to keep the explanations in the text from becoming too complicated. Public key encryption is slower than some equally secure single key encryption schemes, so in practice the message is encrypted with a single key system such as DES, the DES key is encrypted with the recipient's public key, and both are sent. The recipient uses his private key to decrypt the DES key, then uses that to decrypt the message.
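The division of labor just described can be sketched with a modern library. The fragment below is an illustration only, not the author's scheme: it requires the third-party Python package cryptography, and an AES-based cipher (Fernet) stands in for DES. It encrypts the message with a fresh single-key cipher, encrypts that key with the recipient's public key, and then reverses both steps.

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Recipient generates a key pair once and publishes the public half.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender: encrypt the message with a fresh single-use symmetric key,
    # then encrypt that key with the recipient's public key; send both.
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(b"Meet me at noon.")
    wrapped_key = public_key.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )

    # Recipient: recover the symmetric key with the private key,
    # then use it to decrypt the message itself.
    recovered_key = private_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    assert Fernet(recovered_key).decrypt(ciphertext) == b"Meet me at noon."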

7. PGP (for Pretty Good Privacy) is a freely available encryption program in widespread use, available for both DOS and Macintosh computers. It allows a user to create his own pair of keys, keep track of other people's public keys, and encrypt and decrypt messages.

8. If one wishes current messages to stay private for the next ten years, one should use encryption adequate to defend against the computers of ten years from now--otherwise someone can intercept and record the messages now, then decrypt them when sufficiently powerful computers become available.
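A back-of-the-envelope version of that reasoning, under the purely illustrative assumption that an attacker's computing power doubles every eighteen months: over ten years the attacker gains a factor of roughly a hundred, which an extra seven bits of key length roughly cancels.

    import math

    # Assumption (illustrative): attacker computing power doubles every 18 months.
    years = 10
    doubling_period = 1.5
    growth = 2 ** (years / doubling_period)
    extra_bits = math.log2(growth)
    print(f"growth over {years} years: ~{growth:.0f}x; "
          f"extra key bits needed: ~{extra_bits:.1f}")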

9. The bandwidth of a communications channel is a measure of how much information it can carry--the wider the bandwidth, the more information can be transmitted per second.
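As a concrete illustration (the figures are hypothetical, not from the text), the time to send a file is simply its size divided by the channel's bandwidth:

    # Transmission time = file size (in bits) / bandwidth (in bits per second).
    file_bits = 1_000_000 * 8    # a one-megabyte file
    for name, bps in [("28.8 kbps modem", 28_800), ("10 Mbps link", 10_000_000)]:
        print(f"{name}: {file_bits / bps:,.1f} seconds")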

10. Extensive information on Chaumian digital cash can be found on the World Wide Web at http://www.digicash.com/publish/pu_fr_dc.html. See also Chaum, D., "Security Without Identification: Transaction Systems to Make Big Brother Obsolete," Communications of the ACM, vol. 28, no. 10, October 1985, pp. 1030-1044.

11. The procedure is described in Chaum, D., "The Dining Cryptographers Problem," J. Cryptology (1988) 1:65-75, and in Chaum, D., "Achieving Electronic Privacy," Scientific American, August 1992.

12. The statement that a transaction cannot be observed must be qualified in three ways. First, the sorts of encryption I have been discussing are all vulnerable to an observer with a big enough computer and enough time; a transaction is secure only if no such observer exists, or at least if nobody is willing to spend a large enough sum to break it. Second, even transactions that are cryptographically secure may be vulnerable to attacks that do not depend on encryption, such as secretly searching your house for the piece of paper on which you have imprudently written down your private key. Even in a world of strong privacy, one must still be careful. Finally, even encryption schemes that are believed to be secure are not, in most cases, provably secure--there is always the possibility that a sufficiently ingenious attack, or some future development in the relevant branches of mathematics, could dramatically reduce the time needed to crack them. For a discussion of recent experience with the vulnerability of encryption schemes, see E.F. Brickell and A.M. Odlyzko, "Cryptanalysis: A Survey of Recent Results," Proceedings of the IEEE, 76:578-593, 1988.

13. Alternatively, the pirate could try to decompile the program--deduce the source code from the machine language version--and then recompile, producing his own different but functionally equivalent version of the program. Decompiling is a hard problem--just how hard is a matter of controversy. It can be made harder by the use of compilers designed to produce machine code whose detailed working is hard to deduce.

The approach I am describing does not protect against copying observable features of a program; whether and when such copying violates present U.S. copyright law is a complicated question, the answer to which is not yet entirely clear.

14. Another way of protecting a computer program is to keep it on the owner's computer and sell not the program itself but the right to access it over the network.

15. For example, a children's movie might, as some do, collect large revenues from the sale of toys based on it. In a world of strong privacy, intellectual property law is still enforceable when what is being sold is a physical object (a toy) rather than information (the text of a book). Other approaches are observed in the context of publicly distributed software--shareware, freeware, and the like. Some programmers ask for--and sometimes get--payments for the use of their shareware programs, based either on moral suasion or on tie-ins with less easily pirated goods, such as support. Philip Zimmermann received no royalties for writing PGP, but the effect on his reputation may have greatly increased his future earning power.

16. This is one of the central ideas of "True Names," a science fiction story by Vernor Vinge, a computer scientist who was one of the early writers to recognize the potential social implications of computer networks and associated technologies. Individuals know each other in cyberspace, but their true names, their actual physical identities, are closely guarded secrets.

17. The idea of competitive private production of law in the physical world is explored in D. Friedman, The Machinery of Freedom: Guide to a Radical Capitalism, part III (Open Court, 1989). For a more recent discussion, see D. Friedman, "Law as a Private Good," Economics and Philosophy 10 (1994), 319-327.

18. I am assuming only a modestly improved version of the sort of virtual reality we already have, which provides only sight and sound. In the long run, one could imagine a virtual reality technology that bypasses the sense organs to feed directly into the nervous system--a controlled dream. With that technology, a virtual community appears to its participants exactly like a real community; the only difference is that they can produce only virtual objects. Virtual food, however tasty, provides no nutrition, so real food must still be produced somewhere in the real world.

19. The term was originated by Timothy C. May, author of "The Crypto Anarchist Manifesto," which may be found on the Web at http://www.quadralay.com/www/Crypt/Crypto-Anarchist/crypto-anarchist.html. It is reprinted in Building In Big Brother, ??? Ed., at pp. ???. For a more extensive exploration of May's views, see his "Cyphernomicon" at http://www.swiss.ai.mit.edu/6095/articles/cyphernomicon/CP-FAQ.

20. It is worth noting that the Clinton administration, despite its support for controlling encryption via the Clipper chip, has been outspoken in its support for the development of computer networks.

21. The reason we would expect governments to behave in this way is twofold. Individuals with political power, such as governors and congressmen, have very insecure property rights in their offices and thus little incentive to make long-term investments--to bear political costs now in order to produce benefits for the holders of those offices twenty years hence. Individual voters have secure property rights in both their citizenship and their franchise, but because of rational ignorance they are unlikely to have the information necessary to evaluate the long-term consequences of the policies followed by their government.

22. Consider the example of cordless phones, whose signals can be easily intercepted by anyone with a scanner. The more expensive models now include encryption, although it is unlikely that drug dealers, terrorists, or bankers make up a significant fraction of the relevant market. Most people prefer to be able to hold at least some of their conversations in private.

23. Via a procedure known as anonymous file transfer protocol (FTP)--anonymous because the user does not require a password to access the server.
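For illustration, an anonymous FTP session can be scripted with Python's standard ftplib; the host name below is a placeholder, not one mentioned in the text.

    from ftplib import FTP

    ftp = FTP("ftp.example.com")   # hypothetical server
    ftp.login()                    # no password required: logs in as "anonymous"
    print(ftp.nlst())              # list the files available for download
    ftp.quit()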

24. The case is "Bernstein v. US Dept. of State, et al." Information can be found on the Web at http://www.eff.org/pub/Privacy/ITAR_export/Bernstein_case/.

25. A good history of the agency, which has played an important role both in the development of encryption and in the attempt to limit its general availability, is J. Bamford, The Puzzle Palace, Houghton Mifflin, Boston, 1982.

26. Q: If the Administration were unable to find a technological solution like the one proposed, would the Administration be willing to use legal remedies to restrict access to more powerful encryption devices?

A: This is a fundamental policy question which will be considered during the broad policy review. (From White House Press Release on the Clipper, April 16, 1993)

27"In 1993, 976 U.S. police wiretaps were authorized, 17 were never installed, and 55 were

electronic bugs [which are unaffected by encryption], leaving 904 nonbug wiretaps installed. They cost an average of $57,256 each, for a total of $51.7 million spent on legal police wiretaps in 1993... . (I will neglect the possibility that police spent much more than this on illegal taps.) This is less than one part in 600 of the $31.8 billion spent by U.S. police in 1990 [12], and most of it was spent on time the police sat and listened to 1.7 million conversations (at $32 per conversation) ...

... Wiretaps cost an average of $14,300 per arrest aided in 1993 ..., or almost five times the $3,000 average spent per non-traffic arrest by U.S. police in 1991... ."

Robin Hanson, "Can Wiretaps Remain Cost-Effective?" Communications of the ACM, December 1994 (also available on the Web at http://www.hss.caltech.edu/~hanson/wiretap-cacm.html).

28. A third possible explanation for secrecy, and the one that seems to be favored by the supporters of the Clipper initiative, is that Skipjack (the algorithm used in the Clipper chip) incorporates new and valuable methods of encryption that the NSA wishes to keep out of the hands of foreign governments. For a discussion of the Clipper proposal from a generally favorable viewpoint, see Dorothy E. Denning, "The Clipper Encryption System," American Scientist, 81(4):319-323, July-August 1993.

29"Common objections include: the Skipjack algorithm is not public ... and may not be secure; the key escrow agencies will be vulnerable to attack; there are not enough key escrow agencies; the keys on the Clipper chips are not generated in a sufficiently secure fashion; there will not be sufficient competition among implementers, resulting in expensive and slow chips; software implementations are not possible; and the key size is fixed and cannot be increased if necessary." (Fahn 1993)