This article was published as
"Privacy and Technology." Social Philosophy & Policy
17:186-212 (2000) and in The Right to Privacy, edited by
Ellen Frankel Paul, Fred D. Miller, Jr., and Jeffrey Paul, Cambridge
University Press, 2000. The webbed version here is based on a late
draft on my hard drive, and so differs in a few details from the
published version. It is being provided on the web page with
permission of the Journal.
Introduction
Privacy:
1. state of being apart from the company or observation of others
...
The
definition above, from my 1944 Webster’s unabridged, nicely
encapsulates two of the intertwined meanings of "privacy." In the
first sense, physical seclusion, the level of privacy in modern
developed societies is extraordinarily high by historical standards.
We take it for granted that one bed in a hotel will be occupied by
either one person or a couple–not by several strangers. At
home, few of us expect to share either bed or bedroom with our
children. In these and a variety of other ways, increased physical
privacy has come as a byproduct of increased wealth.[1]
The
situation with regard to informational privacy is less clear. While
the ability of other people to see what we are doing with their own
eyes has decreased as a result of increased physical privacy, their
ability to observe us indirectly has increased–for two quite
different reasons.
One
is the development of increasingly sophisticated technologies for
transmitting and intercepting messages. Eavesdropping requires that
the eavesdropper be physically close to his victim; wiretapping does
not. Current satellite observation technology may not quite make it
possible to read lips from orbit, but it is getting close.
The
other reason is the development of greatly improved technologies for
storing and manipulating information. What matters to me is not
whether information about me exists but whether other people can find
it. Even if all of the information I wish to keep private–say
my marital history or criminal record–exists in publicly
accessible archives, it remains, for all practical purposes, private
so long as the people I am interacting with do not know that it
exists nor where to look for it. Modern information processing has at
least the potential to drastically reduce that sort of privacy. The
same search engines and collections of information that provide the
ideal tools for the researcher who dives into the worldwide web in
the hope of emerging with a fact in his teeth work equally well
whether the fact is historical or personal. Privacy through obscurity
is not, or at least soon will not be, a practical option.
The
two sorts of privacy–physical and informational–are
connected. Physical privacy is a means, although a decreasingly
effective means, to informational privacy. And lack of informational
privacy–in the limiting case, a world where anyone could know
everything about you at every instant–feels like lack of
physical privacy, a sort of virtual crowding.
Physical
privacy can be a means to information privacy–but so can the
opposite; the individual in the crowded city is more anonymous, has
more informational privacy, than in the less crowded village. But the
reason for that is that his privacy is protected by the difficulty of
sorting through such a vast amount of data in order to find the
particular facts relevant to him. That form of protection cannot
survive modern information technology. Hence the connection between
physical and information privacy may become stronger, not weaker,
over the course of the next few decades.
A
third sort of privacy is attentional privacy, the privacy violated by
unsolicited email or telephone calls from people trying to sell you
things that you do not want to buy. Modern technology makes sending
messages less expensive, facilitating bulk email, but also makes
filtering out messages without human intervention easier, thus
lowering the cost of dealing with unwanted messages.
In
this article I will be focusing on issues of informational privacy.
But, as we will see, the technology of protecting informational
privacy may depend in part on the existence of physical privacy. One
interesting question for the future will be whether it is possible to
develop technologies that break that link, that make it practical to
engage in information transactions without taking any physical
actions that can be observed and understood by an outside
observer.
The
first section of the article explores the questions of what
informational privacy is, why and whether it is a good thing, and why
it is widely regarded as a good thing. The second section surveys new
technologies useful for either protecting or violating an individual’s
control over information about himself. The final section summarizes
my conclusions.
What
Is Informational Privacy and Why Does It
Matter?
If
all information about you is readily available to anyone who wants
it, you have no informational privacy. If nobody else knows anything
about you, you have perfect informational privacy. All of us live
between those two extremes.
Informational
privacy is not always desirable. Film stars and politicians pay
professional public relations firms to reduce their privacy by
getting (some) information about them widely distributed. Many other
people, however, bear costs in order to reduce the amount other
people know about them, demonstrating that, to them, privacy has
positive value. And many people also bear costs learning about
others, demonstrating that to them the privacy of those other people
has negative value. At the same time, most people regard privacy in
the abstract as a good thing. It is common to see some new product,
technology, or legal rule attacked as reducing privacy, rare to see
anything attacked as increasing privacy.
This
raises two related questions. The first is why individuals
(sometimes) value their own privacy, and so are willing to take
actions to protect it. The second is why many individuals speak and
act as though the cost to them of a reduction in their privacy is
larger than the benefit to them of a similar reduction in other
people’s privacy, making privacy in general, and not merely
privacy for themselves, a good.
The
answer to the first question is fairly straightforward. Information
about me in the hands of other people sometimes permits them to gain
at my expense. They may do so by stealing my property–if, for
example, they know when I will or will not be home. They may do so by
getting more favorable terms in a voluntary transaction–if, for
example, they know just how much I am willing to pay for what they
are selling.[2]
They may do so by preventing me from stealing their property–by,
for example, not hiring me as company treasurer after discovering
that I am a convicted embezzler or not lending me money after
discovering that I have repeatedly declared bankruptcy.
Information
about me in other people’s hands may also sometimes make me
better off–for example, the information that I am an honest and
competent attorney. But privacy rights[3]
as commonly interpreted do not prevent people from giving out
information about themselves, merely from obtaining information about
others without their consent. If I have control over information
about myself I can release it when doing so benefits me and keep it
private when releasing it would make me worse off.[4]
Hence it is not surprising that people value having such
control.
This
does not, however, answer the second question. To the extent that my
control over information about me makes me better off at the expense
of other people, and their control over information about them makes
them better off at my expense, it is not clear why I should regard
protection of everyone’s privacy as on net a good thing. The
examples I offered included one case–where my privacy protected
me from burglary–in which privacy produced a net benefit, since
the gain to a burglar is normally less than the loss to his victim.
It included one case–where my privacy permitted me to steal
from or defraud others–in which privacy produced a net loss,
for similar reasons. And it included one case–bargaining–where
the net effect appeared to be a wash.[5]
That
third case is worth a little more attention. Suppose you have
something to sell–say an apple. I am the only buyer. The apple
is worth one dollar to you and two to me. We are engaged in the game
known as bilateral monopoly.[6]
At any price between one dollar and two, both of us benefit by the
transaction, but the higher the price is within that range the more
of the benefit goes to you and the less to me.
I
can try to get a lower price by persuading you that the apple is
worth less to me than it really is, hence that if you insist on a
high price there will be no sale. You, similarly, can try to get a
higher price by persuading me that the apple is worth more to you than
it really is, so that if I don’t agree to a higher price there will be no
sale. One risk with both tactics is that they may succeed too well.
If you persuade me that the apple is worth more than two dollars to
you, or if I persuade you that it is worth less than one dollar to
me, the deal falls through.
Suppose
I get accurate information on the value of the apple to you. One
result is that your persuasion no longer works, making it more likely
that I will get the apple at a low price. That is merely a transfer
from you to me, with no net benefit. A second result is to make
bargaining breakdown less likely. I will still try to persuade you
that the apple is worth less than two dollars to me, but I will not
try to persuade you that it is worth less than one dollar, because I
now know that doing so is against my interest. This second result
represents a net benefit, since it increases the chance that the
apple will sell (net gain one dollar) instead of not selling (net
gain zero).
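The apple example can be sketched as a small simulation. The bluffing rule below (each side claims a value drawn uniformly from a range that overshoots his true value, and the deal dies when the buyer's claim falls below the seller's) is my own illustrative assumption, not a model from the article; it is meant only to exhibit the two effects just described: bluffs can "succeed too well" and kill the deal, while a buyer who knows the seller's true value never bluffs below it, so breakdown becomes less likely.

```python
import random

random.seed(0)
SELLER_VALUE, BUYER_VALUE = 1.0, 2.0  # apple worth $1 to the seller, $2 to the buyer
TRIALS = 100_000

def breakdown_rate(buyer_claim_low):
    """Fraction of bargains that fall through when each side bluffs.

    The seller overstates his value (claims up to $2.20); the buyer
    understates his (claims down to `buyer_claim_low`).  A deal happens
    only if the buyer's claimed value is at least the seller's claim.
    """
    failures = 0
    for _ in range(TRIALS):
        buyer_claim = random.uniform(buyer_claim_low, BUYER_VALUE)
        seller_claim = random.uniform(SELLER_VALUE, 2.2)
        if buyer_claim < seller_claim:
            failures += 1
    return failures / TRIALS

# Regime 1: the buyer is ignorant and may bluff below the seller's true value.
blind = breakdown_rate(buyer_claim_low=0.8)
# Regime 2: the buyer knows the apple is worth $1 to the seller, so he never
# claims a value below that; his bluff can no longer "succeed too well".
informed = breakdown_rate(buyer_claim_low=SELLER_VALUE)

print(f"breakdown rate, ignorant buyer: {blind:.3f}")
print(f"breakdown rate, informed buyer: {informed:.3f}")
```

Under these assumptions the informed regime always shows the lower breakdown rate, which is the "net benefit" half of the argument; the transfer half (the informed buyer also pays less) is not modeled here.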
Generalizing
the argument, it looks as though privacy produces, on average, a net
loss in the bargaining case.[7]
In the other cases, it produces a gain if it is being used to protect
other rights (assuming that those rights have been defined in a way
that makes their protection efficient) and a net loss if it is being
used to violate other rights (with the same assumption). There is no
obvious reason why the former situation should be more common than
the latter. So it remains puzzling why people in general support
privacy rights–why they think it is, on the whole, a good thing
for people to be able to control information about
themselves.
Privacy
Rights and Rent Seeking
One
possible approach to this puzzle starts by viewing privacy rights as
a mechanism for reducing costs associated with rent seeking–expenditures
by one person designed to benefit himself at the cost of another.
Consider again our bilateral monopoly bargaining game. Assume this
time that each player can, at some cost, obtain information about the
value of the apple to the other player. I can plant listening devices
or miniature video cameras about your home in the hope of seeing or
hearing something that will tell me just how much you value the
apple. You can take similar actions with regard to me. Such
activities may produce some net gain, by reducing the risk of
bargaining breakdown, but they also produce a net cost–the cost
of the spying.
The
argument becomes clearer if we include not only your efforts to learn
things about me but my efforts to prevent you from doing so. Suppose,
for example, that I have a taste for watching pornographic videos and
my boss is a puritan who does not wish to employ people who enjoy
pornography. We consider two possible situations–one in which
my boss is, and one in which he is not, able to keep track of what I
am renting from the local video store. We assume that I know which is
the case.
If
I know the boss is monitoring my rentals from that store, I respond
by renting videos from a more distant and less convenient outlet. My
boss is no better off as a result of the reduction in my privacy; I
am still viewing pornography and he is still ignorant of the fact. I
am worse off by the additional driving time required to visit the
more distant store.
Generalizing
the argument, we consider a situation where I have information about
myself and can, at some cost, prevent other people from having that
information. Under one legal (or technological) regime, the cost of
doing so is low, under another it is high. Under both regimes,
however, the cost is low enough so that I am willing to pay it. The
former regime is then superior, not because I end up with more
privacy but because I end up getting it at a lower cost. Hence laws,
norms, or technologies that lower the cost of protecting privacy may
produce net benefits.[8]
I
say “may” because the conclusion depends on assuming that
it will, in either case, be worth the cost to me to protect my
privacy. If we assume instead that under the second regime protecting
my privacy is prohibitively expensive, and if we are considering
situations where the loss of privacy produces a transfer from me to
someone else but no net cost (or, a fortiori, if it produces a
net benefit), we get the opposite result.[9]
If privacy is cheap I buy it and, even though it is cheap, it still
costs something. If privacy is expensive, I don’t buy it and,
while I am then worse off for not having it, someone else is better
off at my expense, so on net we are better off by the elimination of
what I would have spent for privacy.
Privacy
as a way of reducing rent seeking provides a possible explanation for
why circumstances that make privacy easier to obtain might be
desirable, but an explanation very much dependent on assumptions
about the technology of getting and concealing information. In a
world where concealing information is costly but not too costly to be
worth doing, making it less costly produces a net benefit. In a world
where concealing information is so costly that nobody bothers to do
it, making it less costly increases the amount spent protecting
privacy, which is a net loss. More generally and precisely, lowering
the cost of privacy reduces expenditures on privacy if the demand is
inelastic, increases them if it is elastic.[10]
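The elasticity claim can be checked against a one-line demand curve. The constant-elasticity form q = A·p^(−ε) is an assumption chosen for convenience, not anything in the article; under it, total spending on privacy is p·q = A·p^(1−ε), which rises with price when demand is inelastic (ε < 1) and falls with price when it is elastic (ε > 1).

```python
def expenditure(price, elasticity, scale=1.0):
    """Total spending on privacy under constant-elasticity demand.

    Quantity demanded is q = scale * price**(-elasticity), so total
    spending is price * q = scale * price**(1 - elasticity).
    """
    return scale * price ** (1.0 - elasticity)

# Inelastic demand (elasticity 0.5): halving the price of privacy
# reduces total spending on it -- the net-benefit case in the text.
assert expenditure(0.5, elasticity=0.5) < expenditure(1.0, elasticity=0.5)

# Elastic demand (elasticity 2.0): halving the price increases total
# spending on privacy -- the net-loss case.
assert expenditure(0.5, elasticity=2.0) > expenditure(1.0, elasticity=2.0)
```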
This
explanation also depends on another assumption–that the
information about me starts in my control, so that facilitating
privacy means making it easier for me to protect what I already
possess. But much information about me comes into existence in other
people’s possession. Consider, for example, court records which
record my conviction on a criminal charge, or a magazine’s
mailing list with my name on it. Protecting my privacy with regard to
such information requires some way of removing that information from
the control of those people who initially possess it and transferring
control to me. That is, in most cases, a costly process. There are
lots of reasons, unconnected with privacy issues, why we want people
in general to have access to court records, and there is no obvious
non-legal mechanism by which I can control such
access,[11]
so if we do nothing to give people rights over such information about
them, the information will remain public and nothing will have to be
spent to restrict access to it.
Privacy
as Property
An
alternative argument in favor of making privacy easier to obtain
starts with a point that I made earlier: if I have control over
information about me but transferring that information to someone
else produces net benefits, then I can give or sell that information
to him. Hence, one might argue, by protecting my property rights in
information about me we establish a market in information. Each piece
of information moves to the person who values it most, maximizing net
benefit.
So
far this is an argument not for privacy but for private property in
information.[12]
To get to an argument for privacy requires two further steps. The
first is to observe that most information about me starts out in my
possession, although not necessarily my exclusive possession. Hence
giving anyone else exclusive rights to it requires somehow depriving
me of it–which, given the absence of technologies to produce
selective amnesia, is difficult. It would be possible to deprive me
of control over information by making it illegal for me to make use
of it or transmit it to others, but enforcing such a restriction
would be costly, perhaps prohibitively costly.
The
second step, following a general line of argument originated by
Coase,[13]
is to note that, to the extent that our legal rules assign control
over information to the person to whom it is most valuable, they save
us the transaction costs of moving it to that person. My earlier
arguments suggest that information about me is sometimes most
valuable to me (where it protects me from a burglar), sometimes to
someone else. There are, however, a lot of different someone elses.
So giving each person control over information about himself,
especially information that starts in his possession, is a legal rule
that should minimize the transaction cost of getting information to
its highest valued user.
Stated
in the abstract, this sounds like a reasonable argument–and it
would be, if we were talking about other forms of property. There are
two problems with applying a property solution to personal
information. The first is that transacting over information is often
difficult, because it is hard to tell the customer what you are
selling without, in the process, giving it to him. The second is that
a given piece of information can be duplicated at a cost close to zero,
so that while the efficient allocation of a car is to the single
person who has the highest value for it, the efficient allocation of
a piece of information is to everyone to whom it has positive value.
That implies that legal rules that treat information as a commons,
free for everyone to make copies, lead to the efficient allocation.
That
conclusion must be qualified in two ways. First, as we have already
seen, legal protection of information may be a cheaper substitute for
private protection, in which case if the information is going to be
protected because it is in someone’s interest to do so, we
might as well have it protected as inexpensively as possible. Second,
you cannot copy information unless it exists. Thus we get the
familiar argument from the economics of intellectual property, which
holds that patent and copyright result in a suboptimal use of
existing intellectual property, since the marginal cost of an
additional user is zero while the price is positive, but that in exchange
we get a more nearly optimal production of intellectual
property.
This
is a legitimate argument for property rules in contexts such as
copyright or patent. It is less convincing in the context of privacy,
since information about me is either produced by me as a byproduct of
other activities or produced by other people about me–in which
case giving me property rights over it will not give them an
incentive to produce it. It does provide an argument for privacy in
some contexts, usually commercial, where privacy is used to protect
produced information.
Privacy
as an Inefficient Norm
Robert
Ellickson, in Order Without Law, argues that close-knit
communities tend to produce efficient norms. One of his examples is
the set of norms developed by 19th century whalers to deal
with situations in which one ship harpooned a whale and another ship
eventually brought it in. He offers evidence that those norms changed
over time in a way that efficiently adapted them to the
characteristics of the changing species of whales being hunted.
This
story raises a puzzle. The reason the whalers had to change the
species they were hunting, and the associated norms, was that they
were hunting one species after another into near extinction. That
suggests that a norm against overwhaling would have produced sizable
benefits. Yet no such norm developed.
My
explanation starts with a different puzzle: what is the mechanism
that produces efficient norms? My answer starts by distinguishing
between two different sorts of efficient norms. A locally efficient
norm is a norm which it is in the interest of a small group of
individuals to follow among themselves–an example would be a
norm of fair dealing. A globally efficient norm is one that it is in
the interest of everyone to have everyone follow.
Locally
efficient norms can be adopted by small groups. Since the groups
benefit by the norm, adoption spreads. Eventually everyone follows
the norm. That mechanism does not work for a norm that is globally
but not locally efficient, such as a norm against overwhaling. If
some whalers follow it, it is in the interest of other whalers to
take advantage of the opportunity by increasing their efforts. Hence
we would expect systems of private norms to be locally but not
globally efficient–which corresponds to what Ellickson found
for whaling.[14]
This
brief sketch of norms provides a possible explanation for the
widespread existence of norms of privacy–norms holding that
individuals are entitled to conceal personal information about
themselves, and that other individuals ought not to seek to discover
such information. Such norms may well be locally efficient even if
globally inefficient.
Why
would such norms be locally efficient? Consider some piece of
information about me–say my value for the apple in the earlier
discussion of bilateral monopoly. If only I possess that piece of
information, I can either withhold it, to my benefit, or offer to
sell it to my trading partner, supposing that there is some way in
which I can prove to him that the information I am selling is
truthful. If only you possess the information, you can offer to sell
it to either me or my trading partner, whichever bids more. But if
several people possess the information, no one of them can sell it
for a significant price since if he tries he will be underbid by one
of the others–the cost of reproduction of information being
near zero. The logic is exactly the same as if we wished to maximize
the revenue from a patent and were comparing the alternatives of
having one owner of the patent or several, where in the latter case
each owner could freely license to third parties.
It
follows that if we are members of a close-knit group containing all of
the people who can readily discover personal information about each
other, and if we are also engaged in dealings with non-members of the
group such that possession by them of personal information about one
of us would make him better off at our expense, a norm of privacy is
likely to be in our interest. Its effect is to give each of us
monopoly ownership of information about himself, permitting him to
maximize the return from that information, whether by keeping it
secret or by selling it. To the extent that the return comes at the
expense of the non-members we are dealing with, the norm may be
globally inefficient. But it is locally efficient, which provides a
possible explanation of why it exists.
Blackmail
and Privacy
"If
blackmail were legal, blackmailers and their customers (today called
"victims") would enter into legally enforceable contracts whereby the
blackmailer would agree for a price never to disclose the information
in question; the information would become the legally protected trade
secret of the customer." (Posner 1993)
Laws
against blackmail provide an interesting puzzle. Suppose you know
something about me that I would prefer not to be public. I agree to
pay for your silence. At first glance, the transaction seems
obviously beneficial. I value your silence more than the money, which
is why I made the offer; you value the money more than publishing my
secret, which is why you accepted. We are both better off, so why
should anyone object?[15]
One
answer is that we have started too late in the process–after
you obtained the information. The possibility of blackmail gives
people an incentive to spend resources acquiring information about
other people and gives potential targets an incentive to spend resources
concealing such information. If blackmail is legal I spend a thousand
dollars trying to conceal the information, you spend a thousand
dollars trying to discover it. If you succeed I then pay you three
thousand dollars to keep your mouth shut, leaving us, on net, two
thousand dollars worse off than when we started (you are two thousand
dollars better off, I am four thousand worse off). If you fail, we
are each out a thousand dollars, so again we are, on net, two
thousand dollars worse off than when we started. So a law that made
it impractical for you to profit by discovering such information
provides a net benefit of two thousand dollars. We are back with the
rent seeking explanation of privacy.
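The arithmetic of the paragraph above can be laid out explicitly. The dollar figures are the article's own; the only modeling step is noting that the hush payment is a pure transfer between the two parties and so cancels out of their combined wealth.

```python
# The article's numbers: each side spends $1,000 on the information
# contest; if the blackmailer succeeds, the victim pays $3,000 for silence.
CONCEAL_COST = 1_000   # victim's spending to conceal the secret
SEARCH_COST  = 1_000   # blackmailer's spending to discover it
HUSH_PAYMENT = 3_000   # price of silence if the search succeeds

def net_change(success):
    """Change in the two parties' combined wealth, relative to no contest.

    The hush payment cancels between victim and blackmailer, so the net
    social loss is the contest spending whether or not the search succeeds.
    """
    victim = -CONCEAL_COST - (HUSH_PAYMENT if success else 0)
    blackmailer = -SEARCH_COST + (HUSH_PAYMENT if success else 0)
    return victim + blackmailer

# Search succeeds: victim is $4,000 worse off, blackmailer $2,000 better off.
assert net_change(success=True) == -2_000
# Search fails: each side is simply out its $1,000.
assert net_change(success=False) == -2_000
```

Either way the pair is $2,000 worse off than if neither had spent anything, which is the net benefit a ban on blackmail is said to capture.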
An
alternative answer is that we ought to include more people in our
calculations–in particular, we ought to include the people you
are threatening to tell my secret to. The reason I am willing to pay
for your silence is that doing so makes me better off–possibly
at their expense. Perhaps the secret is my record for fraud or
malpractice–and, having moved from where my misdeeds were first
unveiled, I am looking for new, and poorly informed, customers.
Perhaps the secret is what happened to my first wife–and I am
now seeking to obtain a replacement. In these and many other
circumstances, when the blackmailer accepts a payment for silence he
imposes an external cost on those who would otherwise have learned
what he knows. So perhaps legal rules that permit me to buy his
silence make the society as a whole worse off, by keeping him silent
and others ignorant.
As
should be clear, these two arguments for banning blackmail are not
only different, they are in an important sense inconsistent. If we
assume that the same amount of information will be produced whether
or not blackmail is legal–if we are imagining that the typical
blackmailer obtained his information by accident not effort–then
the rent seeking argument vanishes but the public good argument
replaces it. The potential blackmailer has the information; if he
cannot sell it he might as well give it away. If, on the other hand,
we assume that the information on which blackmail is based is
primarily obtained for that purpose, the rent seeking argument is
revived but the other argument vanishes. If blackmail is illegal, the
information is never generated so the public is never
warned.
So
far we seem to have arguments against permitting blackmail both in
the case where blackmailers discover information by accident and in
the case where they deliberately search for it. The conclusion
becomes less clear if we assume that the information a blackmailer
discovers is not merely useful to other people in dealing with the
victim but discreditable to the victim–as we usually do assume
in discussing blackmail. If we suppose that the blackmailer
discovered the information by accident and will publish it if he
cannot sell it to the victim–perhaps in the hope of a financial
or reputational reward–then laws against blackmail make sense,
since they result in the potential victim being convicted and
punished for his crimes. If we permit blackmail he is still punished,
with the punishment taking the form of a payment to the blackmailer,
but the reason he makes the payment is that it costs him less than
having his crime revealed, so the result is a lower
punishment.
This
is not true if we assume that the incentive provided by the ability
to blackmail people plays a major role in the production of the
information used to do so. In that case blackmail becomes private
enforcement of law.[16]
If blackmail is legal, people have an incentive to look for evidence
of other people’s crimes and use it to blackmail them, thus
imposing a punishment on criminals who would otherwise go free. The
same argument applies if the information concerns violations of norms
rather than laws, assuming that we believe the norms are efficient
ones and punishment for their violation is appropriate.
Earlier
I pointed out that one argument for intellectual property law is that
it provides an incentive to generate valuable information. Similarly
here, the form of transferable property right that exists if
blackmail is legal also creates an incentive to generate valuable
information. The information, once generated, is suppressed–but
there is still a benefit, since the process generates a penalty for
the behavior the information is about, and blackmail is particularly
likely with regard to behavior that we would like to
penalize.
Privacy
and Government
“It
would have been impossible to proportion with tolerable exactness the
tax upon a shop to the extent of the trade carried on in it, without
such an inquisition as would have been altogether insupportable in a
free country.”
(Adam
Smith’s explanation of why a sales tax is impossible; Wealth
of Nations Bk V, Ch II, Pt II, Art. II)
“The
state of a man’s fortune varies from day to day, and without an
inquisition more intolerable than any tax, and renewed at least once
every year, can only be guessed at.” (Smith’s explanation
of why an income tax is impossible, Bk V Article IV)
So
far I have ignored an issue that is central to much of the concern
over privacy: privacy from government. The logic of the situation is
the same as in the situations we have been discussing. If the
government knows things about me–for example my income–that
permits the government to benefit itself at my expense. In some cases
it also permits government to do things that benefit me–pay me
money because my income is low, for example–but in such
situations privacy rights leave me free to reveal the information if
I wish.
The
case of privacy from government differs from the case of privacy from
private parties in two important respects. The first is that although
private parties occasionally engage in involuntary transactions such
as burglary, most of their interactions with each other are voluntary
ones, which makes it less likely that someone else having information
about me will result in an inefficient transaction. Governments
engage in involuntary transactions on an enormously larger scale. The
second difference is that governments almost always have an
overwhelming superiority of physical force over the individual
citizen. It follows that while I can protect myself from my fellow
citizens, to a considerable degree, by locks and burglar alarms, I
can protect myself from government actors only by keeping from them
the information they need to benefit themselves at my
expense.[17]
The
implications of these differences for the value of privacy depend
very much on one’s view of government. If, at one extreme, one
regards government as the modern equivalent of the philosopher king,
then individual privacy simply makes it harder for government actors to
do good. If, at the other extreme, one regards government as a
particularly large and well organized criminal gang supporting itself
at the expense of the taxpayers, individual privacy against
government becomes an unambiguously good thing. Most Americans
appear, judging by expressed views on privacy, to be close enough to
the latter position to consider privacy against government as on the
whole desirable, although with an exception for cases where they
believe that privacy might be used primarily to protect private
criminals.
The
Weak Case for Privacy: A Summary
Explaining
why individuals wish control over information about themselves is
easy. Explaining why it is in my interest that both I and the people
I deal with have such control, or why people believe that it is in
their interest and act accordingly, is more difficult.
We
have considered three reasons why privacy might be in the general
interest—might be efficient in the economic sense. One is that
people want to control information about themselves, so the easier it
is to do the less they will have to spend to do it. People want
information about other people, and the harder it is to get, the less
they will spend getting it. As the odd asymmetry of the two sides of
the argument—in one case lowering a price reduces expenditure,
in the other raising a price reduces expenditure—suggests, the
argument depends on specific assumptions about relevant demand and
supply functions. It goes through rigorously if the demand for
privacy is inelastic and the demand for information about others is
elastic—which might, but need not, be true.[18]
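The elasticity condition can be checked with made-up numbers. The demand figures below are purely illustrative; expenditure is simply price times quantity demanded.

```python
# Illustrative numbers for the elasticity argument. Expenditure on a
# good is price times quantity demanded at that price.

def expenditure(price, quantity):
    return price * quantity

# Inelastic demand for privacy: halving the price barely raises the
# quantity bought, so total spending on privacy falls.
assert expenditure(10, 100) > expenditure(5, 110)   # 1000 > 550

# Elastic demand for information about others: doubling the price cuts
# quantity sharply, so total spending on information also falls.
assert expenditure(5, 200) > expenditure(10, 60)    # 1000 > 600
```

In both cases total expenditure falls, which is what the argument requires; with the opposite elasticities, either change would raise expenditure instead.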
Put
in that form the argument sounds abstract, but the concrete version
should be obvious to anyone who has ever closed a door behind him,
loosened his tie, taken off his shoes, and put his feet up on his
desk. Privacy has permitted him to maintain his reputation as someone
who behaves properly without having to bear the cost of actually
behaving properly—which is why there is no window between his
office and the adjacent hallway.
The
second reason privacy might be efficient is that property rights
permit goods to be allocated to their highest valued use. If
protection of privacy is easy, then individuals have reasonably
secure property rights over information about themselves. It makes
sense to give such rights to the individual the information is about
because he is more likely to be the highest valued user than any
single other person—and if he is not, he can always sell the
information to someone else. The problem with this argument is that
information, unlike other goods, can be reproduced at near zero cost,
making it likely that the highest valued user is everybody—and
there are significant transaction costs preventing the transfer of
information from its subject to everybody even if that transfer
produces net benefits.
The
third reason that privacy might be efficient is that it provides a
way in which individuals may protect themselves against government.
The strength of that argument depends very much on one’s view
of the nature of government.
We
also saw one argument against privacy—that it permits people to
act badly while evading the consequence of having people know that
they acted badly—an argument worked out in the context of
arguments for and against legalizing blackmail.
The
conclusion so far is that the case for privacy—for the claim
that it is desirable to lower the cost to individuals of controlling
information about themselves—is a weak one. Under some
circumstances privacy produces a net gain but under others a net
loss.
Other
Privacies
So
far we have been talking only about informational privacy. The link
to physical privacy is fairly obvious; if there is someone else in
the room, he will probably notice when you loosen your tie and take
off your shoes. Physical privacy is, among other things, a means to
maintain informational privacy.
The
link to attentional privacy is also obvious but the implications less
clear. When someone sends me a message such as a phone call or email
it costs me something to examine the message and determine whether it
is of interest. In a world of uncertainty, some messages are of
interest to me, some are not, and neither I nor the sender knows for
certain until I have examined the message.
Both
I and the sender would prefer that the sender send me messages that
are of interest to me; there is no point to calling someone up in
order to sell him something he has no interest in buying. Where we
differ is in just where we draw the line between messages that are or
are not worth their cost. The sender wants to send messages if and
only if the chance I will be interested, and respond in a way which
benefits him, is sufficient to justify the cost to him of sending the
message.[19]
I want him to send messages if and only if the chance is sufficient
to justify the cost to me of examining and evaluating the message.
The result in a world where sending messages is expensive and
evaluating them inexpensive is that I receive inefficiently few
messages—so I buy additional messages by (for example)
subscribing to magazines. The result in a world where sending
messages is cheap and evaluating them expensive is that I receive
more messages than I want. Resolving that problem requires a negative
subscription price, a mechanism by which I can charge people for
sending me messages.
The
connection to informational privacy comes because the sender needs
information about me in order to decide whether it is worth the cost
to him of sending a message. The implication is ambiguous because
increasing the amount of such information available to him may make
the outcome better or worse for me. In the limiting case of complete
information, potential senders know for certain whether I want to buy
what they are selling, so I receive all the offers I would want to
receive and do not have to waste time examining any that I do not
want to receive.[20]
In the limiting case of no information, and in a world where the cost
of sending messages is significant, it is never worth sending a
message; this cannot be an improvement on other alternatives, since
the other alternatives always permit the option of ignoring all
messages—cutting the bottom out of my mailbox and putting a
waste basket underneath it.
More
generally, increasing the information other people have about you can
benefit you by making it easier for those who have offers you are
interested in to find you and easier for those whose offers you are
not interested in to discover the fact and save themselves the cost
of making the offers. If only all the world knew that I didn’t
have a mortgage on my house, I would no longer be annoyed by phone
calls from people offering to refinance it.
As
this example suggests, one way of getting the best of both worlds is
to have control over information about yourself and use that control
to make some information public while keeping other information
private. We will return to that possibility later, after discussing
technologies that facilitate that approach.
Finally,
it is worth noting that different societies have had different norms
with regard to privacy, some of which surely reflect the differing
value to individuals of having information about themselves widely
known. Consider the English upper class at the beginning of the
nineteenth century, as depicted by Jane Austen. Every gentleman’s
income appears to have been a matter of public knowledge. One reason
may have been that the information was crucial to families with
daughters on the marriage market. A gentleman who went to some
trouble to conceal his financial situation would be signaling not a
taste for privacy but an income below his pretended
status.[21]
We
are now finished with our theoretical discussion of privacy. One
thing that discussion has made clear is that whether it is desirable
for individuals to be able to control information about themselves depends
on a variety of technologies—in the economist’s sense, in
which a technology is simply a way of transforming inputs to outputs.
In particular, it depends on technologies for obtaining, concealing,
and transmitting information—which will be the subject of the
next part of this article.
The
Technology of Privacy
Over
the course of the past fifty years, a variety of technologies have
developed which substantially affect the cost of obtaining
information about other people, concealing information about oneself
and transacting in information. For our purposes, they may be grouped
into three broad categories: information processing, encryption, and
surveillance.
Information
Processing
The
earliest and best known of these technologies is information
processing. Fifty years ago, a firm or government bureau possessing
information on millions of individuals faced daunting problems in
making use of it. Today, the average citizen can afford, and may well
own, computer hardware and software capable of dealing with a
database of that size.
One
implication is that organizations that already have large-scale data
collections are more able to use them, hence that privacy rights, in
the sense in which I have been using the term, are weaker. A second
implication is that dispersed information which nobody found worth
collecting in the past may be routinely collected in the future.
It
is possible to hinder that development by legal rules restricting the
collection and sale of data, and such rules—for example, the
Fair Credit Reporting Act—exist. But doing so is costly, and it
is far from clear that it is useful. For the most part, the
information is used by private parties to facilitate voluntary
transactions with others—an activity that typically produces
net benefits.[22]
Given that information is collected for that purpose, it is hard to
design legal rules that prevent its occasional use for other
purposes, such as locating potential targets for criminal activity.
And as the growth of the internet moves more and more of the
commercial activity relevant to U.S. citizens outside of the
jurisdiction of U.S. courts, regulation over the collection and use
of such information will become even more difficult.
An
alternative approach is control over information about individuals by
those individuals, through a combination of physical privacy and
contract. Such information is frequently produced by voluntary
transactions, such as purchases of goods and services, and thus
starts out in the possession of both parties to the transaction. If
one party wishes the information to be kept confidential, it can so
specify in the terms of the initial transaction—as is, of
course, often done in a variety of settings. The same information
processing technology that makes it relatively inexpensive to keep
track of large numbers of facts about large numbers of people also
makes it inexpensive to keep track of which of them have provided you
information with conditions, perhaps detailed conditions, on its
disclosure.
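The record-keeping involved can be sketched in a few lines. The field names and terms below are hypothetical; the point is only that each fact is stored together with the disclosure conditions the customer attached to it, and the conditions are checked before any use.

```python
# Sketch of storing facts about customers together with the disclosure
# terms attached at the time of the transaction. All names are invented.

records = [
    {"customer": "C-1001", "fact": "bought hiking boots",
     "terms": {"resale": False, "internal_marketing": True}},
    {"customer": "C-1002", "fact": "subscribed to newsletter",
     "terms": {"resale": True, "internal_marketing": True}},
]

def usable_for(purpose):
    """Return only the records whose attached terms permit this use."""
    return [r for r in records if r["terms"].get(purpose, False)]

assert len(usable_for("resale")) == 1              # one customer allowed resale
assert len(usable_for("internal_marketing")) == 2  # both allowed internal use
```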
A
more exotic and potentially more secure approach that may become
increasingly practical as a result of technologies to be discussed in
the next section is to engage in transactions anonymously, thus never
putting the relevant information in the control of anyone else, not
even the other party to the transaction. More generally, one
possibility implicit in the combination of technologies for
information processing and encryption is a shift to something more
like a private property/freedom of contract model for personal
information.
Encryption
Many
forms of modern communication, including email and cellular
telephony, are physically insecure—intercepting messages is
relatively easy. In order to protect the privacy of such
communications, it is necessary to make them unreadable by those who
might intercept them. This is done by encryption—scrambling a
message in such a way that only someone with the proper information,
the key, can unscramble it.
The
most important modern development in this field is public key
encryption.[23]
An individual generates a pair of keys—two long numbers having
a particular mathematical relation to each other. If one key is used
to scramble a message the other is required to unscramble it. In
order to make sure that messages sent to me remain confidential, all
I have to do is to make sure that one of my keys (my “public
key”) is widely available, so that anyone who wants to send me
a message can find it. The other (my “private key”) is my
secret, never revealed to anyone. Anyone who has my public key can
use it to encrypt a message to me. If someone else obtains the public
key, he can send me secret messages too. But only someone with my
private key, which I need never make available to any other person,
can read the messages.
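The relation between the two keys can be illustrated with textbook RSA at toy scale. The primes below are the standard small-number example; real keys are hundreds of digits long and messages are padded before encryption, so treat this only as a sketch of the underlying mathematics.

```python
# Textbook RSA with toy numbers: two small primes generate the key pair.
# Whatever one key of the pair scrambles, only the other unscrambles.

p, q = 61, 53                # the secret primes
n = p * q                    # the modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                       # public exponent: (e, n) is my public key
d = pow(e, -1, phi)          # private exponent: (d, n) is my private key

message = 42                         # a message encoded as a number below n
ciphertext = pow(message, e, n)      # anyone with the public key can encrypt
recovered = pow(ciphertext, d, n)    # only the private key decrypts

assert recovered == message
assert ciphertext != message
```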
The
same technology solves a related problem—how to prove to the
recipient of my message that it is really from me. In order to
digitally sign a message, I encrypt it with my private
key.[24]
The recipient decrypts it with my public key. The fact that what he
gets is a message rather than gibberish demonstrates that it was
encrypted with the matching private key, which only I have.
A
digital signature not only demonstrates, more securely than an
ordinary signature, that I really sent the message, it also
demonstrates it in a way that I cannot later deny. You now possess a
digitally signed message—the original, before decryption—which
you could not have created yourself. So you can prove to interested
third parties that I actually sent the message, whether or not I am
willing to admit it. And since there is no way of changing the
digitally signed message without making the signature invalid, a
digital signature, unlike a physical signature, demonstrates that the
message has not been altered since it was signed.
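Signing is the same mathematics run in the other direction: the private exponent scrambles, the public exponent checks. A minimal sketch follows, again at toy key sizes; real schemes sign a cryptographic hash of the message with standardized padding.

```python
# Textbook RSA signature at toy scale. The signer keeps d secret;
# anyone holding the public pair (e, n) can verify.

p, q = 61, 53
n = p * q
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, known only to the signer

def sign(m):
    """Sign a message encoded as a number below n; requires the private key."""
    return pow(m, d, n)

def verify(m, signature):
    """Check a signature using only the public key."""
    return pow(signature, e, n) == m

m = 1234                       # the message, as a number
sig = sign(m)
assert verify(m, sig)          # the genuine message verifies
assert not verify(1235, sig)   # any alteration invalidates the signature
```

The second assertion is the tamper-evidence property: the signature commits the signer to exactly this message and no other.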
The
same technology also has two other privacy enhancing applications.
One is an anonymous remailer. If I wish to communicate with someone
without the fact of our communication being known, I send the message
through a third party in the business of relaying messages. In order
to preserve my privacy from both the remailer and potential snoops, I
encrypt my message with the recipient’s public key, add to it
the recipient’s email address, encrypt the whole package with
the remailer’s public key, and send it to the remailer. The
remailer uses his private key to strip off the top layer of
encryption, permitting him to read the email address and forward the
message. If I am concerned that the remailer himself might want to
keep track of who I am communicating with, I can bounce the message
through multiple remailers, providing each with the address of the
next. Unless all of them are jointly spying on me, my secret is
safe.
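The layering works like nested envelopes, each labeled with the key that can open it. In the sketch below, the encrypt/decrypt pair is a stand-in for real public key encryption, and all names and addresses are hypothetical; only the structure matters.

```python
# Schematic layered ("onion") remailing. Each sealed envelope records
# which public key it was sealed for; only the matching private key opens it.

def encrypt(payload, public_key):
    return ("sealed-for", public_key, payload)

def decrypt(envelope, private_key, keypairs):
    label, pub, payload = envelope
    assert keypairs[pub] == private_key, "wrong private key for this envelope"
    return payload

# Key pairs, indexed public -> private (each party knows only its own private key).
keypairs = {"alice-pub": "alice-priv",
            "remailer1-pub": "remailer1-priv",
            "remailer2-pub": "remailer2-priv"}

# The sender builds the package inside out: the innermost layer is for the
# recipient; each outer layer carries only the next hop's address.
inner = (encrypt("meet at noon", "alice-pub"), "alice@example.com")
middle = (encrypt(inner, "remailer2-pub"), "remailer2@example.net")
package = encrypt(middle, "remailer1-pub")

# Remailer 1 strips its layer and learns only where to forward next.
layer1, next_hop = decrypt(package, "remailer1-priv", keypairs)
# Remailer 2 does the same.
layer2, final_hop = decrypt(layer1, "remailer2-priv", keypairs)
# Only Alice can open the innermost envelope.
plaintext = decrypt(layer2, "alice-priv", keypairs)

assert plaintext == "meet at noon"
```

Notice that no single party sees both the sender's identity and the message: each remailer learns only the previous and next hops.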
The
other important application is anonymous digital cash. Using
encryption, it is possible for a money issuer to create the digital
equivalent of currency, permitting one person, by sending a message
to another, to transfer claims against the issuer without either
person having to know the other’s identity and without the
issuer having to know the identity of either.
Consider
a world in which all of these technologies exist and are in general
use. In such a world, it will be possible to do business anonymously
but with reputation. Your cyberspace identity is defined by your
public key. Anyone who can read messages encrypted with that public
key must have the matching private key–which is to say, must be
you. The same is true for anyone who can sign messages with the
private key that matches that public key.
One
disturbing implication, which I have discussed elsewhere, is the
possibility of criminal firms operating anonymously but with brand
name reputation. A more attractive implication is that, in such a
world, the private property model of personal information becomes a
practical possibility. If, when I buy something from you, neither of
us knows the identity of the other, then neither of us can obtain the
relevant transactional information–the fact that a certain
other person bought or sold a particular good–without the
cooperation of the other. Hence transactional information starts as
the sole property of the person it is information about; that person
is then free to either suppress it, publish it, or sell it, whichever
best serves his interests.
A
second feature of this world relevant to the issues we have been
discussing comes from a different use of the technology of
encryption: technological protection of intellectual
property.[25]
It may someday become practical to distribute intellectual property in a
cryptographic container–as part of a computer program which
controls access to its contents, what IBM refers to as a “cryptolope.”
Use of the contents then requires a payment, perhaps in digital cash,
with the container regulating the form of use. Combining such
technologies with the use of intelligent software agents for
negotiating online contracts, we have the possibility of a world
where it will be practical to treat information as something close to
ordinary property. One could, for example, sell or give away
transactional data about oneself in a form that could only be used
for specified purposes, or only in association with specified
payments.
This
set of possibilities represents one part of a more general pattern.
The combination of online communications, encryption, and information
processing permits a much more detailed control over information
flows, at least information flows online or flows of information that
originates online, than was possible in the past. Thus, to take an
entirely different example, there is no technical barrier to prevent
the creation of an email program designed to permit someone who
wished to protect his attentional privacy from charging a price for
receiving email–and simply trashing, without human
intervention, any messages that came without an associated payment.
Nor is there any barrier to making such software distinguish among
senders, receiving messages for free if they are digitally signed by
people the owner of the software wants to receive messages
from.
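No such program exists in the text; the logic it describes, though, is simple enough to sketch. The field names, whitelist, and price below are all hypothetical stand-ins: a real implementation would verify actual digital signatures and digital cash rather than trusting message fields.

```python
# Sketch of the attentional-privacy gatekeeper: whitelisted senders
# (in practice, verified by digital signature) get through free;
# strangers must attach a payment; everything else is trashed unread.

ATTENTION_PRICE = 0.50          # dollars of digital cash per unsolicited message
whitelist = {"alice", "bob"}    # senders whose signatures the owner accepts

def gatekeeper(message):
    """Return 'deliver' or 'trash' without human intervention."""
    if message.get("signed_by") in whitelist:
        return "deliver"
    if message.get("payment", 0) >= ATTENTION_PRICE:
        return "deliver"
    return "trash"

assert gatekeeper({"signed_by": "alice", "body": "lunch?"}) == "deliver"
assert gatekeeper({"signed_by": "stranger", "payment": 1.00}) == "deliver"
assert gatekeeper({"signed_by": "stranger", "body": "buy now!"}) == "trash"
```

The payment branch is the "negative subscription price" discussed earlier: a stranger can still reach me, but only by compensating me for my attention.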
For
a less exotic example, consider the marketing of mailing lists. With
current transactional technology, the fact of a transaction is known
to both parties, so I cannot directly control the fact that I am a
subscriber–as I could if the transaction were taking place
online between anonymous parties. But a magazine may, and some do,
restrict its use of that information by contract, by promising not to
make its mailing list available to others or by giving the customer
the choice of having or not having his name and address sold to other
merchants. Such contractual arrangements will become easier as more
and more transactions shift to digital forms, where individualized
contract terms are considerably less expensive than with conventional
contracting technology.
One
option is to keep your name and address private, another is to permit
it to be freely sold. A third, which many might find more attractive
than either of the others, is for the magazine to sell access to its
customers but not their identity. This could be done easily enough by
having the magazine operate its own remailer. Information about each
customer would be provided to merchants interested in communicating
with him–information about what the customer had purchased, and
any other information the magazine had that was relevant to what the
customer would want to buy but could not be used to identify him. The
merchant then sends a message directed at that particular (identified
but unnamed) customer, which the magazine forwards to him.
This
is, in fact, how the selling of mailing lists is commonly conducted
at present, although for a different reason and with considerably
less information available to the sender. A magazine does not
normally sell its mailing list; it rents it for a fixed number of
uses. The purchaser gets, not the list, but the opportunity to have
its message sent to the names on the list, with the remailer replaced
by a third firm in the business of arranging such transactions.
Modern technology makes possible a more sophisticated version of such
a transaction. Ultimately we could have third party remailers holding
large amounts of information on unidentified individuals in ways that
permit their customers to search for individuals possessing
combinations of characteristics that make them attractive targets for
specific offers but do not permit any outsider to link information
with identity. Alternatively, the same result could be produced even
more securely–without having to trust the remailer–by
having individuals interact via anonymous online personas, making the
facts of transactions public, in order to attract desirable offers,
but keeping the identity of the realspace person corresponding to the
cyberspace persona private.[26]
One thus abandons privacy for purposes of voluntary transactions,
which can take the form of an offer to an unknown identity, but
retains it for protection against involuntary transactions. It is
hard to burgle the house of someone identified only by his public
key.
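The broker arrangement can be sketched as a data structure: profiles keyed only to cyberspace personas (public keys), searchable by characteristics, with realspace identities nowhere in the database. All keys and attributes below are invented for illustration.

```python
# Sketch of a profile broker holding transactional data keyed to
# anonymous personas. Merchants search by characteristics; no query
# can return a realspace identity, because none is stored.

profiles = {
    "pubkey-A1": {"subscribes": ["sailing"], "owns_home": True,  "has_mortgage": False},
    "pubkey-B2": {"subscribes": ["cooking"], "owns_home": True,  "has_mortgage": True},
    "pubkey-C3": {"subscribes": ["sailing"], "owns_home": False, "has_mortgage": False},
}

def find_targets(predicate):
    """Return the personas matching a merchant's criteria -- never identities."""
    return [key for key, data in profiles.items() if predicate(data)]

# A refinancer looks only for homeowners who still carry a mortgage,
# so the mortgage-free are never bothered with the offer.
targets = find_targets(lambda d: d["owns_home"] and d["has_mortgage"])
assert targets == ["pubkey-B2"]
```

An offer is then forwarded to the matching persona's public key; whether to respond, and whether ever to link the persona to a realspace person, remains the persona's choice.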
With
the exception of anonymous ecash, which we know how to do but which
nobody has so far done,[27]
and cryptolopes, which are still mostly in the development stage, all
of the fundamental technologies I have described already exist.
Public key encryption has been implemented in a variety of forms,
including a widely distributed free program.[28]
Anonymous remailers currently exist. Digital signatures are widely
used. But for the most part these technologies have been applied only
to text and so have affected only that part of private and commercial
life that is embodied in text messages.
As
computers become more powerful and the bandwidth of digital networks
increases, that situation will change. Using wide bandwidth networks
and virtual reality programs, it will be possible to create the
illusion of any transaction that involves only the senses of sight
and sound. Further in the future we may succeed in cracking the
dreaming problem, figuring out how our nervous system encodes the
information that reaches us as sensory experience. At that point the
limitation to two senses will disappear. We will be able to create,
by the transmission of information in digital form, the illusion of
any interaction that could take place in realspace.
As
more and more of our activity shifts into cyberspace, encryption and
related technologies make possible a degree of control over both the
creation and the transfer of information very much greater than we
now have. At that point the property justification for privacy,
rejected in the first part of this article, comes back into the
argument.
What
about the argument against privacy–that one reason I may wish
to conceal information about myself is in order to defraud my trading
partner, whether in the context of a mortgage or a marriage? That
becomes a less serious problem online, where the technology restricts
parties to voluntary transactions. You can, of course, conceal
information about yourself if that information is under your control.
But I can refuse to transact with you unless you agree to reveal the
relevant information–and if you decline, that fact itself
signals something about the information you are keeping
private.
Surveillance
Devices—Towards a Transparent Society
While
technological developments in online communication are moving us
towards a high level of privacy in cyberspace, developments in
surveillance technology may be moving realspace in precisely the
opposite direction, for two reasons. One is that surveillance devices
provide a relatively inexpensive and effective way of reducing crime,
one that is becoming increasingly popular. The other is that, as the
relevant electronic devices become smaller and cheaper, it becomes
more difficult to prevent surveillance. We may be moving towards a
world in which video cameras with the size and aerodynamic
characteristics of a mosquito will be widely available.
David
Brin, on whose book The Transparent Society this section is
largely based, argues that general privacy will no longer be an
option. We will be limited to two choices: a world in which those in
power know everything they want to know about everyone, and a world
in which everyone knows everything he wants to know about everyone.
Brin, not surprisingly, prefers the latter. He envisages a future
with video cameras everywhere–including every police station–all
generating images readily accessible to anyone interested, via some
future equivalent of the web.
If
he is correct, physical privacy in realspace will vanish. One
implication is that individuals will protect their informational
privacy in the same ways in which people in primitive societies
without physical privacy protect it, by adopting patterns of speech
and behavior that reveal as little as possible of what they actually
believe and intend. That will represent a substantial rent seeking
cost, to be added to the rent seeking cost of individuals processing
the public information in order to learn things about all those with
whom they expect to interact.
Two
qualifications are worth making to Brin’s picture. The first is
that he is assuming that the technology of surveillance is going to
outrun the technology of physical privacy, that the bugs will beat
the scanners, that the video mosquitos will not fall victim to
automated dragonflies. While he may be correct, it is hard to predict
in advance how the balance will turn out. We might end up in a world
where legal surveillance is cheap and easy, illegal surveillance
difficult, giving us the choice of how much privacy we will have.
One
obvious compromise is privacy in private spaces but not in public
spaces. That would represent a further development along the same
lines as computerized databases. What you do in public spaces, like
the public records produced by your life,[29]
has always been public in a legal sense. A video surveillance network
coupled to computers running pattern recognition software and sorting
and saving the resulting data would simply put that public
information in a form permitting other people to find and use
it.
A
second qualification is that although Brin’s technology
produces information, it may not always be reliable or verifiable
information. Suppose I am conducting an adulterous affair. My
suspicious wife can obtain video footage of me in flagrante delicto
with my paramour, via a suitably programmed video camera. But that
footage may be of very limited use in court, since it could have been
produced just as easily if I were not conducting an affair–using
video editing software instead of a camera. To the extent that modern
technology makes it easy to forge evidence, the evidence itself,
without a provable pedigree, becomes worthless. It may be easy to get
a mosquito camera into my bedroom but it is not so easy to get a
witness in as well, to prove that that camera really took that
film.[30]
Encryption
technology provides one approach to solving that problem.
Conceivably, a manufacturer could build a sealed, tamperproof camera,
complete with its own private key. The camera would then digitally
sign and timestamp[31]
its films as it produced them, making it possible at a later date to
prove that those particular films were created by that camera at that
time and have not since been edited.
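The camera's signing step can be sketched with the toy RSA numbers used earlier. Everything below the key setup is hypothetical detail: a real sealed camera would use a large key and a trusted timestamping scheme, but the structure, hash the frame with its timestamp, then sign the hash with the sealed private key, is the same.

```python
# Sketch of a tamper-evident camera: each frame is hashed together with
# its timestamp and the hash is signed with the private key sealed
# inside the camera. Toy key sizes; illustrative frame data.
import hashlib

p, q = 61, 53                       # toy primes; real keys are far larger
n = p * q
e = 17                              # public exponent, published by the maker
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, sealed inside the camera

def frame_digest(frame_bytes, timestamp):
    """Hash the frame together with its timestamp down to a number below n."""
    h = hashlib.sha256(frame_bytes + str(timestamp).encode()).digest()
    return int.from_bytes(h, "big") % n

def camera_sign(frame_bytes, timestamp):
    return pow(frame_digest(frame_bytes, timestamp), d, n)

def verify(frame_bytes, timestamp, signature):
    return pow(signature, e, n) == frame_digest(frame_bytes, timestamp)

frame, t = b"raw pixel data", 946684800   # hypothetical frame and Unix timestamp
sig = camera_sign(frame, t)
assert verify(frame, t, sig)   # untouched film, at the claimed time, checks out
# Editing the frame (or the timestamp) changes the hash, so the stored
# signature no longer verifies.
```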
One
difficulty with this approach is that a camera records, not facts
about the outside world, but facts about the pattern of light coming
in its lens. To defeat such a camera, I build a lens cap capable of
generating computer-synthesized holographic images. I then put the
lens cap on the camera and play whatever I want the camera to see; it
sees and signs it. As this example suggests, figuring out the
implications of technologies that do not yet exist, or exist only in
primitive forms, is a nontrivial problem.
Conclusion
In
the first part of this article I sketched out an economic analysis of
privacy. The conclusion was that increasing the ability of
individuals to control information about themselves had both
desirable and undesirable effects, making it unclear whether, on net,
we were better off with more or less privacy. One argument that I
considered and rejected was that increased privacy rights, at least
over information that originates with the person it is about, were
efficient because they made it possible to convert such information
into private property and then allocate it efficiently through market
transactions.
That
argument is harder to reject when applied to the information
technology of a few decades hence. It may become possible to create
transactional information in such a way that each piece of
information originates in the possession of a single person. And it
may be possible, given the much lower transaction costs of online
transactions, to then use private transactions to allocate
information to its highest valued users. If so, we end up with a
world in which information generated by cyberspace events, online
transactions, is characterized by both a high degree of control by
those it is information about and an efficient market for its
creation and allocation.
There
is no reason to expect the same to be true in realspace. If anything,
the combination of improved surveillance technology and improved
information processing technology is likely to make increasingly
large amounts of realspace information about everyone inexpensively
available to everyone else. We then have both the advantages of a low
privacy environment–individuals cannot hide unattractive facts
about their doings in realspace from those they transact with, making
many forms of fraud, commercial and social, impractical–and the
disadvantages. The cost of privacy becomes the cost of behaving in a
way that reveals as little as possible about oneself.
If
realspace is public and cyberspace private, the amount of privacy
individuals have depends critically on the importance of each, and on
the links between the two. It does me no good to protect my messages
with strong encryption if a mosquito camera is watching me type the
unencrypted original. In extreme versions of this scenario, versions
where both Brin’s vision of realspace and my vision of
cyberspace are realized in full, privacy depends critically on
mechanisms for inputting to a computer that cannot be observed from
the outside. The low tech version is touch typing under a very secure
hood; the high tech a link directly from mind to machine. If some
such method makes it possible to protect cyberspace privacy from
realspace prying, the balance between public and private then depends
on how much of what we do is done in cyberspace and how much in
realspace. It is going to be an interesting century.
References
Friedman,
David, "In Defense of Private Orderings: Comment on Julie Cohen's
'Copyright and the Jurisprudence of Self-Help.'" Berkeley Technology
Law Journal (1999,1).
........................
“Why Is Law?: An Economist's View of the Elephant,” forthcoming
from Princeton University Press c. 12/99. (1999,2)
........................
Hidden Order: The Economics of Everyday Life. Harper-Collins,
1996. (1996,1)
........................
“A World of Strong Privacy: Promises and Perils of Encryption,”
Social Philosophy and Policy, 1996. (1996,2)
........................
"Standards As Intellectual Property: An Economic Approach,"
University of Dayton Law Review, Vol. 19, No. 3, (Spring 1994) pp.
1109-1129. (1994,1)
........................
“A Positive Account of Property Rights,” Social
Philosophy and Policy 11 No. 2 (Summer 1994) pp. 1-16.
(1994,2)
........................
“Less Law than Meets the Eye,” a review of Order Without
Law, by Robert Ellickson, The Michigan Law Review vol. 90 no. 6, (May
1992) pp. 1444-1452.
........................
“Some Economics of Trade Secret Law,” with William Landes
and Richard Posner, Journal of Economic Perspectives, vol. 5, Number
1 (Winter 1991) pp. 61-72.
Lindgren,
James, “Blackmail: On Waste, Morals, and Ronald Coase,” 36 UCLA
L. REV. 597 (1989);
........................
“Kept in the Dark: Owens's View of Blackmail,” 21 CONN.
L. REV. 749 (1989);
........................
“Secret Rights: A Comment on Campbell's Theory of Blackmail,”
21 CONN. L. REV. 407 (1989);
........................
“In Defense of Keeping Blackmail a Crime: Responding to Block
and Gordon,” 20 LOY. L.A. L. REV. 35 (1986);
........................
“More Blackmail Ink: A Critique of Blackmail, Inc., Epstein's
Theory of Blackmail,” 16 CONN. L. REV. 909 (1984);
........................
“Unraveling the Paradox of Blackmail,” 84 COLUM. L. REV.
670.
Posner,
Richard, “The Right of Privacy,” 12 Georgia Law Review
393 (1978).
........................
“An Economic Theory of Privacy,” Regulation, May/June
1978, at 19.
........................
The Economics of Justice, Chs 9-10 (1981)
........................
Overcoming Law, ch 25 (1995)
........................
“Blackmail,
Privacy, and Freedom of Contract,” 141 University of
Pennsylvania Law Review 1817 (1993).
Murphy,
Richard S., “Property Rights in Personal Information: An
Economic Defense of Privacy,” 84 Geo. L.J. 2381 (1996).
[1]
See the references in Posner (1978) to the anthropological literature
on the lack of privacy in other societies.
[2]
One example occurs in the context of a takeover bid. In order for the
market for corporate control to discipline corporate managers, it
must be in the interest of someone to identify badly managed
corporations and take them over. That depends on a takeover bid
remaining secret long enough for the person responsible to accumulate
substantial ownership at the pre-takeover price. In a very public
world, that is hard to do. Currently it is also hard to do because of
legal rules deliberately designed to prevent it.
[3]
“Rights,” as I use the term here, are defined not by
legal rules but by control. If I have a legal right not to have you
tap my phone but it is impractical to enforce that right–the
situation at present for those using cordless phones without
encryption–then in my sense I have no right to that particular
form of privacy. On the other hand, I have substantial rights to
privacy with regard to my own thoughts, even though it is perfectly
legal for other people to use the available technologies–listening
to my voice and watching my facial expressions–to try to figure
out what I am thinking. I have those rights because those
technologies are not adequate to read my mind. For a more general
discussion of rights from a related perspective, see Friedman
(1994,2).
[4]
An exception is the case where the relevant information is negative–the
fact that I have not declared bankruptcy, say. If individuals have
control over such information, then the absence of evidence that I
have declared bankruptcy provides no evidence that I have not. One
solution, if the control is based on legal rules, is to permit
the information to be released with the permission of the person it is
information about.
[5]
Many of the points made in this section of the article can be found,
in somewhat different form, in Posner (1978). Posner likewise finds
the general desirability of privacy dubious.
[6]
Discussions can be found in Friedman (1996) chapter 11 and in
Friedman (1999,2) Chapter 8; the latter is forthcoming c. 12/99 and
currently webbed at
http://www.best.com/~ddfr/Academic/Course_Pages/L_and_E_LS_98/Why_Is_Law/Why_Is_Law_Contents.html.
[7]
One exception is the situation in which my gains from the bargain
provide the incentive for me to generate valuable information. There
is little point to spending time and money predicting a rise in wheat
prices if everything you discover is revealed to potential sellers
before you have a chance to buy from them. This is a somewhat odd
case because, as Hirshleifer pointed out in a classic article
[find the reference], while successful speculation is both
socially useful and profitable, there is no particular connection
between the two facts. Hence the opportunity for speculative gain may
produce either too much or too little incentive to obtain the
necessary information.
[8]
This argument is proposed as a possible justification for trade
secret law in Friedman, Landes and Posner (1991).
[9]
One reason that the assumption may be correct is the difficulty of
propertizing information. Suppose that keeping secret some particular
fact about me benefits me at the expense of people I deal with; for
simplicity assume that the benefit is a simple transfer, with no net
gain or loss. If you discover the fact, you have no incentive to keep
it hidden, so you tell other people. You end up getting only a small
fraction of the benefit, while I bear all of the cost, so I am
willing to spend much more to conceal the fact than you to discover
it. This would not be the case if you could sell the information to
other people who deal with me—as credit agencies, of course,
do. But in many contexts such sales are impractical, due to the
problems of transacting over information briefly discussed
below.
[10]
A demand is elastic (aka more than unit elastic) if a one percent
increase in price results in more than a one percent decrease in
quantity demanded. A demand is inelastic (aka less than unit elastic)
if a one percent increase in price results in less than a one percent
decrease in quantity demanded.
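The definition can be made concrete with a short numerical sketch (the numbers here are invented purely for illustration and do not appear in the article):

```python
def arc_elasticity(p0, p1, q0, q1):
    # Percent change in quantity demanded divided by percent change in price.
    pct_quantity = (q1 - q0) / q0
    pct_price = (p1 - p0) / p0
    return pct_quantity / pct_price

# A price rise from 100 to 101 (one percent) that cuts quantity demanded
# from 1000 to 980 (two percent) gives an elasticity of -2: since the
# magnitude exceeds 1, this demand is elastic.
e = arc_elasticity(100, 101, 1000, 980)
assert abs(e) > 1
```

With the same price change but quantity falling only from 1000 to 995, the magnitude would be 0.5, an inelastic demand.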
[11]
There may be very costly ways of doing so. During some of the
litigation involving the Church of Scientology, information which the
CoS wished to keep private became part of the court record. The CoS
responded by having members continually check out the relevant
records, thus keeping anyone else from getting access to them. And I
might preserve my privacy in a world where court records were public
by changing my name.
[12]
For a discussion of why it makes sense to treat some things as
property and some as commons, see Friedman (1994,1) and (1999,2,
chapter 10).
[13]
See Friedman (1999, 2, chapter 4).
[14]
A longer version of this argument can be found in Friedman
(1992).
[15]
These issues are explored in Murphy (1996), Posner (1978,1), (1978,2),
(1981), (1993), (1995), Lindgren (1989,1), (1989,2), (1986), (1984),
(??), and by Ronald Coase in ... .
[16]
Posner has argued that laws against blackmail are desirable in
circumstances where private law enforcement is for some reason
inefficient.
[17]
Or, of course, by political activity–lobbying Congress or
making contributions to the police benevolent fund. For most
individuals such tactics are of very limited usefulness.
[18]
Privacy might also be on net efficient if one of the two functions
met the required condition and produced gains that more than
outweighed the loss from the function that did not.
[19]
Throughout this discussion I am assuming that the purpose of messages
is to propose voluntary transactions. I am thus ignoring cases such
as harassment, where the benefit to the sender does not depend on the
buyer deciding that the message is of value to him, and
email-bombing, flooding someone’s mailbox in order to prevent him from
using it, where the purpose of the message is to impose a cost on the
recipient.
[20]
This result is not quite as rigorous as it sounds, since the cost of
evaluating an offer is already sunk at the point when you decide
whether to accept it. Consider an offer that costs fifteen cents to
evaluate and proposes a transaction that would produce a gain of ten
cents for the receiver of the offer. The receiver, having already
paid the examination cost, accepts the offer and so produces a gain
for the sender sufficient to more than cover the cost of sending. I
will ignore such complications, since I doubt they are of much
real-world importance.
[21]
In modern day Israel, judging by my observations, asking someone his
salary is considered perfectly normal, whereas in the U.S. it is a
violation of norms of privacy. I have no good explanation for the
difference.
[22]
Although it might under some circumstances produce net costs
associated with attentional privacy.
[23]
For a much longer discussion see Friedman (1996,2).
[24]
The process used for digital signatures in the real world is somewhat
more elaborate than this, but the differences are not important for
the purposes of this article. A digital signature is produced by
using a hash function to generate a message digest—a string of
numbers much shorter than the message it is derived from—and
then encrypting the message digest with the sender’s private
key. The process is much faster than encrypting the entire message
and almost as secure.
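The hash-then-sign process the note describes can be sketched in a few lines. The RSA key below (n = 3233) is a toy chosen so the arithmetic is visible; real keys are thousands of bits long, and real schemes add padding and other safeguards omitted here:

```python
import hashlib

# Toy RSA key pair (illustrative only; trivially breakable at this size).
p, q = 61, 53
n = p * q       # 3233, the public modulus
e = 17          # public exponent
d = 413         # private exponent: (17 * 413) % lcm(60, 52) == 1

def digest(message: bytes) -> int:
    # The "message digest": a short number derived from the message by a
    # hash function, reduced mod n so the toy key can sign it.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Encrypt the digest with the sender's private key.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Decrypt with the public key and compare against a fresh digest;
    # a modified message will almost certainly fail this check.
    return pow(signature, e, n) == digest(message)

msg = b"I agree to the contract."
sig = sign(msg)
assert verify(msg, sig)
```

Because only the short digest is encrypted, signing is much faster than encrypting the whole message, which is the point the note makes.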
[25]
See Friedman (1999,1).
[26]
For an early and still interesting fictional exposition of the idea
of separating realspace and cyberspace identities, see Vernor Vinge, “True
Names,” included in (among other books) True Names and Other
Dangers. A more recent fictional effort, and one picturing
something much closer to what we are actually likely to see in a few
decades, is the forthcoming Earthweb by Marc
Stiegler.
[27]
There have been experiments with ecash, most notably by the Mark
Twain Bank of St. Louis working with David Chaum, the cryptographer
responsible for many of the fundamental ideas in the field. The notes
were semi-anonymous, meaning that the issuing bank could identify one
party to the transaction if it had the cooperation of the
other.
[29]
With a few exceptions created by the law.
[30]
A human solution to the problem of forged data was proposed by Robert
Heinlein in Stranger in a Strange Land–a body of
specially trained “fair witnesses,” whose job it was to
accurately observe and honestly report.
[31]
One way of timestamping a digital document is to calculate a hash of
that document, a much shorter string of digits derived from it in a
fashion difficult to reverse, and post the hash in some publicly
observable place. The document is still secret, since it cannot be
derived from the hash. But the existence of the hash at a given date
can later be used to prove that the document from which it was
derived–in our case, a digital video–existed at that
time. The fact that the hash function cannot easily be reversed
means that you cannot post a random hash, then later create a
suitable document which that particular hash is a hash of.
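The procedure this note describes can be sketched directly; the document bytes below are a stand-in for the digital video in the example:

```python
import hashlib

# The secret document to be timestamped (stand-in contents).
document = b"digital video contents ..."

# Step 1: compute a short digest that cannot feasibly be reversed.
commitment = hashlib.sha256(document).hexdigest()

# Step 2: post `commitment` somewhere publicly observable at the relevant
# date (a newspaper ad, a public archive). The document itself stays
# secret, since it cannot be derived from the hash.

# Step 3: later, reveal the document; anyone can recompute the hash and
# check that it matches the publicly posted value of the earlier date.
assert hashlib.sha256(document).hexdigest() == commitment
```

The one-way property does the work: since no one can construct a document to fit an already-posted hash, a matching hash proves the document existed when the hash was posted.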