Cyber Civil Rights Conference, University of Denver, Denver
University Law Review Online, Volume 87, 2010
Table of Contents
- Cyber Civil Rights: Looking Forward, by Danielle Keats Citron
- The Banality of Cyber Discrimination, or, the Eternal Recurrence of September, by Mary Anne Franks
- Regulating Cyberharassment: Some Thoughts on Sexual Harassment 2.0, by Helen Norton
- Cyber Sexual Harassment: Comments on Citron and Franks, by Nancy Ehrenreich
- The Unmasking Option, by James Grimmelmann
- Accountability for Online Hate Speech: What Are the Lessons From “Unmasking” Laws?, by Christopher Wolf
- Perspectives on Privacy and Online Harassment: A Comment on Lipton, Grimmelmann, and Wolf, by John T. Soma
- Breaking Felten’s Third Law: How Not to Fix the Internet, by Paul Ohm
- Who to Sue?: A Brief Comment on the Cyber Civil Rights Agenda, by Viva R. Moffat
- Unregulating Online Harassment, by Eric Goldman
CYBER CIVIL RIGHTS:
LOOKING FORWARD
DANIELLE KEATS CITRON†
The Cyber Civil Rights
conference raised so many important questions
about the practical and normative value of seeing online
harassment
as a discrimination problem. In these remarks, I highlight
and address
two important issues that must be tackled before moving
forward with a
cyber civil rights agenda. [1] The first concerns the
practical—whether we,
in fact, have useful antidiscrimination tools at the state
and federal level
and, if not, how we might conceive of new ones. The second
involves the
normative—whether we should invoke technological solutions,
such as
traceable anonymity, as part of a cyber civil rights
agenda given their
potential risks.
As Helen Norton
underscored at the conference, current federal and
state antidiscrimination law can move the cyber civil rights
agenda forward,
but only so far. On the criminal side, the Civil Rights Act
of 1968
does indeed punish “force or threat[s] of force” designed to
intimidate or
interfere with a person’s private employment due to that
person’s race,
religion, or national origin. [2] Courts have sustained
convictions of defendants
who made threats over employees’ email and voicemail. [3] A
court
upheld the prosecution of a defendant who left messages on
an Arab
American’s voice mail that threatened “the only good Arab is
a dead
Arab.” Similarly, a jury convicted a defendant for sending
an email under
the name “Asian Hater” to 60 Asian students that read: “I
personally
will make it my life career [sic] to find and kill everyone
of you personally.”
[4] Crucially, however,
federal criminal law does not extend to threats
made because of a victim’s gender or sexual orientation.
This must
change, particularly because victims of online threats are
predominantly
chosen due to their gender or sexual orientation. [5] So how
might legislators do that? Current law could be amended to criminalize
online threats
made because of a victim’s gender or sexual orientation. The
Violence
Against Women Act (VAWA) might be a profitable place to
begin this
effort. Although the Supreme Court struck down VAWA’s
regulation of
gender-motivated violence on the grounds that such criminal
conduct did
not substantially affect interstate commerce so as to warrant
congressional
action under the Commerce Clause, Congress could amend VAWA
pursuant
to its power to regulate an instrumentality of interstate
commerce—
the Internet—to punish anonymous posters who threaten
individuals
because of their gender or sexual orientation. Such a
legislative
move would surely find support from the Department of
Justice, which
encourages federal prosecutors to seek hate crime penalty
enhancements
for defendants who subject victims to cyber harassment
because of their
race, color, religion, national origin, or sexual
orientation. [6]
This leaves us to
examine antidiscrimination actions for civil remedies.
Much like the criminal side, the civil law side permits
private lawsuits
for discriminatory actions taken because of a victim’s race.
For instance,
§ 1981 of Title 42 of the U.S. Code guarantees members of
racial
minorities “the same right in every State . . . to make and
enforce contracts
. . . as is enjoyed by white citizens.” Section 1981 permits
lawsuits
against private individuals without the need for state
action because
Congress enacted the statute under its power to enforce the
Thirteenth
Amendment. [7] Courts have allowed plaintiffs to bring § 1981
claims
against masked mobs that used tactics of intimidation to
prevent members
of racial minorities from “making a living” in their chosen
field. [8]
Here, again, individuals have limited means to sue
defendants who
seek to prevent them from making a living online due to
their gender or
sexual orientation. In Cyber Civil Rights, I argued that
women might
bring claims against attackers under Title VII of the Civil
Rights Act of
1964 because just after the statute’s passage, courts upheld
discrimination
claims where masked defendants engaged in intimidation
tactics to
prevent plaintiffs from pursuing their chosen careers. Yet,
as I acknowledged
there and as Norton emphasized at the conference, Title VII
decisions
now overwhelmingly focus on employer-employee relationships,
rendering my suggestion one that courts will not lightly
adopt. One
way to address this serious problem is to urge Congress to
amend Title VII to permit individuals to sue defendants who
interfere
with their online work because of their gender or
sexual orientation.
Although doing so would, in part, honor Title VII’s broader
goal of
eliminating discrimination in women’s employment
opportunities, pressing
for Congressional change is a daunting task. Indeed, one
might say it
would be Sisyphean. [9] Advocates might pursue change in
state legislatures
even though their contributions would naturally be more
limited. This is
something that my future work will explore in earnest.
Now for the
unintended, and potentially destructive, consequences
of technological solutions to implement a cyber civil rights
agenda. In
Cyber Civil Rights, I suggested that an orderly articulation
of the standard
of care for ISPs and website operators should include a
requirement
that website operators configure their sites to collect and
retain visitors’
IP addresses. Such traceable anonymity would allow posters
to comment
anonymously to the outside world but permit their identity
to be traced in
the event that they engage in unlawful behavior. [10]
As Paul Ohm and Wendy
Seltzer forcefully argued, we should be
wary of technical solutions, like traceable anonymity, given
the potential
for misuse. Ohm argued that once we mandate IP retention to
advance a
cyber civil rights agenda, those IP addresses might become
available to
companies seeking to enforce copyrights against
students and
accessible to countries seeking the identity of dissidents.
In Ohm’s
words, demanding traceable anonymity is like using Napalm
when a
surgical strike is available. Seltzer developed the problem
with technological
optimism by pointing to anti-porn filtering software, which
more
often than not blocked innocent sites and thus hampered
expression on
the Internet, and anti-circumvention requirements in
copyright, which
impaired innovation without stopping the robust pirated DVD
market. Ohm’s
and Seltzer’s arguments are important. Channeling law
through technology has an important role but perhaps not in
this way. I
supported traceable anonymity as a means to protect law’s
deterrent
power. Website operators are so often immune from liability
due to §
230 of the Communications Decency Act, [11] leaving victims with only the
perpetrators to pursue for legal remedies and prosecutions. In other
words, a cyber
civil rights agenda may have limited coercive and expressive
power unless
perpetrators see that the costs of their conduct exceed the
benefits.
There are, of course, other ways to address this problem
aside from
traceable anonymity. One possibility is a variation of a
notice and takedown
regime. Law could require website operators to retain a
poster’s IP
address only after receiving notice of legitimate claims of
illegal or tortious
activity. Of course, this regime could be manipulated by individuals
who seek to unmask a poster based on frivolous claims. It would
raise other negative externalities as well, such as concerns about
chilling legitimate speech. This
is just one of many possible ways to address the inability
to identify cyber
harassers. Nonetheless, thinking of alternatives to
traceable anonymity
seems an indispensable part of the future of a cyber civil
rights
agenda.
_______________
Notes:
† Professor of
Law, University of Maryland School of Law. I am ever
grateful to Professor
Viva Moffat, Mike Nelson, and Jake Spratt for conceiving
and orchestrating the Cyber Civil Rights
conference. Their insights and those of our panelists
will leave an indelible mark on my future work
on the subject.
1. There are
naturally many more weaknesses, though I concentrate on
these two, which
struck an important chord for the participants at the
conference.
2. 18 U.S.C. § 245(b)(2)(C) (2006).
3. E.g., United
States v. Syring, 522 F. Supp. 2d 125 (D.D.C. 2007).
4. PARTNERS
AGAINST HATE, INVESTIGATING HATE CRIMES ON THE INTERNET:
TECHNICAL
ASSISTANCE BRIEF 5 (2003).
5. See Danielle
Keats Citron, Law’s Expressive Value in Combating Cyber
Gender Harassment,
108 MICH. L. REV. 378–79 (2009) (explaining that over
60% of online harassment victims are
women and that when perpetrators target men for online
harassment, it is often because the victims
are believed to be gay).
6. PARTNERS
AGAINST HATE, supra note 4, at 5.
7. To that end,
courts have interpreted religious groups, such as Jews,
as a race protected by
the Thirteenth Amendment. See, e.g., United States v.
Nelson, 277 F.3d 164 (2d Cir. 2002).
8. Vietnamese
Fishermen’s Ass’n v. Knights of the Ku Klux Klan, 518 F.
Supp. 993 (S.D.
Tex. 1981).
9. I take this
notion of a Sisyphean struggle from Deborah Rhode, who
so eloquently captured
this point when she described women’s struggles to
combat sexual abuses in the workplace.
Deborah L. Rhode, Sexual Harassment, 65 S. CAL. L. REV.
1459, 1460 (1992). She explained that
women’s struggles have “elements of a feminist myth of
Sisyphus” because many women are still
pushing the same rock up the hill with regard to
occupational segregation, stratification, and
subordination. Id. The enforcement of the law of sexual
harassment “reflects a commitment more to
equality in form than equality in fact.” Id.
10. I also argued
that courts should release the names of posters to
plaintiffs only if plaintiffs
could provide proof that their claims would survive a
motion for summary judgment. This would
assure posters of the safety of their anonymity in the
face of baseless allegations.
11. Even under the
broad interpretation of § 230, website operators can be
sued if they explicitly
induce third parties to express illegal preferences. In
2008, the Ninth Circuit addressed whether
Roommates.com enjoyed § 230 immunity for asking posters
questions about sex, sexual orientation,
and whether the person has children as part of the sign-up
process. Plaintiffs argued that those questions,
if asked offline, could violate Fair Housing laws. The
Ninth Circuit found that defendant
lacked immunity under the CDA because it created the
questions and choice of answers and thus was the
“information content provider” as to the questions and
in turn the answers that it required. The court
reasoned that the CDA does not grant immunity for
inducing third parties to express illegal prefer-
ences. Eric Goldman expertly addressed the implications
of the Roommates.com case at the conference.
***
THE BANALITY OF CYBER
DISCRIMINATION, OR, THE
ETERNAL RECURRENCE OF SEPTEMBER
MARY ANNE FRANKS†
What, if some
day or night a demon were to steal after you into your
loneliest loneliness and say to you: “This life as you
now live it and
have lived it, you will have to live once more and
innumerable times
more” . . . Would you not throw yourself down and gnash
your teeth
and curse the demon who spoke thus?
– Friedrich
Nietzsche, The Joyful Wisdom
[E]very year in
September, a large number of new university students
. . . acquired access to Usenet, and took some time to
acclimate
themselves to the network's standards of conduct and
“netiquette.”
After a month or so, these new users would theoretically
learn to comport themselves according to its
conventions. September
thus heralded the peak influx of disruptive newcomers to
the network.
In 1993, America
Online began offering Usenet access to its tens
of thousands, and later millions, of users. . . . AOL
made little effort
to educate its users about Usenet customs . . . .
Whereas the regular
September freshman influx would soon settle down, the
sheer number
of new users now threatened to overwhelm the existing
Usenet
culture’s capacity to inculcate its social norms.
Since that time,
the dramatic rise in the popularity of the Internet
has brought a constant stream of new users. Thus, from
the point of
view of the pre-1993 Usenet user, the regular
“September” influx of
new users never ended. The term was first used by Dave
Fischer in a
January 26, 1994, post to alt.folklore.computers: “It’s
moot now.
September 1993 will go down in net.history as the
September that
never ended.”
– From the
Wikipedia entry for Eternal September
INTRODUCTION
Much virtual ink has
been spilled on the ever-increasing phenomenon
of cyber harassment by a wide range of individuals writing
from a
wide range of perspectives. The voices weighing in on the
heated discussion
include scholars (legal and otherwise), lawyers, bloggers,
techies,
Internet users whose offline identities are largely unknown,
and many
who fit into more than one of these categories. The varying
opinions on
cyber behavior often revolve around a conception of
“seriousness,” and
seem to fall roughly into one of the following categories:
1. Cyber
harassment is a serious problem that should be legally
regulated
through civil rights, tort, and criminal law;
2. Cyber
harassment is a serious problem that can be adequately
dealt
with through tort and criminal law;
3. Cyber
harassment is a serious problem but legal regulation is
not
the right way to address it;
4. Cyber
harassment is not very serious and accordingly should
not
be legally regulated; and
5. “STFU, b$tches!”
In other words, not only is cyber harassment not
serious, even using the term “cyber harassment” marks
you as a
whiny, oversensitive PC’er/feminazi/old dude who doesn’t
“get it”
(where the referent for “it” ranges from “the
free-wheeling, often
mindlessly derogatory way that digital natives interact
with each
other” to “the First Amendment”); accordingly, not only
should cyber
harassment not be legally regulated, it should be
legally protected. [1]
For simplicity’s
sake, let us call those in categories 1, 2, and 3
“condemners,”
and those in categories 4 and 5 “defenders.” What condemners
seem to mean by calling cyber harassment serious is that it
creates some
kind of harm, whether criminal, tortious, discriminatory, or
some combination
of the three. What defenders seem to mean by arguing that
cyber
harassment is not serious is that it is an expected,
predictable, and even
valuable aspect of Internet interaction, the virtual
equivalent of frat boy
antics and bathroom wall scribbles.
While I have many
things to say on the topic of cyber harassment, [2] I
want to frame my remarks here around a very specific claim:
the defenders
are largely correct in their description of cyber harassment
as predictable,
commonplace, and juvenile—in a word, banal—and that this
very
banality is what makes it both so effective and so harmful,
especially as a
form of discrimination. There is little that is new or
radical about the
content of cyber harassment. The racist, sexist, and
homophobic epithets,
the adolescent exultation in mindless profanity, the cheap
camaraderie of
sexual objectification and violence are all familiar tropes
from a familiar
setting, namely, High School. What is different is the form
of online harassment,
namely, the way that cyberspace facilitates the
amplification,
aggregation, and permanence of harm. The first part of this
piece will
address defenders in categories 4 and 5; the second part of
the piece will
address the divide between category 1 and category 2
condemners on the
necessity of a civil rights approach to cyber harassment,
leaving aside
category 3 condemners for another article. [3]
I. THE ETERNAL JUVENILE
One may well ask why the drearily familiar sludge of juvenile
hostility
has informed so much of cyberspace’s conventions and norms.
One
plausible reason is that the social norms of cyberspace are
overwhelmingly
determined by the young. The “norm entrepreneurs” of
cyberspace,
if you will, are twenty-somethings and teenagers. Sergey
Brin and Larry
Page were still in their twenties when they created
Google; none of
the founders of YouTube had hit 30 when they developed the
video-sharing
website; teenagers and college students set the tone of many
online
environments, having more time to spend in them and being
quicker
to access and adopt new technologies. The Internet as we
know it is in
large part driven and populated by individuals whose norms
and customs
are closer to those of high school than to adulthood. This
is likely part of
the story of the Internet’s creative and innovative
potential; it is also part
of the story of the Internet’s more depressing side.
As many know from
personal experience, school harassment can be
a vicious phenomenon. It can range from the trivial to the
traumatizing,
from teasing a shy kid about his haircut to physically
assaulting a student
rumored to be gay. School harassment can cause pain,
embarrassment,
and self-consciousness, and its effects sometimes follow its
victims into
adulthood. Importantly, however, school harassment used to
be bounded
in three significant ways: by audience, scope, and time.
Those who witnessed
the harassment were fellow students, perhaps some teachers
and
school administration officials—traumatic indeed for the
teenager who
feels her peers make up the entire world of relevant
individuals, but objectively
a very small part of the population. The scope of the
harassment
was also bounded, focusing mostly on appearance, mannerisms,
and alleged
activities of the targets, but not usually extending to
information
not readily available to the school community. Perhaps most
importantly,
school harassment used to have a temporal end—for the most
part, rumors
and taunts began to fade minutes after graduation, to be
eventually
forgotten or at least recorded only in the minds of
individuals.
With the increasing accessibility and influence of the
Internet, however,
what was once an often negative but largely containable
phenomenon
has been dramatically transformed. Harassment in cyberspace
is not
bounded by any of the three limitations of the pre-Internet
High School.
The audience for harassment, as targets, participants, and
witnesses, is
virtually unlimited. Any person of any age can be singled
out for harassment,
any person can join in the harassment, and the entire online
world is now a potential witness to that harassment—one’s
peers, to be
sure, but also one’s family, employers, children,
co-workers. The scope
is also no longer limited, as technology makes it simple to
locate and
broadcast a wealth of information about a target: home
addresses, telephone
numbers, social security numbers, sexual activities, medical
information,
vacation pictures, test scores. And, perhaps most
importantly,
cyber harassment is not limited by time, as harassing posts,
comments,
pictures, and video are often impossible to erase, so that a
target may
never be able to leave them behind.
This, then, is the
response to the defenders of categories 4 and 5:
while the substance of cyber harassment might seem familiar,
harassment
writ large in cyberspace—expanded so drastically in target,
scope, and
reach—has far greater impact than any schoolyard attack.
II. WHEREFORE
DISCRIMINATION?
Now let us turn to the
condemners. There are many who do not
need to be convinced of the seriousness of cyber harassment,
but who
nonetheless disagree about the best way to approach it. One
of the biggest
divisions among condemners, it seems, is whether tort and
criminal
law are sufficient to address the problem of cyber
harassment, or whether
it is necessary to develop what Danielle Citron calls a
“cyber civil rights”
approach. [4] Much of the discussion at this symposium
revolved around
this divide.
First, it should be made clear that those who advocate a
civil rights
approach do not do so to the exclusion of tort or criminal
approaches. I
am not aware of any advocate of a cyber civil rights
approach who believes
that tort or criminal law should not be used wherever
possible.
Though my recent work does not focus on this, I am fully in
support of
attempts to make tort claims regarding cyber harassment more
viable and
efforts to strengthen the criminal prosecution of online
stalking and harassment.
The driving idea behind a cyber civil rights agenda, to my
understanding,
is simply that online harassment can be a form of
discrimination.
Can be—the argument is
not, of course, that all cyber harassment
constitutes discrimination. Some online harassment is best
characterized
as bullying; some as defamation; some as invasions of
privacy. It is only
when the harassment in question is directed at a
historically marginalized
group in a way that reinforces their marginalization and
undermines their
equal participation in the benefits of society that
harassment should be
considered discrimination. There are, no doubt, some
difficult questions
about which groups should be considered marginalized, but
settled discrimination
law has recognized, at the very least, that racial
minorities,
religious minorities, the disabled, and women are among
these groups.
This does not mean that every time a woman or an
African-American is
harassed online it is a case of discrimination. But when,
for example, a
woman is attacked by name with unwelcome, graphically sexual
or violent
commentary that invokes and celebrates derogatory and
objectifying
sexist stereotypes, and results in significant interference
with her ability
to work, study, or take advantage of the resources and
opportunities
available to men, then that is discrimination and should be
treated as
such. Why is
it important to recognize that the harassment of
marginalized
groups on the basis of their identity as members of these
groups is
not simply a tort, or in some cases, a crime? Because both
tort and criminal
law are primarily aimed at individuals who are harmed as
individuals,
not as members of a group. When a woman is attacked on the
basis
of being a woman, it sends a message to women as a group:
you do not
belong here, you do not have the right to be here, you will
not be regarded
on the basis of your talents and abilities but rather on
your sexuality,
your appearance, your compliance with traditional gender
roles. To
interrupt the all-too-familiar process of unjust social
segregation—
whether it be along gender, racial, or religious lines—our
legal response
must express the condemnation of discrimination above and
beyond any
individual harm.
III. THE END OF SEPTEMBER
It
is certainly easier not to, of course. If cyber harassment
can be
considered discrimination, that means making tough calls
about what is
merely offensive and what is genuinely discriminatory. It
means running
the risk that in regulating discriminatory speech we chill
valuable speech.
It means, at least in the regime I suggest—holding website
owners liable
for discrimination that occurs on their sites— imposing
costs on third
parties, which in turn might mean that fewer people decide
to create
websites and that those who do will over-regulate. It likely
means legislative
reform. It means a possibly long period of uncertainty about
reasonable
responses and clumsily worded anti-discrimination policies.
It
means that we might end up with a “sanitized” cyberspace,
whatever that
might mean. In other words, doing this will mean changing
how things
are and how they’ve always been done here.
All of this is true.
And we have seen it all before. The same obstacles
and objections were pointed out in the fight to have sexual
harassment
recognized as sex discrimination. Courts struggled to define
“severe
or pervasive” harassment and “unwelcomeness.” There were
warnings
about overdeterrence. There were concerns that sexual
harassment
policies and procedures would place an undue burden on
employers. The
EEOC had to develop and promulgate guidelines on sexual
harassment
and preemptive policies. Employers and schools are still
struggling to
develop best practices for dealing with sexual harassment.
Some bemoan
the rise of the “sanitized” workplace, whatever that means.
But after sexual
harassment was recognized as sex discrimination,
the universe didn’t implode. Employers weren’t bankrupted by
the implementation
of sexual harassment policies. Free speech has not, by most
accounts, become an endangered species in workplaces or
schools. What
has happened, slowly, is that social norms have started to
change. The
blatantly unwelcome sexual advances considered common
practice in the
workplace twenty years ago are not acceptable today. Whereas
would-be
harassers in the past could simply continue the cycle of
behavior that
they saw all around them, harassers now have to consider the
possible
repercussions of violating institutional defaults set to
non-discrimination.
In other words, recognizing sexual harassment as sex
discrimination has
changed how things are and how they’ve always been done
here. Isn’t
much modern discrimination, after all, a form of protracted
adolescence, a refusal to change the ideas to which one is
accustomed, an
insistence on one’s arbitrary schoolyard privilege, an
arrogant dismissal
of the rights of those who seem different? Perhaps to be
ignored, dismissed,
or merely disciplined if kept well within the confines of a
small
space with a limited audience and an expiration date; but
when discrimination’s
dull repetitive damage is given an eternal forum, a
loudspeaker,
a stage, to join forces with the realities of persistent
inequality—then it
must be interrupted, loudly, eternally, forcefully. And
then, perhaps, we
might reach October.
_______________
Notes:
† Bigelow Teaching
Fellow and Lecturer, University of Chicago Law School.
J.D., Harvard
Law School; D.Phil, M.Phil, Oxford University; B.A.
Philosophy and English, Loyola University
New Orleans.
1. It is worth
pointing out that legal scholars of all categories,
including 1 and 2, are to my
knowledge quite well aware of the existence of the First
Amendment and the issues that must be
confronted when dealing with regulations of speech,
despite the impression created by some members
of categories 3, 4, and especially 5.
2. See Mary Anne
Franks, Unwilling Avatars: Idealism and Discrimination
in Cyberspace
19 COLUM. J. GENDER & L. (forthcoming Feb. 2010),
available at
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1374533;
Mary Anne Franks, Sexual Harassment
2.0 (working paper), available at
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1492433.
3. The reasons
for condemning cyber harassment yet refraining from
legal intervention
can be considerably complex. See Franks, Unwilling
Avatars, supra note 2.
4. Danielle Keats
Citron, Cyber Civil Rights, 89 B.U. L. REV. 61 (2009).
***
REGULATING
CYBERHARASSMENT: SOME THOUGHTS ON
SEXUAL HARASSMENT 2.0
HELEN NORTON†
INTRODUCTION
Professor Franks’
Sexual Harassment 2.0 [1] valuably builds on Professor
Citron’s substantial contributions to our understanding of
cyberharassment
in at least two ways. First, Professor Franks joins
Professor
Citron in powerfully challenging the idealistic narrative of
the internet
as a primarily egalitarian institution. Both persuasively
document the
use of cyberharassment to target and punish traditionally
subordinated
groups.
Second, Professor Franks thoughtfully responds to Professor
Citron’s
call for a conversation about what a cyber civil rights
agenda might
involve. Professor Citron started that dialogue in Cyber
Civil Rights, [2]
where I was particularly fascinated by her discussion of the
Violence
Against Women Act’s [3] prohibition on the use of
telecommunications
devices to deliver certain anonymous threats or harassment.
I am less
optimistic than Professor Citron, however, that other
existing civil rights
laws— such as Title VII, [4] Title IX, [5] and 42 U.S.C. §
1981 [6]—might capture
and address cyberharassment’s harms.
The barriers to
addressing cyberharassment under those statutes
have very little to do with the space in which the
harassment occurs but
instead everything to do with whether the harasser is
someone within the
regulated institution’s control. Cyberharassers (assuming we
can identify
them) are rarely supervisors, co-workers, teachers, or
others subject to
control by covered employers and schools. In other words,
the harasser
frequently has no connection with, and is thus not
controllable by, the
actors regulated by current civil rights laws.
Addressing such
cyberharassment instead requires a new civil rights
law. Professor Franks, for example, intriguingly proposes
that we hold
website operators liable for the cyberharassment facilitated
by their web-
sites. Here I raise some thoughts and questions for
Professor Franks and
others interested in that route.
I. WHAT WE CAN LEARN
FROM THE PAST
This, of course, is
just the most recent in a longstanding and important
series of conversations about the theoretical and practical
relationship
between speech and equality. Indeed, debates over whether
and
when we should regulate hate speech and other forms of
harassment in
various spaces are by no means new. So we might consider
when and
why we have—and have not—protected certain spaces from
harassment
in the past and then ask how the contemporary conversation
over cyberspace
compares to those past deliberations.
There may be a number
of differences between those conversations
past and present, but I flag just one for now. As a
practical matter, the
regulation of harassment at work and school took two steps.
First, Congress
claimed the space as protected by regulating conduct within
that
space: it prohibited discrimination in employment through
the enactment
of Title VII in 1964 and barred sex discrimination by
federally funded
educational institutions in 1972. [7] Those statutes’ plain
language focused
on discriminatory conduct, such as discriminatory decisions
about hiring,
firing, pay, admissions, scholarships, and so on.
Years later courts,
policymakers, and the public took the second
step when they came to understand that illegal
discrimination can include
harassment, which often (but not always) takes the form of
speech. In
other words, only later did we realize that meaningful
protections against
discrimination in those spaces required the regulation of
some speech in
those spaces as well. So, advocates first had to convince
policymakers to
regulate the space at all in the face of vigorous resistance
from opponents
who raised concerns about free market interference and the
constraint of
institutions’ discretionary choices, among others. And,
second, we later
recognized that some forms of speech in that space can
create equality
harms sufficient to justify further regulation.
Here, Professor Franks
seeks to take both steps in the same bold
move: to protect, and thus regulate, a certain space that
has not yet been
regulated and (because speech comprises a substantial part
of what happens
in that space) to regulate speech in that space.
To persuade folks to
make that big leap, one must show that the
harms of harassing speech in this space are so great as to
justify its regulation.
This strikes me as a substantial challenge, especially in
light of
our experience with civil rights legislation that targets
very tangible
harms. For example, nearly twenty years passed before this
year’s enactment of the Hate Crimes Prevention Act, [8] which
addresses acts of
physical violence in which the victims bleed, and sometimes
die. Sixteen
years after its introduction (and more than thirty years
after the introduction
of the first gay rights bill in Congress), the Employment
Non-
Discrimination Act [9]—which would prohibit job
discrimination on the
basis of sexual orientation and gender identity— has yet to
be enacted. A
cyberharassment statute strikes me as a particularly heavy
lift in light of
this history.
Can that lift be made?
In addition to preparing for a long haul, advocates
for a new cyberharassment law must answer at least two key
questions. First, can we identify an agent of
control—someone who has
the actual power to control equality harms that might occur
in that space?
Second, should we hold them liable for harms that occur in
this space? In
other words, should we regulate this space at all? Should we
consider
cyberspace a space in which participants should be protected
from harassment?
II. CRAFTING A VIABLE
CYBERHARASSMENT STATUTE
Professor Franks has
persuasively answered the first question by
identifying website operators as agents of control over the
cyberspace
they create and manage. Holding them liable, however,
triggers a number
of other challenges. The matter of remedies, for example,
raises theoretical
and practical concerns about over-deterrence. If one
parallels the
remedies available under Titles VII [10] and IX [11] to hold
website operators
liable for money damages for the injuries caused by
cyberharassment that
occurs on their sites, the potential costs to website
operators are quite
great—especially when compared to those faced by the
harassers themselves,
who (as Professor Franks notes) would simply risk being
denied
access to those websites or having their posts removed. This
dynamic
might well lead many website operators simply to prohibit
private parties
from offering comments or postings—an outcome many might
find troubling.
One response to those
concerned by that outcome might be to build
on the work of Charles Lawrence, Catharine MacKinnon, and
others in
other contexts involving hate speech and harassment. In
other words, one
might challenge a traditional zero-sum understanding of
speech and liberty
(that treats speech restrictions as inevitably shrinking the
universe of
available speech in a way that damages important First
Amendment values)
by explaining how cyberharassment actually undermines free
speech
values by silencing the voices of members of traditionally
subordinated
groups. Under this view, regulations specifically targeted
at cyberharassment
that effectively silences other speakers may actually
increase the
overall universe of expression that furthers significant
First Amendment
interests. So
there may be responses to such objections. But, on the other
hand, legitimate concerns about over-deterrence may suggest
the need to
think creatively about remedies, such that an entirely new
remedies regime
might be appropriate in this context.
This leads to the
second question: whether we should be protected
from harassment in cyberspace at all. Advocates must nail
down with
precision the underlying justification for regulating those
who control
chunks of that space if they are to develop the political
momentum for,
and to ensure the First Amendment validity of, such a
statute. In
the past, policymakers chose to regulate harassment in
employment
and education largely because such harassment caused such
great
harm to families’ economic security as well as to individual
dignity and
autonomy in important spheres of American life. Quantifying
the gravity
of harassment’s harm in those spaces not only made a strong
case for
regulation as a policy matter, but also helped justify the
regulation of
speech in those spaces as constitutional under the First
Amendment. In
other words, one way (but certainly not the only way) to
explain anti-harassment
laws’ constitutionality is to recognize the regulated speech
as
posing substantial harms without significantly furthering
traditional First
Amendment values. Indeed, we frequently understand the First
Amendment
to permit the regulation of expression where the harms of
the targeted
speech appear to outweigh its value in facilitating
significant First
Amendment interests in self-expression, the discovery of
truth, and participation
in democratic self-governance. Examples include threats,
solicitation,
defamation, fighting words, obscenity, and misleading
commercial
speech.
Drawing these lines, however, has always been difficult and
deeply
controversial. A viable cyberharassment law thus must target
specific
expression that both causes grave harms and is of little
First Amendment
value. The Supreme Court sought to strike that balance under
Title VII [12]
with its requirement that speech rises to the level of
actionable harassment
only when it is sufficiently severe or pervasive to alter
the conditions
of employment and create an abusive working environment. A
new
statute’s political and constitutional prospects thus depend
in great part
on identifying the nature and degree of cyberharassment’s
harm with
precision by carefully articulating the importance of
participating in cyberspace
to our lives today apart from any connection to workplace or
educational harm. To be sure, cyberharassment’s harms can
include potential interference with employment or educational
opportunities, as
both Professors Citron and Franks have explained. But what
if the victim
is unemployed or retired, or no longer in school? Is there
no harm caused
by her cyberharassment? Regulating on-line website operators
requires
advocates to focus on cyberharassment’s specific on-line
harms, rather
than the harms that play out off-line in areas like
employment and education
that are beyond operators’ scope of control.
CONCLUSION
Professors Citron and
Franks have taken an important first step in
identifying cyberspace harassment issues and suggesting
legislative responses
to those harms. Those seeking legislation must next make the
case that deterring women from participating in cyberspace
is a sufficiently
great harm to justify regulation of website operators or
others
who have control of that space, and then to target that
regulation to
speech that is both harmful and of relatively low First
Amendment value
(assuming that one seeks to regulate expression other than
that which is
already actionable as threatening or defamatory).
Professor Franks
starts to get at the first part of this calculus when
she writes: “A world in which members of certain groups
avoid places,
professions, opportunities, and experiences because they
fear not de jure
discrimination but de facto discrimination, based not on
their ideas but
on their bodies . . . is not a world that maximizes
liberty.” [13] Professor
Citron has similarly described how cyberharassment raises
the price that
subordinated groups must pay for their participation in
cyberspace. These
are just the first steps in a long-term project for those
who seek to develop
a statute with strong chances both to generate political
support and
withstand First Amendment scrutiny.
_______________
Notes:
† Associate
Professor, University of Colorado School of Law. Thanks
to Danielle Keats
Citron for inspiring—and the entire Denver University
Law Review staff for organizing—a terrific
symposium.
1. Mary Anne
Franks, Sexual Harassment 2.0 (University of Chicago Law
School, Working
Paper, Feb. 5, 2010), available at
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1492433.
2. Danielle Keats
Citron, Cyber Civil Rights, 89 B.U. L. REV. 61 (2009).
3. Violence
Against Women Act, 18 U.S.C.A. § 2261 (1994).
4. 42 U.S.C. § 2000e (2006).
5. 20 U.S.C. §§ 1681–88 (2006).
6. 42 U.S.C. §
1981 (2007).
7. 42 U.S.C. § 2000e (2006).
8. 18 U.S.C.S. §
249 (2009).
9. H.R. 2981,
111th Cong. (2009).
10. 42 U.S.C. § 2000e (2006).
11. 20 U.S.C. §§ 1681–88 (2006).
12. 42 U.S.C. § 2000e (2006).
13. Mary Anne
Franks, Unwilling Avatars: Idealism and Objectification
in Cyberspace, 19
COLUM. J. GENDER & L. (forthcoming Feb. 2010), available
at
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1374533.
***
CYBER SEXUAL
HARASSMENT: COMMENTS ON CITRON
AND FRANKS
NANCY EHRENREICH†
INTRODUCTION
One of the most
interesting challenges for lawyers and law professors,
of course, is creating new language and concepts to capture
new
injuries as they arise. Naturally, the impulse is to explore
existing doctrine
for appropriate analogies, and there are many interesting
possibilities
for the injury of online sexual harassment, as Professors
Citron’s and
Franks’ scholarship reveals.
The obvious place to
start is certainly sexual harassment law, as
many participants at this symposium noted. [1] I agree with
Professor Citron
that there are many historical similarities between
discussions of
cyber harassment today and the initial debates about sexual
harassment
in the workplace that occurred during the 1980s. Harassment
is being
trivialized now in quite similar ways to how it was then,
and the arguments
for seeing such behavior as non-actionable private crudeness
rather than civil rights violations are familiar as well.
Courts and commentators
in those early days routinely dismissed harassment at work
as
harmless flirting, and would-be plaintiffs were often
exhorted to seek
work elsewhere if they didn’t like the sexually charged
atmosphere that
some workplaces “happened” to have. So, the recycling of
such attitudes
into arguments such as “if you don’t like the atmosphere,
stay off the
website” is certainly not surprising.
In the sections that
follow, I’ll comment upon several aspects of the
problem of cyber sexual harassment and the presenters’
thoughts on how
to solve it, including: (1) the overstated benefits of
Internet freedom; (2)
the nature of the harm of cyber sexual harassment and
possible solutions;
(3) conflicting liberties, the pornography parallel, and
access issues; and
(4) the real danger of web regulation: censorship of
political dissent.
I. THE OVERSTATED
BENEFITS OF INTERNET FREEDOM
What I appreciate most
about Professor Citron’s Cyber Civil Rights
is the way that it gives the lie to the myth of neutrality
on the net—the
idea that free speech exists currently on the Internet and
must be pre-
served at all costs. Professor Franks picks up on that idea
as well, with
her concept of “cyberspace idealism”: “the view of
cyberspace as a utopian
realm of the mind where all can participate equally, free
from social,
historical, and physical restraints.” [2] The online
dynamics identified by
both of these commentators starkly reveal a truth as old as
the American
Legal Realists: Liberties often conflict; one person’s
freedom often compromises
another’s. In this context, this means that granting
unfettered
freedom to harassers silences those whom they harass,
foreclosing access
to the net for those individuals and depriving them of an
incredibly valuable
societal resource. “If she wants to work, she can just put
up with the
behavior,” was the refrain in the 1980s—until activists
revealed it as an
argument in favor of preserving male privilege. To say that
the web is a
sacrosanct arena that can’t be regulated is to sound the
same refrain.
But I’d like to push
the challenge to net naïveté a bit further—by
questioning the popular image of the Internet (implicitly
endorsed at
times by Professor Franks) as a place where individuals can
“escape the
physical constraints imposed on [their] identity, and . . .
exert control
over [their] representation.” [3] As Professor Franks
rightly notes, such
freedom (if it existed) would be especially beneficial to
marginalized
groups, whose bodies are so often deployed as metonyms for
their otherness.
And it seems plausible that individuals might find it
liberating in
some sense to be able to perform a different identity on the
web than they
are able to perform in “real life”—whether it’s women
masquerading as
men or men as women; people of color as white or vice versa;
sexual
minorities as straight or the other way around.
But the benefits of
such identity fluidity on the net may be more illusory
than real. And the notion that one can somehow escape
subordination
when in the virtual world sounds a familiar theme that has
repeatedly
been proven wrong: the naïve faith in technology’s ability
to overcome
social realities of power and privilege. (Recall, for
example, the
early love affair that feminists had with reproductive
technologies or the
more recent refrain that (as yet uninvented) technological
advances will
solve the problem of global warming.) The unfortunate fact
is that, because
communication and language are socially constructed,
categories
of meaning, of similarity and difference, of value and lack
of value, transcend
the material context, continuing to structure behavior and
limit
possibilities in virtual space.
It may be beneficial,
for example, for a man to pretend to be a
woman while on the web, and thereby experiment with
expressing his
feminine side without risking negative repercussions. But,
wouldn’t it be
more beneficial if he could openly express his femininity
without disguising himself? Is it liberating for a woman to be able to
masquerade as
a man in order to be taken more seriously on a blog? Is that
any different
than, say, a gay person being closeted at work?
The Internet can’t
escape culture any more than any other technology
can. While it might be a more subversive place if, for
example,
“handles” were incapable of being sex-identified (through
randomly assigning
numbers or the like), even then dominant ontologies would
prevail.
Because dominant classes operate as unmarked categories
(“man” is
the unmarked generic word for humans, etc.), the default
assumption
about any particular individual communicating on the web
would likely
be that he or she occupied one or more dominant positions
(was male,
white, etc.). Studies have shown that such defaulting to the
unmarked
category is common in human communication—for example,
people
often assume an unidentified actor is white, male, and
heterosexual.
Thus, socially constructed categories would likely
perpetuate existing
hierarchies (rendering members of marginalized groups
invisible) even if
handles were completely detached from identity.
Beyond that, class
membership would also likely be ascribed to
people based on their online personalities, behavior,
communication
styles, or other traits—just as race and sex are ascribed to
particular
physical bodies even when those bodies are ambiguous (the
phenomenon
highlighted by the “Pat” character on Saturday Night Live).
As many law
professors may remember from the days of handwritten law
school exams,
even something as innocuous as handwriting carries gendered
meaning. How many of us found ourselves (against our better
instincts)
assuming that the author of a particular bluebook was male
or female just
because of the handwriting?
The categories in
which we think and with which we order our social
world transcend the physicality of that world. This is not
to say that
there aren’t liberating aspects to the web. But they have
been seriously
overstated, skewing the debate about how much is gained and
how much
is lost from regulation of the Internet to prevent harms of
sexual (as well
as racial and other) harassment.
II. THE HARM OF CYBER SEXUAL HARASSMENT AND POSSIBLE SOLUTIONS
One
harm caused by the types of harassment that Professors
Citron
and Franks describe is that, as Professor Citron notes, [4]
such abuse imposes
a tax on Internet access for the vulnerable groups against
whom it
is directed (women, people of color, and sexual minorities,
to name a
few). Before Catharine MacKinnon so trenchantly described
the connection
between sexuality and power (and even after, unfortunately),
analysts and decision makers failed to appreciate that sexual
demands in a
context of unequal power (such as between employer and
employee)
result in discriminatory conditions of work. Race- and
sexuality-based
harassment likewise impose additional conditions of work
that dominant-group
members don’t have to endure. Similarly, when women and
other
marginalized groups are subjected to harassment on the
Internet, being
subjected to such conduct becomes a condition of access to
this crucially
important societal resource. While the disparities in power
are not as
clear on the web as in the employer/employee context, an
analogy can be
drawn to coworker harassment—which of course is illegal
under the Title
VII “hostile environment” rubric. Just as an employer who
fails to
address coworker harassment at work makes being subjected to
such
treatment a term or condition of employment, so a website
manager who
fails to control harassment by “co-users” of a website makes
being subjected
to such treatment a term or condition of access to that
site. And,
just as “go find another job” is no longer an acceptable
response to a
plaintiff’s complaint under Title VII, “go to another
website” should be
considered an unacceptable response to complaints about
limited access
due to uncontrolled harassment on a site.
Thus, for me, a
potential solution that warrants consideration is
simply a separate statute that prohibits discrimination
based on race, sex,
religion, sexuality, etc., in the provision of Internet
services. Unlike Professor
Franks, I see cyber harassment as having significant effects
in
cyberspace itself—as well as, of course, in the other arenas
she identifies
such as work or school. There is just one Internet, and it
is undeniable
that lack of access to it imposes significant social,
economic, work,
travel, and other disabilities on an individual. It’s not
just that cyber harassment
affects the victim’s ability to work or attend school. It
also limits
an individual’s access to the Internet itself. As Professor
Citron convincingly
argues, harassment on the web is a civil rights issue. I
would add
that the web itself has become so important to human
thriving that, like
employment, housing, and educational institutions, Internet
services are
something that all individuals should be able to access
without being
subjected to discrimination based on subordinated status.
A civil rights statute
directly focused on Internet access, therefore,
would address the impacts with which Professor Franks is
concerned, but
would also protect against a far wider range of impacts that
could result
when cyber harassment deters or limits individuals’ access
to the Internet
(for example, by causing them to shut down their email
account or social
networking page). [5] This statutory approach (which I lack
the space to
fully explore here) would not limit potential liability to
conduct that has
“effects in a space traditionally protected by sexual
harassment law,” as
Professor Franks proposes, [6] but instead would recognize
other effects as
equally deserving of remedy.
Of course, a statute
prohibiting discrimination in the provision of
Internet services would require providers to gather identity
information
about web users and bloggers in order to police their
sites—raising privacy
issues usefully addressed by Professor Franks. The major
confidentiality
concerns that compiling such information raises in my mind,
however,
are not the violation of privacy rights of harassers (or
those whom
providers might perceive to be harassers), but rather the
violation of privacy
rights of political dissidents. Just as telecommunications
companies
willingly turned over confidential information about phone
users to the
government in violation of federal law after 9/11, the most
serious risk of
Internet providers’ identity-gathering would be the risk
that providers
would similarly cave to government pressure in times of
political tension. [7]
III. CONFLICTING LIBERTIES, THE PORNOGRAPHY PARALLEL, AND ACCESS TO THE INTERNET
Another analogy that
the discussion during this symposium evoked
in my mind is pornography—in particular the pornography
debates of the
1980s. The parallels between those debates and the current
cyber harassment
discussion are striking. Just as Internet analysts have
probed
such quandaries as whether it is possible to be harmed by a
“virtual”
sexual attack, so pornography analysts two decades ago
pondered
whether people can be harmed by “fictional” film or
photographic depictions
of the sexual torture of women. Some readers may recall
Linda
Marchiano, known under her film name of Linda Lovelace and
the “star”
of the infamous porn film, Deep Throat. Marchiano was
coerced at gunpoint
into performing in the film, including being forced to smile
so that
she would look like she was enjoying the acts in which she
was forced to
engage. How different, one might ask, is the harm done to
her by dissemination
of that film from the harm done to a woman whose avatar is
raped online, and made to look like she/it enjoyed the
attack (as described
by Professor Franks in her article)? [8] Moreover, separate
from
questions of the harms to the particular person depicted in
each case, one
might also ask whether “mere” depictions of abusive
behaviors help perpetuate
social inequities in the broader society. Does the
widespread,
unsanctioned, and highly visible abuse of women and other
vulnerable
groups on the Internet, for example, contribute to the
devaluation of and
sense of license towards such groups? When balanced against
the (limited) benefits of unfettered net “free speech”, [9] does
such harm justify
restriction of harassment on the web?
As these questions
suggest, many of the free speech issues raised
during the pornography debates are also relevant here, and
the answers
provided by anti-pornography activists have resonance for
the cyber harassment
debates of today. Important questions raised by those
activists
are worth considering in this new context, questions such
as: Is online
harassment primarily speech? Is making a false Facebook page
conduct?
Does, or should, freedom of speech extend to protection from
non-state
violators of one’s speech? Is communication on the Internet
(as Professor
Citron suggests) a zero sum game in which one person’s (or
group’s)
speech is sacrificed when another’s is protected?
Professor Citron’s
insightful argument that allowing harassing
speech on the web silences other speech tracks precisely the
argument
made by anti-pornography activists in the 80s. And even
though the latter’s
efforts did not ultimately succeed, their analyses provide
useful
starting points for some of the debates about cyber
harassment today.
The zero sum argument, for example, reveals the incoherence
of the notion
of state nonintervention in the arena of private speech. If
some silencing
of speech will occur under any regulatory regime, then the
question
becomes: whose speech should the government constrain and
whose
should it protect? The illusion of a perfectly free
marketplace of ideas
falls to the same fate as the Lochnerian illusion of a
perfectly free economic
market. And the substantive nature of purportedly neutral
state
“nonintervention” is revealed as a free speech subsidy for
the powerful.
In fact, this line of
argument may have even more traction in the
cyber harassment context than it did in the pornography
context, because,
unlike in the porn debate where the contention was that
women were
being silenced through the delegitimation of their views,
here the argument
would be that they are actually physically silenced by being
intimidated
into withdrawing from the virtual “marketplace of ideas”
altogether.
The harm to their ability to communicate is more direct, and
the
silencing more total.
IV. CENSORSHIP OF
POLITICAL DISSENT ON THE WEB
Finally, it’s worth
mentioning the real potential dangers of government
regulation of the Internet in the current era—dangers that
few defenders
of Internet freedom mention, or perhaps are even aware of.
While the web may have the Wild West image of a free speech
frontier,
in reality it is already vulnerable to censorship. Web sites
carrying messages
of political dissent that are perceived as threatening to
the existing
power structure can and have been closed down, silencing
those voices. [10]
Perhaps the most famous example of political censorship on
the web is
Yahoo’s “assisting the Chinese government in sending four
dissidents . .
. to prison for terms of up to 10 years.” [11] Google has
also accommodated
the Chinese administration by censoring several sites (by
failing to report
them in search results) blocked by the government. [12] But
similar incidents
have occurred here in the U.S. In 2008, for example, POPLINE,
the world’s largest database on reproductive planning,
yielded to pressure
from USAID and changed its settings to ignore “abortion”
as a
search term. [13] (The dean of the Johns Hopkins School of
Public Health,
which produces POPLINE, reversed the decision once he got
wind of it.)
In 2003, the alternative journalism website YellowTimes.org
was removed
from the web by its web-hosting company for refusing to take
down footage of U.S. soldiers killed by Iraqi troops. [14]
In March 2008, the
New York Times reported that, at the request of the U.S.
Treasury Department,
a British host blocked a Spanish travel agent’s web site
advertising
travel to Cuba. [15] According to the Times, the Treasury
Department’s
Internet blacklist has 6,400 names on it. And of course
content-control
software is readily available and used by schools,
employers,
libraries and other institutions.
Rarely does such
censorship attract media coverage, much less
serve as a rallying call for anti-regulation activists. Yet
the ease and invisibility
with which politically-based restrictions on web speech have
been imposed should give any lover of free expression pause.
The more-than-
a-little-ironic contrast between, on the one hand, the
current flurry
of panels and scholarship considering whether violent,
abusive and misogynist
web expression deserves constitutional protection [16] and,
on the
other hand, the deafening silence around the censoring of
political dissent
on the net, speaks volumes about what sorts of speech are
seen as
dispensable and what sorts aren’t—and about the freedom we
do or don’t
have on the web.
_______________
Notes:
† Professor,
University of Denver Sturm College of Law. B.A., Yale
University; J.D.,
LL.M., University of Virginia.
1. Other familiar
tensions that resurface in this context involve the
First Amendment – which
I will only touch on here, but which has always served
as a limit on sexual harassment claims – and
the related topics of hate speech and pornography.
2. Mary Anne
Franks, Unwilling Avatars: Idealism and Discrimination
in Cyberspace, 19
COLUM. J. GENDER & L. (forthcoming 2010), available at
http://ssrn.com/abstract=1374533.
3. Id.
4. Danielle Keats
Citron, Cyber Civil Rights, 89 B.U. L. REV. 61, 68
(2009).
5. Of course, just as in the workplace context, the
regulation of behavior on the Internet
would have to be balanced against free speech concerns.
For a brief comment on that, see Part III.
6. Mary Anne
Franks, Sexual Harassment 2.0 4 (Working Paper),
available at
http://ssrn.com/abstract=1492433.
7. I discuss the
issues surrounding political dissent and Internet
freedom in Part IV.
8. Franks, supra
note 2, at 17 (citing Lawrence Lessig, CODE 74 (Basic
Books 1999)).
9. See supra Part I.
10. Another danger of web regulation worth considering is
the danger of repression of marginalized
sexual expression, such as gay erotica. This danger was
famously raised during the pornography
debates, in an amicus brief written by the Feminists
Against Censorship Task Force (FACT).
11. Nicholas D.
Kristof, Op-Ed., Google Takes a Stand, N.Y. TIMES, Jan.
14, 2010, at A29.
12. Google just
recently said that it will stop cooperating with the
censorship, after suffering
an attack on its system that many assume was perpetrated
by the Chinese government. See Edward
Wong, Hackers Said to Breach Gmail Accounts in China,
N.Y. TIMES, Jan. 18, 2010, available at
http://www.nytimes.com/2010/01/19/technology/companies/19google.html.
13. Catherine
Price, Abortion, Redacted, SALON, Apr. 8, 2008,
http://www.salon.com/mwt/broadsheet/2008/04/08/popline_abortion/index.html.
14. Tim Grieve, No
Dead Bunnies, No Dead Soldiers, SALON, Mar. 25, 2003,
http://www.salon.com/news/feature/2003/03/25/yellowtimes/index.html.
15. Adam Liptak, A
Wave of the Watch List, and Speech Disappears, N.Y.
TIMES, Mar. 4,
2008, at A16.
16. At this year’s
Association of American Law Schools annual conference
(held January
2010, in New Orleans), for example, a panel of big-name
scholars discussed whether even professional
censure of such behavior is appropriate.
***
THE UNMASKING OPTION
JAMES GRIMMELMANN†
I’d like to tell a
story about online harassment and extract a surprising
proposal from it. I’m going to argue that we should consider
selectively
unmasking anonymous online speakers, not as an aid to
litigation,
but as a substitute for it. Identifying harassers can be an
effective way of
holding them accountable, while causing less of a chilling
effect on socially
valuable speech than liability would.
In the end, I’ll
conclude that this proposal is unworkable due to the
danger of pretextual uses of an unmasking remedy by
plaintiffs looking
to engage in extra-legal retaliation. Even this conclusion,
though, has
something valuable to teach us about the uses and abuses of
online anonymity.
Decoupling anonymity from liability enables us to understand
more clearly what’s at stake with each.
I. SKANKS IN NYC
To set the stage,
let’s talk about Skanks in NYC. [1] That’s the name
of an anonymous blog someone created on Google’s Blogspot
service.
Actually, calling it a “blog” may be something of an
overstatement. It
consisted of five entries, all posted the same day, in which
the anonymous
author called a model named Liskula Cohen a “psychotic,
lying,
whoring . . . skank,” “Skankiest in NYC,” a “ho” and so on.
Cohen filed for a
“pre-action disclosure” order against Google to
disclose the anonymous blogger’s name so she could sue for
defamation.
The blogger objected, saying the posts were just hyperbole
and “trash
talk,” not anything actionable. The judge, however, agreed
with Cohen,
looking to the American Heritage Dictionary definition of
“skank” to
conclude that calling someone “disgustingly foul or filthy
and often considered
sexually promiscuous” is defamatory. Thus, since Cohen had a
“meritorious cause of action,” the judge ordered Google to
disclose the
blogger’s identity. [2]
In an O. Henry-esque
plot twist, the anonymous blogger turned out
to be one Rosemary Port—if not quite a friend of Cohen’s,
then certainly
a frenemy. According to an (anonymous) source who spoke to
the New
York Post, [3] the source of Port’s anger was that Cohen had
criticized the
company Port kept to Port’s boyfriend. After learning who
her antagonist
was, Cohen filed a $3 million defamation suit, but quickly
dropped it,
saying, “It adds nothing to my life to hurt hers. I wish her
happiness.”
A. Right and Wrong
Port’s conduct may
have been unfortunate, but what should we
make of Cohen’s? Although they vary in the threshold they
require the
plaintiff to meet, courts across the country agree that a
“John Doe subpoena”
of this sort should issue only where the plaintiff appears
to have a
winnable lawsuit against the (as-yet unknown) defendant.
Cohen represented
to the court that she had an urgent legal need for Port’s
identity—
to file her defamation lawsuit—and that was the basis for
the court’s
ruling. But almost as soon as Cohen had Port’s name in hand,
the lawsuit
went by the wayside. So much for urgent legal need. Was this
a hypocritical
abuse of the legal system?
Dan Solove thought so.
He’s written, “The law must restrict bad-faith
lawsuits designed solely to unmask anonymous speakers.” [4] He
saw
Cohen’s suit in precisely those terms, saying it appeared
“she was using
the lawsuit only to unmask the blogger.” [5] For him, the
Skanks in NYC
case is an abuse of the justice system.
I think Solove has
things exactly backwards. Cohen v. Google
wasn’t an abuse of the justice system, it was justice.
Rosemary Port got
exactly what she deserved. She tried to shame Cohen; the
result was that
she herself was shamed. That seems about right. There’s
something
beautifully Kantian about it. Lawrence Becker would say that
it was a
“fitting” and “proportionate” “return.” [6]
It strikes me as a
good thing that Cohen dropped her lawsuit. For
one thing, lawsuits are shockingly expensive. Cohen resolved
her beef
against Port for a small fraction of what litigation through
final judgment
would have cost. If the only response to online harassment
is willingness
to litigate, then only the rich will have any protection
against it at all. For
another, what more would Cohen have achieved by carrying her
lawsuit
through to the bitter end? Port was apparently close to
judgment-proof,
which is another way of saying that a verdict for Cohen
would have
bankrupted Port without actually achieving anything for
Cohen. And for
yet another, it’s not self-evident that Cohen would have won
a defamation
suit. I’m more confident that calling someone a “ho” and a
“whoring
skank” online is morally wrong than I am that it’s legally
actionable. In
many cases, the convoluted doctrines of defamation and
privacy law will
deny recovery for reasons that have little to do with the
blameworthiness
of the defendant’s conduct.
Perhaps this lawsuit
was pretextual. But if so, then bring on the pretextual
lawsuits! It’s better to have pretextual lawsuits that are
resolved
quickly and lead to appropriate embarrassment than
protracted lawsuits
that cause serious additional harm to the defendant. And
once we put it
this way, why not cut out the middleman? If there’s nothing
wrong with
a pretextual lawsuit brought to unmask the defendant, we
might as well
drop the fiction of the lawsuit as the basis for unmasking.
I’m proposing,
in other words, that the legal system prefer unmasking to
the standard
remedies at law. Without dwelling on the details, what if we
had a system
that routinely unmasked defendants, one that channeled
plaintiffs
into unmasking and away from damage suits?
II. A THOUGHT
EXPERIMENT
Thus, here’s a proposal for a kind of minimally invasive
surgery to
deal with online harassment. Suppose that we were to give
the victims of
online harassment an expedited procedure to unmask their
harassers.
Specifically, following a quick judicial proceeding with an
easier required
showing, a relevant intermediary would be required to turn
over
whatever it knew about the harasser (typically an IP address
or subscriber
information). In return, the plaintiff would be required to
give up
all remedies at law. These two rules, taken together, would
channel many
cases into unmasking rather than into litigation.
My intent is not to
endorse complete reciprocal transparency in all
things, along the lines of David Brin’s The Transparent
Society. [7] That’s a
recipe for madness; privacy is a basic element of the human
condition.
Most people who choose to go online without identifying
themselves
have a good reason for it, and we should ordinarily respect
that decision.
I’m also not suggesting any new data-retention requirements.
At least for
now, the Internet’s ad hoc balance—it’s easy to keep your
identity superficially
private and hard to keep it truly private—is about right.
The harassers
we really think we can reach—the AutoAdmit posters, the
lulzmobs,
the Rosemary Ports—aren’t using advanced techniques to hide
their identities.
There are many things
to like about unmasking. In the first place,
it’s particularly effective at dealing with harassment. Many
of the worst
cases involve online mobs: crowds of mutually anonymous
people who
spur each other on to increasingly nasty behavior. One of
the best ways
to bust up a mob is to call out its members by name, like
Atticus Finch in
front of the jailhouse. It rehumanizes them, activating
feelings of empathy
and shame, removing the dangerous psychological condition in
which they fear no reprisal. In this respect, visible acts
of unmasking—
which make members of the crowd more aware that their
actions have
consequences—may be a more effective deterrent than actually
punishing
them.
Unmasking also has some major advantages over other possible
responses
to anonymous online harassment. The First Amendment puts
significant limits on the use of tort law. This leads to
cases in which
harmful, wrongful speech can’t be redressed through a suit
for damages.
In response, we’ve seen equally dangerous calls to pare back
the First
Amendment’s protections. Unmasking sidesteps that dilemma.
Not all
the speech that we’d like to protect under the First
Amendment needs to
be protected as anonymous speech.
Similarly, unmasking
is a better option in many cases than holding
intermediaries liable. The typical poster to a web site is
more morally
responsible, and better able to control her own speech, than
the web site
operator, its hosting provider, or the ISP. Making any of
these intermediaries
liable is likely to lead to substantial chilling effects, as
they take
down any potentially problematic material at the drop of a
hat. Our experience
with the DMCA in this regard hasn’t been particularly
cheerful. In
contrast, requiring these intermediaries only to turn over
what information
they have on the identity of the poster is a smaller burden,
and one
that doesn’t give them bad incentives to take down too much
material. On
balance, an identification requirement is likely to be more
speech-friendly than most of the alternatives on the table.
It avoids the
excessively censorious effects of direct and intermediary
liability—but it
also helps protect the speech interests of the victims of
anonymous online
harassment, who in many cases today are forced off the web
in fear.
A. Shame, Good and Bad
Let us be clear. An
argument for regular unmasking is, in effect, an
argument for vigilantism. One of the reasons unmasking works
is that it
exposes anonymous harassers to mass shaming. Solove has
argued [8] that
online shaming can be “the scarlet letter in digital form,”
a point he illustrates
with the story of Dog Poop Girl, who was vilified by
millions on
the Internet after failing to clean up after her dog on the
subway. From
that perspective, to unmask posters is to open up Pandora’s
Box. Rosemary Port could become the next Dog Poop Girl, her
face plastered everywhere online, as millions of people mock her,
exposing her to
shame and retaliation that far exceeds anything she
deserved. Aren’t we
unleashing exactly the same forces of hate and innuendo that
we’re supposed
to be tamping down, leading to a never-ending shame spiral?
Compared with legal process and societal oversight, isn’t
this illiberalism,
pure and simple?
Perhaps. But if so,
it’s a surprisingly tolerable kind of illiberalism.
The legal system does violence, too; it uses the full power
of society and
the state against its victims in a very real and direct way.
Dog Poop Girl-level
abuse will be rare, but damage lawsuits in run-of-the-mill
harassment
cases will routinely all but wipe out defendants. If the
alternative is
being sued into bankruptcy, online shaming isn’t the worst
option out
there.
Perhaps even more tellingly, look who started the hate. As
between
the innocent plaintiff and the defendant who originally
posted mean
things about her, it seems clear which of these two ought to
bear the risk
of a disproportionate response. There’s still a plausible
fit between the
harm the shamer caused and the consequences she must endure.
And if
massive online shame for the shamer is a potential outcome,
this seems
like a singularly appropriate form of deterrence, one that
might actually
be psychologically effective with would-be harassers.
B. Retaliation
And now for my own
O. Henry-esque twist. I’ve just argued that an
unmasking option is superior on most theoretical dimensions
to traditional
lawsuits. But I don’t see a way of making it work in
practice.
Sometimes a lawsuit, with a good old-fashioned damage
remedy,
really is the best outcome. If harassment leads you to lose
your job, that’s
a real, economic harm, and compensatory damages make sense.
Forcing
a plaintiff to give up any hope of that remedy is making
matters worse.
In theory, we could
design the unmasking option so that the plaintiff
gets to choose between unmasking (with a lowered threshold)
or a lawsuit
(with the usual John Doe subpoena standard). But that’s an
awful
choice to put the plaintiff to, because of Arrow’s
Information Paradox.
Until she finds out who her harasser is, she’s not in a good
position to
choose: she can’t tell whether the harasser is
embarrassment-proof or
judgment-proof. What if she chooses the identification, only
to learn that
her nemesis is a rich recluse who enjoys victimizing women
and doesn’t
care about his own reputation?
If the unmasking
option is unfair to plaintiffs, it’s also unfair to
defendants.
You can bet that a corporate CEO would love to characterize
some salty criticism of his leadership as “harassment,”
trace it back to an
employee, and take a little revenge. Here, even if we
require the plaintiff
to give up legal remedies, identification itself imposes
serious harms. A
company that can retaliate in ways other than filing a
lawsuit would be
delighted with the unmasking option’s lowered threshold.
Thus, it turns out
that the trade at the core of the unmasking option—
get an identity in exchange for giving up the right to
sue—is
poorly matched. Sometimes plaintiffs get far too little;
sometimes they
get far too much.
C. Pretext
This conclusion,
however, tells us something important about online
privacy. Many anonymous posters justifiably fear the
pretextual plaintiff.
As soon as we lower the standard to unmask people online, we
open
the door to all sorts of disquieting uses. Companies want to
unmask
whistleblowers, and perhaps some stalkers might find a way
to use it to
learn more about their victims.
This is a classic
problem of privacy as a second-best solution. I said
earlier that people have legitimate reasons to go online
anonymously.
Our belief that those reasons are legitimate stems from the
idea that it
would be wrong for these people to have to suffer being
fired, being
stalked, being personally embarrassed, and so on. But in
many cases,
these wrongs are harms the law has principled reasons not to
redress directly,
or simply has practical difficulties in dealing with. Free
speech
rights, freedom of contract, and the difficulties of proving
causation will
mean that many people who suffer retaliation will have no
legal redress
for it. Anonymity is the best we can practically do, and so,
unless we’re
prepared to make much bigger changes to the legal landscape,
we’ll have
to protect people from pretextual unmasking.
But if the fear of
pretext is legitimate, the strength of the plaintiff’s
cause of action isn’t always a very good proxy for it. Some
plaintiffs will
have a valid lawsuit, but bring it for totally pretextual
reasons—a few
stray comments about a mid-level corporate executive could
blow a
whistleblower’s anonymity. Contrariwise, as I’ve been
arguing, there are
plenty of people who ought to be unmasked, but who haven’t
done anything
actionably tortious, given the labyrinthine folds of
defamation and
privacy law. Pretextual lawsuits need not be baseless, and
vice versa.
CONCLUSIONS
Thus, I take two lessons from this thought experiment. The
first is
that we need to decouple unmasking and litigation. The
precise inversion
I proposed—give up your lawsuit to make unmasking
easier—doesn’t
work. But we should be more creatively exploring unmasking
standards
that aren’t directly tied to the strength of the plaintiff’s
case in chief. We
should consider the pros and cons of unmasking directly, on
their own
merits, without always referring back to the lawsuit.
So, on the one hand,
in order to better protect the victims’ interests
in these lawsuits, we should find ways of dropping elements
from a typical
John Doe subpoena. Thus, for example, a plaintiff typically
needs to
show necessity: that she’s exhausted other options to learn
the harasser’s
identity. Chuck that one out the window; if the plaintiff
thinks that asking
the intermediary for the identifying information is the best
way to
learn who the harasser is, that ought to be good enough for
us. On the
other hand, to protect defendants, we should be more
explicit
about pretextual unmasking. Right now, we’re protecting
defendants by
testing the strength of the plaintiff’s case. We should
acknowledge explicitly
that the true threat is retaliation, and develop doctrines
that directly
ask whether the defendant legitimately fears retaliation
from being
unmasked. Those doctrines could then usefully be applied in
any case
where unmasking is at stake, regardless of the area of law
in which it
arises.
This
is a Legal Realist argument. It’s concerned with the social
goals the law is trying to achieve—and with what the law on
the ground
is actually doing, regardless of what the law says it’s
doing. A John Doe
subpoena standard that sees only the strength of the
plaintiff’s case is
ultimately both unjust and unstable, because it’s asking the
wrong question.
Unmasking is the very best kind of wrong answer: it helps us
understand
the question we meant to ask.
_______________
Notes:
† Associate
Professor of Law, New York Law School. This essay is
available for reuse
under the Creative Commons Attribution 3.0 United States
license,
http://creativecommons.org/licenses/by/3.0/us.
1. Wendy Davis,
Judge Rules That Model Has the Right to Learn ‘Skank’
Blogger’s Identity,
MEDIAPOST, Aug. 17, 2009 http://www.mediapost.com/publications/?fa=Articles.showArticle&art_
aid=111783.
2. Cohen v.
Google, Inc., 887 N.Y.S.2d 424, 428-30 (N.Y. Sup. Ct.
2009), available at
http://m.mediapost.com/pdf/Cohen_doc.pdf.
3. Lachlan
Cartwright et al., Secret Grudge of NY ‘Skankies’, N.Y.
POST, August 21, 2009,
at 9, available at http://www.nypost.com/p/news/regional/secret_grudge_of_ny_skankies_f6c4ttnK4
zchSR51tDJoYJ.
4. DANIEL J.
SOLOVE, THE FUTURE OF REPUTATION 149 (2007), available
at
http://docs.law.gwu.edu/facweb/dsolove/Future-of-Reputation/text/futureofreputation-ch6.pdf.
5. Posting of
Daniel J. Solove to CONCURRING OPINIONS,
http://www.concurringopinions.com/archives/2009/08/can-you-be-sued-for-unmasking-ananonymous-
blogger.html (Aug. 25, 2009, 7:04 EDT).
6. LAWRENCE C.
BECKER, RECIPROCITY (1990).
7. DAVID BRIN, THE
TRANSPARENT SOCIETY (1998).
8. SOLOVE, supra
note 4, at 1–11.
***
ACCOUNTABILITY FOR
ONLINE HATE SPEECH: WHAT ARE
THE LESSONS FROM “UNMASKING” LAWS?
CHRISTOPHER WOLF†
INTRODUCTION
I am delighted to be
part of this Symposium and honored to be included
among such distinguished fellow presenters.
This topic ties
together so many of my curricular and extracurricular
interests, so I am especially grateful for the opportunity
to speak with
you. In my “day job,” I am a partner at the law firm of
Hogan & Hartson,
focusing on privacy law. Almost thirty years ago, I started
practicing law
as a generalist litigator. For many of those thirty years, I
thought that for
sure my tombstone would read “He died with his options
open,” because
my practice alternately covered a wide array of commercial
litigation
issues, from antitrust to zoning. Fortunately for me, I had
the opportunity
to handle some of the earliest Internet law cases starting
in the early
1990’s, and that led to my concentration on privacy law
since around
1998. Related to that is my current role as co-chair of a
think tank on
contemporary privacy policy issues, the Future of Privacy
Forum. [1]
Outside the office, there are a number of non-profits I
support. At
the top of the list is the Anti-Defamation League, the civil
rights agency
better known by its initials “ADL.” I have been an ADL
activist for more
than two decades. In the mid-1990’s, my involvement as a
volunteer lay
leader for the ADL transformed from general support of the
ADL’s mission
“to fight anti-Semitism and promote justice and fair
treatment for
all” to a focus on Internet hate speech. I founded, and
still chair, the
ADL’s Internet Task Force. At the ADL, our monitoring of
white supremacists,
Holocaust deniers, homophobes, as well as racists and bigots
of all kinds, showed that while in the pre-Internet era
their messages of
hate largely were delivered to a relative few in clandestine
rallies and in
plain brown envelopes delivered through the mail, the
Internet empowered
them, along with the rest of society, to reach millions of
people (including
vulnerable children).
I. ONLINE ANONYMITY AND PRIVACY ALLOW ONLINE HATE TO FLOURISH
The
Internet, in large part because of the shield of online
anonymity,
has become the medium through which hate groups plot and
promote
real-world violence, recruit and indoctrinate like-minded
haters, and mislead
and distort information for those—like students—who
innocently
link to their content. There are, of course, notorious hate
mongers who
use their real identities and revel in the limelight. But
the vast majority
of hate spewed online is posted anonymously. The Internet
content of hate mongers—words, videos, music, and social network
postings—serves to offend the human dignity of the intended victims,
minorities, and those whom hate groups identify as “the other.” The Chief
Commissioner
of the Canadian Human Rights Commission, Jennifer Lynch,
recently
commented: “Freedom of expression is a fundamental right . .
.
[s]o is the right to be treated with equality, dignity and
respect.” [2] The
balance between free expression and the right to human
dignity is way
out of whack online. The Internet has become the launching
pad for
mean-spirited, hateful, and harmful attacks on people.
With that said, I
should point out at the outset that neither the ADL
nor I call for any restriction on the free speech rights of
those who use
the Internet for what most of society condemns as repugnant
speech. The
ADL and I are ardent First Amendment supporters. As this
group knows,
there are limits to First Amendment speech—the Nuremberg
Files case [3]
where abortion providers were targeted on a web site for
violence is a
prime example—but the boundaries of the First Amendment are
so wide
that almost anything goes, as we know.
The Internet makes it
more difficult than it used to be to follow the
teachings of Justice Brandeis that “sunlight is the best
disinfectant” [4] and
that counter-speech is the best antidote to hate speech.
Still, a lot of what
the ADL does is shine the light on hate so that the lies
embedded in the
prejudice can be revealed, and the ADL has a wide array of
educational
and other programs focusing on counter-speech. The ADL’s
work to
reduce and counter cyber-bullying is a great and current
example. An
outgrowth of my ADL participation is my involvement with the
International Network Against Cyber-Hate or “INACH,” [5] a
nongovernmental
organization based in Amsterdam. For several years I
served as chair of INACH, which is an umbrella group of
civil rights
groups around the world concerned about Internet hate. Of
course, in
countries without the First Amendment—that is, everywhere
else in the
world—the restrictions on legislating speech are not nearly
as robust as
here in the United States. In many parts of Europe, for
example, it is a
crime to deny the Holocaust or display Nazi symbols. So my
fellow
members of INACH often take issue with my American version
of free
speech. At a conference on Internet hate speech in Paris
hosted by the
Government of France, a former Minister of Justice shouted
in my direction,
“Stop hiding behind the First Amendment.” [6] But, as I
responded
then, with a borderless Internet, and the ability of many
from around the
world to launch their hate speech from the U.S., the rest of
the world has
to deal with the First Amendment in crafting strategies to
counter hate
speech.
II.
ACCOUNTABILITY FOR ONLINE HATE SPEECH: WHAT ARE THE
LESSONS FROM “UNMASKING” LAWS?
There is no question
that people take advantage of the privacy that
online anonymity gives them to say and post and distribute
hate-filled
content that they most likely would not if personal
identity and accountability
were required. The comments posted every day to news
articles
on mainstream newspaper sites demonstrate what I mean. In
the
wake of the Bernie Madoff scandal, the anti-Semitic rantings
posted in
comments to news articles got so bad that the Palm Beach
Post shut
down the comment function. [7] And in the world of
cyber-bullying, as bad
as playground taunts might be, they pale in comparison to
the online
harassment launched anonymously from computers. The risks of
being
identified to a teacher or parent are far less online than
in the schoolyard.
And that shield of
anonymity is exponentially greater when we talk
about general online interactions, from maintaining
websites, to blogging,
to posting comments to mainstream news sites. In that
regard, just
imagine if ICANN ever moves to an anonymous WHOIS
registration
scheme for domain names, as has been proposed. Domain hosts
thus far
have been identifiable and accountable because the registration
information that must be published for every domain name makes them
relatively easy to identify. Shielding from public view the names of
registrants
is a decidedly bad idea for a range of reasons too long to
address
here today. At the top of the list is a loss of
accountability.
A. Legal Tools to
Identify Online Wrongdoers
So, let me now turn to
a couple of identification schemes familiar to
some of us, to frame the discussion on whether there are
legal tools to
identify online hate-mongers.
In the world of online
copyright infringement, the identification of
anonymous online wrongdoers is not a revolutionary concept.
Under the
Digital Millennium Copyright Act or DMCA, even without
filing a lawsuit,
a copyright owner can obtain a subpoena directed to a
service provider
to identify alleged infringers of copyrighted material. [8]
The subpoena
requires online service providers, like ISPs and colleges
and
universities, to expeditiously disclose to the copyright
owner information
sufficient to identify alleged infringers. That
identification right only
applies to users hosting content through an online service
and not those
who, as is far more common, use peer-to-peer networks to
upload and
download.
Recall the controversial case a few years back in which
Verizon
won its argument in the D.C. Circuit that the Recording Industry
Association of America could not use the expeditious subpoena provisions
of the DMCA with respect to peer-to-peer infringers but could
only use them for materials actually hosted on Verizon’s
Internet service. [9]
As a result, the RIAA and other content owners are forced to
file John
Doe lawsuits and then seek discovery as to the identity of
the John Does
using peer-to-peer technology to illegally download
copyrighted material.
Fortunately for the content owners, we have witnessed some
cooperation
from ISPs and colleges and universities to send notices of
infringement
to infringers that the content owners would not be able to
identify on their own. Around the world, and perhaps soon in
the US,
new schemes of graduated enforcement against online piracy
are emerging
whereby user privacy is preserved but the copyright laws are
enforced.
Turning from copyright to defamation, Section 230 of the
Communications
Decency Act (CDA) is the federal statute that shields
websites
from lawsuits arising out of third-party content and
communications online.
[10] The scope of Section 230’s immunity for online services is
extraordinarily
broad. Still, dozens of lawsuits have been brought in state
and federal courts concerning the CDA's immunity provisions,
seeking to
chip away at the breadth of the immunity and to hold online
companies
responsible for content posted by third parties. The reason
there is so
much litigation seeking to strip online services of immunity
for the
speech of others is that those others, cloaked in anonymity,
are so hard to
find, and when found, likely do not have deep pockets to
satisfy a hoped-for
judgment.
Plaintiffs seeking redress for online defamation, for the
most part,
have to identify and track down the person responsible for
posting the
content, and there has been significant litigation over the
standards to be
used in evaluating a request to unmask someone accused of
online defamation
or other tortuous wrongdoing.
I remember the day not
so long ago when a lawyer using a prelitigation
discovery tool such as that available in New York [11] could
simply
ask for a subpoena with little in the way of a showing of
need, and
get the requested subpoena. But then online liberty groups
such as the
Electronic Frontier Foundation and others monitored the
dockets, got
involved, and pushed for a high threshold standard for
disclosure. A
trend is emerging whereby the standards articulated in
Dendrite
Int’l, Inc. v. John Doe No. 3, 775 A.2d 756 (N.J. Super. Ct.
App. Div.
2001) are becoming the common requirements. Under Dendrite, a trial
court confronted with a defamation action in which anonymous speakers
or pseudonyms are involved and a subpoena is sought to unmask the
alleged wrongdoer should require the plaintiff to (1) make a reasonable
attempt to notify the person, (2) give that person a reasonable time to
respond, (3) identify the allegedly defamatory statements, and (4) make
a substantial showing of proof on each element of the claim; if the
plaintiff satisfies these four requirements, the judge must then
(5) balance First Amendment interests.
This balancing test
with respect to issuing and enforcing a subpoena
to unmask someone accused of online defamation is generally
viewed as
more protective of privacy – of shielding the identity of
those accused online
of wrongdoing. Yet, the application of the standard has
resulted in
orders going both ways, with a recent uptick in orders
requiring the disclosure
of the alleged wrongdoers. In his presentation today,
Professor
Grimmelmann provides a compelling analysis of the competing
interests
in disclosure where an actual legal right has been invaded.
An opinion piece
recently appeared in the Cleveland Plain Dealer
on the heels of a New York state court order requiring
Google to turn
over the identity of a blogger accused of defamation. The
opinion piece
was authored by the founder of a social networking company,
J.R. Johnson.
In the piece, [12] Mr. Johnson concluded we are witnessing
what he
saw as a powerful shift away from anonymity online and
toward accountability.
To support his conclusion that people online are embracing
accountability, he cited the New York state court case where
the judge
ordered Google, which owned the blogging software at issue,
to turn
over the e-mail address of an anonymous blogger because the
judge determined
that content on the blog may be defamatory. The blogger
turned
around and sued Google for millions of dollars for not
protecting anonymity.
[13] Johnson observed: “In
the past, most online comments posted in response
to a case like this typically defend anonymity. Often, the
commenters
themselves are anonymous and obviously sympathize with
anyone
being forcibly unmasked.”
The comments with
respect to the Google blogger case highlighted
what Johnson believed to represent a shift in overall tone
and opinion
regarding anonymity. One comment said, “OK, let's get this
straight. A
blogger using a free media service defames someone while
hiding behind
anonymity and then when she is charged with having to take
responsibility
for making such defaming statements sues the media service
for her
having to do so. Anyone else feel sick?” Another boiled it
down simply,
“I'm glad this Blogger's identity was revealed. Trashing
someone else
and hiding behind anonymity is cowardly.”
Johnson added
his own views:
For too long, we
have accepted the idea that the Internet is the supposed
“Wild West” communication medium where people say
whatever
they want without consequence. Granted, there are valid
and
important reasons for having some degree of anonymous
contribution,
such as whistle-blowing and political expression.
However, with
the propensity for anonymous contribution to be so
negative and
hateful, we have also suffered an untold loss as a
result.
Most online contribution is from a very small minority
of people.
Studies report anywhere from 1 percent to 20 percent of
the online
population is actually contributing; I'll just use 10
percent. If we are
getting contributions from such a small but vocal
minority, we are
losing out on what 90 percent of the online population
has to say.
One of the
roadblocks to getting the other 90 percent to contribute
has been the negative culture that has been acceptable
online. But,
with this recent shift toward accountability, we finally
stand to benefit
from the ocean of untapped potential that lies in those
who may
now feel more welcome to participate in a more evolved
online
community.
. . . More people
will contribute, increasing not only the quality of
what's written online but, in turn, our mutual
understanding of one
another. More understanding begets more tolerance and a
more
thoughtful society as a whole.
The columnist, Mr.
Johnson, was talking about online defamation
and unmasking the perpetrator, but he could just as easily
have been talking
about online hate speech.
But, obviously, the
dispositive difference between identifying online
infringers and online defamers, and identifying those
engaged in
online hate speech is that the former category involves
“speech” which is
not protected by the First Amendment. There is no legal
vehicle to seek
the identity of an online proponent of hate and intolerance
except in a
distinct minority of cases where hate speech crosses the
line into unprotected
territory, such as direct threats addressed to identifiable
individuals,
or someone identifies the host of a web site through the
WHOIS registry.
The First Amendment gives license to remaining anonymous. No
court
will issue a subpoena to unmask people “merely” engaged in
hate
speech.
But
what about a law that provides that while there are no legal
consequences
for most hate speech, people should be required to be
identified,
to be held accountable in society, just as they might be in
the offline
world?
B. KKK Unmasking Laws
More than 18 states
and localities have over the years passed “anti-masking”
laws that make it a crime to wear a mask in public. Most of
the
laws were passed in response to activities of the Ku Klux
Klan. New
York City Corporation Counsel and my former law partner
Michael Cardozo argued in 2004 to the Second Circuit with
respect to a
New York City anti-masking ordinance that “New York's
anti-mask law
was . . . indisputably aimed at deterring violence and
facilitating the apprehension
of wrongdoers . . . [and that] the statute was not enacted
to
suppress any particular viewpoint.” [14] The Second Circuit
agreed with
Mr. Cardozo in that case and found that the mask “does not
communicate
any message that the robe and hood do not” and its
expressive force was
therefore “redundant.” [15]
It was believed at the
time of the Second Circuit ruling that the interest
of police in maintaining the law included new concerns over
the
role that masks might play in a post-9/11 New York City,
where security
concerns in public gatherings and demonstrations expanded.
Even with that recent
outcome in the Second Circuit, there are First
Amendment issues at stake with anti-masking statutes beyond
the expressive
speech issues. In a series of cases, the Supreme Court has
made
it clear that citizens have the right to communicate and associate
anonymously, without fear of harassment or reprisals by others
who oppose
their views.
For example, the 1958 Supreme Court case NAACP v. Alabama
[16]
made it clear the government cannot require groups to reveal
members’
names and addresses unless public officials have a
compelling need for
the information and no alternative means of obtaining it.
And, as the Supreme
Court pointed out in McIntyre v. Ohio Elections
Commission, [17] a 1995 case striking down an ordinance
prohibiting
the anonymous distribution of political leaflets: “Anonymity
is a shield
from the tyranny of the majority. It thus exemplifies the
purpose behind
the Bill of Rights, and of the First Amendment in
particular: to protect
unpopular individuals from retaliation—and their ideas from
suppression—
at the hand of an intolerant society.”
Notwithstanding this
Supreme Court precedent, the Second Circuit
upheld the New York City anti-masking ordinance, and
Georgia’s highest
court ruled in 1990 that the state’s anti-masking law was
enacted
to protect the public from intimidation and violence and to
aid law enforcement
officials in apprehending criminals, and these purposes far
outweighed the Klan’s right to associate anonymously.
Unlike the laws on
disclosing member lists struck down by the U.S.
Supreme Court, the Georgia court concluded the anti-masking
laws do
not require the Klan to reveal the names and addresses of
its members,
nor do they stop Klan members from meeting secretly or
wearing their
hoods on private property. The anti-masking law, in the
words of the
court, “only prevents masked appearance in public under
circumstances
that give rise to a reasonable apprehension of intimidation,
threats or
impending violence.” [18]
C. Unmasking Laws as a
Model for Fighting Online Hate?
Some have suggested
the KKK anti-masking laws might serve as
models for a law requiring online identification of those
who engage in
hate speech. For example, last year a Kentucky legislator
proposed a ban
on the posting of anonymous messages online. [19] The
proposed law would
have required users to register their true name and address
before contributing
to any discussion forum. The stated goal was the elimination
of
“online bullying.”
The apparent impetus
of the Kentucky bill was the growing popularity
of the now defunct JuicyCampus.com, a “Web 2.0 website
focusing
on gossip” where college students posted lurid—and often
fabricated—
tales of fellow students’ sexual encounters. The website
billed itself as a
home for “anonymous free speech on college campuses,” and
used
anonymous IP cloaking techniques to shield users’
identities.
There are a host of problems with the proposed Kentucky law,
which presumably is why it made little progress in the
legislature. Similar
proposals requiring online identification would face similar
hurdles.
First, a broad prohibition on anonymous speech (which is
essentially
what the law would create) surely would run afoul of the
Supreme
Court’s views on the right to remain anonymous set forth in
McIntyre.
Second, the requirement that real names be used implicates
NAACP v.
Alabama as it would effectively be state law-ordered
identification of a
person’s views and affiliations. Third, any attempt to
define a more limited
category of speech for which accountability is required
would face
First Amendment problems. Most hate speech, no matter how
objectionable,
is permitted under the First Amendment and defining what is
in or
out of bounds is nearly impossible in the abstract. Third,
enforcement in
this technological work-around age likely would be futile.
Finally, the
same laws designed to deter online defamation and harassment
can also
be used to target political dissent or silence
whistleblowers for whom the
option of remaining anonymous is critical. China requires
real-name registration
for a range of online activity precisely because of its
chilling
effects. Thus the KKK anti-masking laws must be viewed as
sui generis,
not easily imported online.
III. PRIVACY AND
ACCOUNTABILITY: THE LIMITED ROLE OF LAW AND
THE ROLE OF THE ONLINE COMMUNITY
In a recent speech,
FTC Consumer Protection Head David Vladeck
quoted science-fiction writer David Brin who said, “when it
comes to
privacy and accountability, people always demand the former
for themselves
and the latter for everyone else.” [20] Professor Anita
Allen wrote in
her book Why Privacy Isn’t Everything that “although privacy is
important,
accountability is important too. Both in their own way
render us more fit
for valued forms of social participation.” Professor Allen
and David
Vladeck both advocate for privacy and accountability. Which
virtue wins
their advocacy depends on the circumstances.
I also advocate for
both privacy and accountability. And that is why
at conferences on hate speech around the world in which I
have participated,
I have said it is frustrating as a lawyer not to be able to
come up
with a legal solution to the problem of hate speech, a problem that
often prompts
people to exclaim: “There oughta be a law.” The laws
protecting privacy,
including principally the First Amendment, overwhelm our
ability to
craft laws on accountability.
The law is a tool that
can be held in reserve for the clearly egregious
cases, but we have seen the untoward consequences of
stretching
the law to cover hate speech—such as contorting the Computer
Fraud
and Abuse Act to prosecute Lori Drew, the woman who
pretended to be
a 13-year-old boy on MySpace and whose taunts caused a young
girl,
Megan Meier, to commit suicide. You will recall a federal
court ultimately
rejected the use of the computer law to fight online hate in
that
case.
And so
I often end, as I do today, by turning to the online
community
rather than lawyers to address the problem of hate speech,
and especially
accountability for online hate speech.
I hope Mr. Johnson,
the opinion columnist, is right—that there is a
trend online towards accountability. Certainly, the use of
real names on
the wildly popular social networking site Facebook is
perhaps changing
the culture online. There is an opportunity for other online
companies—
who are not constrained by the First Amendment in setting
rules of use
for their private services—to require real names for people
seeking to
post content, so people know they will be held accountable
for what they
say or do. There will still be plenty of places online where
people can
hide behind the shield of anonymity, but the big players can
start to
change the culture.
Regardless of a
requirement of real-name identification, online
companies should have and enforce Terms of Use that prohibit
hate
speech. And users of such services should be provided with a
simple
procedure for communicating with providers to ensure
complaints can be
given and companies act on them (or reject them) in a timely
fashion. I
also am curious about the effects on online discourse of the
adoption of identity management tools—the tools being proposed by
Microsoft and others to protect privacy and prove identity. Their global
use would help users understand that, while they can control their
privacy, the tools carry obligations as well as benefits, such as
accountability.
One obvious tool to
promote privacy and accountability is the early
and regular online education of the next generation of
“digital natives,”
teaching them online etiquette and that even with assumed
anonymity,
they can be held to answer for what they do online. The
“permanent record”
of the Internet can hinder educational, job and social
opportunities
and kids need to better understand that. And when they do,
maybe they
will constrain the base instinct to engage in bullying and,
later in life,
hate speech.
CONCLUSION
There are many extra-legal opportunities for the online
community
to take action that will serve to diminish online hate. My
remarks here
are intended to start the discussion of what might be done
by private actors
online to create a culture of online accountability, and I
hope that I
have stimulated some thinking and new ideas. I look forward
to continuing
the discussion. Again, many thanks to the University of
Denver for
hosting me at this Symposium.
_______________
Notes:
† Partner at Hogan
& Hartson LLP and Chair, Anti-Defamation League Internet
Task Force.
1.
http://www.futureofprivacy.org.
2. Jennifer Lynch,
Hate Speech: This Debate is Out of Balance, THE GLOBE
AND MAIL,
available at http://www.theglobeandmail.com/news/opinions/hate-speech-this-debate-is-out-ofbalance/
article1178149.
3. Planned
Parenthood of the Columbia/Willamette, Inc. v. Am.
Coalition of Life Activists,
290 F.3d 1058 (9th Cir. 2002) (en banc), cert. denied,
123 S. Ct. 2637 (2003).
4. Louis Dembitz
Brandeis, What Publicity Can Do, Other People’s Money,
ch. 5, p. 92
(1932).
5. http://www.inach.net.
6. See Christopher
Wolf, A Comment on Private Harms in the Cyber-World, 62
WASH. &
LEE L. REV. 355, 361 (2005 ).
7. John Lantigue,
Madoff Scandal Spurs Anti-Semitic Postings on Web, PALM
BEACH POST,
December 28, 2008, available at
http://www.palmbeachpost.com/search/content/nation/epaper/2008/12/18/a1b
_madoffweb_1219.html.
8. 17 U.S.C. §
512(h) (2007).
9. Recording
Indus. Assoc. of Am. v. Verizon Internet Servs.,
351 F.3d 1229 (D.C. Cir. 2003), cert. denied, 125 S. Ct. 309 (2004).
10. 47 U.S.C. §
230 (2007).
11. N.Y. C.P.L.R.
3102(c) (2008).
12. J.R. Johnson,
Accountability’s Hot and Anonymity’s Not, CLEVELAND
PLAIN DEALER,
September 14, 2009, available at
http://www.cleveland.com/opinion/index.ssf/2009/09/accountabilitys_hot_anonymitys.html.
13. See Chris
Matyszczyk, Outed “Skanks in NYC” Blogger to Sue Google,
CNET NEWS,
Aug. 24, 2009, available at
http://news.cnet.com/8301-17852_3-10315998-71.html.
14.
http://www.law.com/jsp/law/LawArticleFriendly.jsp?id=900005541417
15. Ku Klux Klan
v. Kerik, 356 F.3d 197 (2d Cir. 2004).
16. 357 U.S. 449
(1958).
17. 514 U.S. 334 (1995).
18. State v.
Miller, 260 Ga. 669 (1990).
19. Posting of
purefoppery to Harvard Law’s The Web Difference Blog,
http://blogs.law.harvard.edu/webdifference/2008/03/17/kentucky-to-ban-online-anonymity/
(Mar.
17, 2008 12:33 PM).
20. David C.
Vladeck, Director, FTC Bureau of Consumer Protection,
Promoting Consumer
Privacy: Accountability and Transparency in the Modern
World at New York University (Oct. 2,
2009) available at
http://www.ftc.gov/speeches/vladeck/091002nyu.pdf.
***
ONLINE SOCIAL NETWORKS
AND GLOBAL ONLINE
PRIVACY
JACQUELINE D. LIPTON, PH.D.†
INTRODUCTION
Web 2.0 technologies
pose new challenges for the legal system, distinct
from those that arose in the early days of the Internet. Web
2.0 is
characterized by participatory interactive technologies such
as online
social networks (such as Facebook and MySpace), massive
online multiplayer
games (such as Second Life and World of Warcraft) and wikis
(such as Wikipedia and Wikinews). The participatory nature
of these
platforms makes it more difficult to classify online
participants as either
information content providers or consumers—classifications
that were
fairly typical of earlier technologies. Content providers
were generally
held liable if they infringed laws relating to copyrights,
trademarks, occasionally
patents, defamation, and privacy rights. Consumers generally
avoided such liability. However, as consumers increasingly
became content
providers themselves—on early file sharing platforms such as
Napster,
for example—the lines between production, distribution and
consumption
of online information became blurred.
This aggregation of
online roles is readily apparent in the context of
online social networks (OSNs) such as Facebook and MySpace.
While
the OSN provider is the entity that makes available the
platform for online
interaction, the members take on the various roles of
content creator,
distributor, and consumer. Members are also the subjects of
much online
content shared on OSNs: for example, a Facebook member (or
even a
non-member) may easily become the subject of gossip and
pictures created
and distributed by OSN members over the network. Because of
the
wide scale sharing of information about private individuals
on OSNs,
commentators have begun to raise concerns about privacy in
this context.
[1] Individual privacy rights, difficult to protect at the best of times,
are easily reduced to almost nothing in the context of OSN
interactions.
This Comment aims to
emphasize some of the more obvious limitations
of existing privacy laws in the OSN context. The discussion
focuses
on the E.U. Data Protection Directive [2] and its potential
application to
conduct on OSNs. The Directive is one of the most
comprehensive attempts
to protect privacy in the digital age, in contrast to the
piecemeal,
sectoral approach to privacy taken in countries like the
United States. [3]
However, even the Directive is limited in its ability to
apply to OSNs.
Despite being drafted in the wake of the Internet revolution
and taking
early Internet technologies into account, the Directive’s
privacy protections
are now dated in their application to OSNs. Nevertheless,
lawyers and policy makers might learn valuable lessons from
the current
gaps and limitations in applying the Directive to OSNs.
These lessons
might usefully inform future developments in global privacy
discourse.
I.
THE DATA PROTECTION DIRECTIVE, OSNS AND UNRESOLVED ISSUES
The E.U. Data
Protection Directive aims to protect individual privacy
by imposing certain obligations on those who process
personal data.
The notions of “processing” and “data” are defined very
broadly within
the Directive [4] in an attempt to make the Directive as
technology neutral
and future proof as possible. Entities that are defined as
data controllers
or data processors are required to conform to certain
requirements including
limiting the amount and nature of information collected
about
individuals, [5] and ensuring that individuals have access
to data collected
about them. [6]
Although the Directive
was intended to be technology neutral,
OSNs pose some new privacy challenges outside the initial
contemplation
of the drafters. At the time the Directive was implemented,
the main
concern of the drafters was to curtail practices involving
the unbridled
aggregation, use, and analysis of text-based dossiers about
private individuals.
These dossiers might be compiled by governments or private
entities, and used for all kinds of purposes, including
public security,
crime prevention, and targeted marketing. At the time,
little thought was
given to aggregations of large amounts of personal
information for predominantly
social purposes—although the Directive does contain an exemption
for the processing of personal data: “by a natural
person in the
course of a purely personal or household activity.” [7]
A. Defining “Data
Controller”
In May of 2009, an independent working party reviewed the
Directive’s
application to OSNs and identified a number of uncertainties
inherent
in the application of the Directive to this context. One of
the key
issues discussed by the working party revolved around the
appropriate
identification of a “data controller” in the OSN context.
While an OSN
provider like Facebook is obviously a data controller for
these purposes,
it is less clear whether and, if so, when, other
participants might be so
defined. Application providers, for example, might be data
controllers in
circumstances where they develop add-on applications for OSN
users. [8]
The more important question, however, is when members of
OSNs might
themselves be data controllers for the purposes of the
Directive.
The working party noted that in most cases OSN members will
be
data subjects, rather than data controllers. In other words,
they are typically
the people whose information needs to be protected, rather
than the
people who need to protect others’ information. However,
there are
clearly circumstances in which individuals interacting
online should be
subject to obligations to take care for others’ privacy
rights. The working
party identified a number of circumstances in which an OSN
member
might be regarded as a data controller under the Directive,
and would not
be able to take advantage of the “personal or household use”
exemption.
These circumstances include situations in which an OSN
member:
(a) acquires a
high number of third party contacts including people
who she does not actually know in the real world; [9]
(b) opens her
profile, including information about other people, to
the public at large rather than restricting it to a
selected group of contacts;
[10] and (c) acts on
behalf of a company or association to advance a
commercial, political or charitable goal. [11]
B. Categorizing Data
Another
issue that has been particularly challenging in the OSN
context is that of the format of the information being
processed. While
the Data Protection Directive was drafted largely with
aggregation of
text-based data in mind, much of the information exchanged
on OSNs is
in pictorial, video and multi-media formats. The Directive
itself is not
expressly limited to text-based data and the drafters did
contemplate that
it should also cover “sound and image” data as technological
capabilities
improved over time. [12] However, there is little clarity as
to how this information
should be classified and protected under the Directive.
In particular, the
Directive distinguishes between standard “personal
data” and “special categories of personal data” such as data
revealing
racial or ethnic origin, political opinions, religious or
philosophical beliefs,
trade-union membership, health or sexual life. [13] These
special categories
are given greater protection than other data under the
Directive.
Information within these categories may not be processed at
all unless
one of a limited number of exceptions applies, [14] the most
important of
which is probably the data subject’s consent to the
processing. [15] In the
OSN context, the question has arisen as to whether pictures
of data subjects
should automatically be considered as coming within the
special
categories of data and subject to heightened protection. The
argument in
favor of treating images as a special category is that they
can be used to
identify a person’s racial or ethnic origins or may be used
to deduce a
person’s religious beliefs and some health data. [16]
While some European
Union Member States have domestic laws
under which images are specially protected data by default,
the 2009
working party on the Data Protection Directive rejected
making this approach
into a general rule. [17] The working party took the view
that images
on the Internet are not sensitive data per se unless the
particular images
are “clearly used to reveal sensitive data about
individuals.” [18] The working
party also noted that to the extent an OSN service provider
like Facebook
creates a user profile form that includes spaces for
particularly sensitive
data, the service provider must make it clear to members
that answering
any of those questions is completely voluntary. [19]
C. Third Party Data
Additionally, the
working party raised concerns about the collection
and collation of third party data (i.e., data about
non-members). These
practices are of questionable validity under the Data
Protection Directive.
The working party noted with particular concern that an OSN
provider
might send an invitation to a non-member to join the network
and take
advantage of a profile the OSN had already created by
piecing together
information contributed by other users who may be real world
friends of
the non-member. In this case, the OSN provider would likely
be violating
European Union regulations that prohibit the sending of
unsolicited
commercial emails – or spam – for direct marketing purposes.
[20]
CONCLUSION
The above discussion raises just a few of the more salient
challenges
posed for privacy law by OSNs. Obviously, if a law as
comprehensive
as the Directive is challenged by the realities of online
social
networking, the piecemeal laws in countries like the United
States are
unlikely to be able effectively to protect privacy in this
context. Several
commentators have talked about the limitations of American
privacy tort
law in the context of OSNs, and the need to rethink our
approach to privacy
regulation in this context. [21] The European Union
experience with
the Directive may give some guidance to any ongoing law
reform efforts
in jurisdictions such as the United States, particularly as
digital privacy
law reform now needs to be a more global initiative.
_______________
Notes:
† Professor of Law
and Associate Dean for Faculty Development and Research;
Co-
Director, Center for Law, Technology and the Arts;
Associate Director, Frederick K. Cox International
Law Center, Case Western Reserve University School of
Law, 11075 East Boulevard, Cleveland,
OH, 44106. JDL14@case.edu, 216-368-3303.
1. See Patricia
Sánchez Abril, A (My)Space of One’s Own: On Privacy and
Online Social
Networks, 6 NW J. TECH. & INTELL. PROP. 73 (2007);
Patricia Sánchez Abril, Recasting Privacy
Torts in a Spaceless World, 21 HARV. J.L. & TECH. 1
(2007); James Grimmelmann, Saving Facebook,
94 IOWA L. REV. 1137 (2009); DANIEL SOLOVE, THE FUTURE
OF REPUTATION: GOSSIP,
RUMOR, AND PRIVACY ON THE INTERNET 1 (2007); Jacqueline
Lipton, “We, the Paparazzi”: Developing
a Privacy Paradigm for Digital Video, 95 IOWA L. REV.
(forthcoming 2010); Jacqueline
Lipton, Mapping Online Privacy, 104 NW. U. L. REV.,
(forthcoming Mar. 2010).
2. European Union
Directive 95/46/EC of the European Parliament and of the
Council, 1995
O.J. (L 281) 31 [hereinafter, EU Directive] (regarding
the protection of individuals with regard to the
processing of personal data and on the free movement of
such data).
3. RAYMOND SHIH
RAY KU & JACQUELINE LIPTON, CYBERSPACE LAW: CASES AND
MATERIALS (2d ed. 2006) (contrasting United States and
European Union approaches to privacy
law).
4.
EU Directive, supra note 2, at Art. 2(a) (defining
“personal data”), 2(b) (defining “data
processing”).
5. Id. at Art. 7,
Art. 8.
6. Id. at Art. 12.
7. Id. at Art.
3(2).
8.
Article 29 Data Protection Working Party, Opinion 5/2009
on online social networking,
adopted on 12 June, 2009 (01189/09/EN, WP 163), ¶ 3.1
[hereinafter Opinion 5/2009].
9. Id. at ¶ 3.1.1.
10. Id. at ¶
3.1.2.
11. Id. at ¶ 3.1.1.
12. EU Directive,
supra note 2, at Art. 33 (“The Commission shall examine,
in particular, the
application of this Directive to the data processing of
sound and image data relating to natural persons
and shall submit any appropriate proposals which prove
to be necessary, taking account of
developments in information technology and in the light
of the state of progress in the information
society.”)
13. Id. at Art.
8(1).
14.
Id. at Art. 8(2).
15. Id. at Art.
8(2)(a). See also Opinion 5/2009, ¶ 3.4.
16. Id.
17. Id.
18. Id.
19. Id.
20. Id. at ¶ 3.5.
21. See supra note
1.
***
PERSPECTIVES ON
PRIVACY AND ONLINE HARASSMENT: A
COMMENT ON LIPTON, GRIMMELMANN, AND WOLF
JOHN T. SOMA†
INTRODUCTION
James Grimmelmann’s
observations on the “Skank” incident in
New York City [1] highlight the developing computer and
telecommunications
technologies as they impact the traditional harassment legal
area.
The Skank affair resulted in the victim persuading the court
to unmask
the alleged harasser/libeler. As noted by Chris Wolf, the
end result was
the court followed the doctrines previously developed in
Dendrite Int’l v.
John Doe No. 3. [2] The Dendrite decision sets out a classic balancing test between privacy, First Amendment anonymous speech rights, and the rights
of an alleged victim. In the Skank affair, the court applied
this classic
balancing test in an entirely modern context.
This brief comment
offers three perspectives on the current cyber
civil rights debates concerning online harassment, privacy,
First Amendment
rights, and civil liability. Although the cyber civil rights
agenda
might appear to present novel questions of law and policy,
this comment
suggests we have much to gain from three perspectives.
First, we can
learn much by examining the historical tension between free
speech and
privacy. Second, we should look to other instances where
courts were
confronted with “new” technologies. And third, we can learn
from other
countries’ approach to privacy and harassment online.
I. THREE PERSPECTIVES
ON A “NEW” PROBLEM
A. What We Can Learn
From the Past: The Historical Civil Rights Perspective
While the challenges
faced by the cyber civil rights movement are
very real, [3] we should not lose sight of our past experiences
with similar
issues. As Chris Wolf and Robert Kaczorowski wisely reminded
us, our
court system faced the difficult intersection of privacy,
speech, and harassment during the KKK’s reign of terror in the late
nineteenth and early
twentieth centuries.
There should be little
doubt that technology evolution will continue
to challenge our existing legal paradigms. We should not,
however, succumb
to the notion that this reality necessitates a wide scale
revolution in
the law. While the “technology revolution” may seem
startling to many
people, our society—and our court system—has been here
before. Two
hundred and forty years ago, a typical lawsuit might have
involved a
dispute between two landowners over allegedly libelous
information
printed in a local newsletter. One hundred and twenty years
ago, the
newsletter might have been a telegram. Sixty years ago, it
may have been
a video recording, and as recently as the 1990’s,
information on a listserv.
Today’s blogs and
message boards indeed present unique challenges
to our concepts of privacy and communication. But in many
ways,
they are simply the most recent iteration of a constantly
evolving problem.
While these technologies may seem truly transformative in
the short
run, we would do well to remember that our legal system is
not entirely
unacquainted with change.
B. What We Can Learn
From Other “New” Technologies: The Courts’
“Balanced” Perspective
From the perspective
of technology and privacy, the “Skanks in
NYC” case provides yet another example of the constant need
for courts
to establish appropriate balancing tests, and apply them to
new situations
in which the technology has changed. Each time technology
advances
there is, and always will be, an appropriate balancing test,
which includes
a subjective element in the final decision. Those looking to apply hard, “bright line” rules to situations involving new
technology and the
law will likely encounter great frustration—and rightfully
so. Bright lines
create rigid precedent. Technology evolves very quickly, and
today’s
rule will often lead to tomorrow’s undesirable result. Our
best hope,
therefore, is to preserve a balancing of interests when
confronted with
new or unexpected situations.
Indeed, the resolution
in “Skanks in NYC” supports the argument in
favor of balance. Once the unmasking decision was made, the
target was
free to proceed with the litigation or simply drop it. As plaintiff, the victim had to weigh the cost of proceeding with the litigation, whether the unmasked harasser was judgment-proof, as well as the Streisand Effect on the victim. In this case, the victim felt the mere
unmasking
was sufficient for the situation. Here again, we see a
balancing of
interests to arrive at the best solution—this time the
private interests of
the victim who brought the lawsuit.
As new technologies
impact the privacy–First Amendment legal
arena, courts must always take a balancing approach.
Otherwise, our
overreaction to today’s problem might stamp out tomorrow’s
solution. If
technological evolution has taught us nothing else, it is
that things
change very quickly. By resisting hard-line rules and
maintaining a balanced
approach to new and difficult questions of free speech and
privacy,
we will ensure that our courts can respond to that change.
C. What We Can Learn
from Other Countries: The International Perspective
Proposed solutions to
the problems addressed at this conference
must account for the global nature of online activity. It no
longer makes
sense to think about privacy and online harassment from a
purely national
perspective. The global aspects of online social networks (“OSNs”) such as Facebook and MySpace challenge, and will continue to challenge, the interface between technology and privacy on an
international
scale.
Jacqueline Lipton’s paper and discussion provides an
excellent review
of the current European treatment of OSNs. Her comments make
clear that although the EU has been much more proactive in
addressing
electronic privacy, the EU Directive still leaves open many
unanswered
questions—particularly in the area of OSNs.
The European Union is
not alone: Canada has also emerged as an
early leader in establishing comprehensive electronic
privacy laws. Canada’s
Personal Information Protection and Electronic Documents Act
(“PIPEDA”) [4] broadly prohibits the disclosure of “personal
information
without the knowledge or consent of the individual,” [5]
although that prohibition
is subject to exceptions. On the whole, Canada’s PIPEDA is less restrictive than the EU’s Data Protection Directive, although
both attempt
to balance the interests of online privacy and free speech.
If the United States
decides to implement federal privacy legislation—
as some have advocated, including participants in this
symposium—
then we need not reinvent the wheel. The United States has
much
to learn from its neighbors to the north and the east, and
in crafting our
own federal statutes (if we should choose that route), we
should capitalize
on the EU and Canada’s existing perspectives.
CONCLUSION
Danielle Citron’s work
to elevate the cyber civil rights movement
has raised many important and difficult questions. To be
sure, online
harassment and discrimination against women and others deserve serious attention, and this symposium suggests these issues are
beginning to
attract that attention. We should remember, however, that
our courts and
our country have faced similar problems in the past. No
doubt many of
the participants in this symposium will take the lead in
proposing solutions
to the online harassment problem. Our own country’s
experiences
during the reign of the Ku Klux Klan, the decades-long fight
to end
workplace sexual harassment, and the explosion of technology
during the
last quarter century can offer invaluable perspectives in
that effort.
_______________
Notes:
† Professor,
University of Denver Sturm College of Law; Executive
Director, Privacy
Foundation. B.A., Augustana College, 1970; J.D., Ph.D.,
University of Illinois, 1975. The author
gratefully acknowledges the assistance of Jake Spratt.
1.
http://news.cnet.com/8301-17852_3-10312359-71.html
2. 775 A.2d 756
(N.J. App. 2001). Dendrite involved the anonymous
posting of allegedly
false information about a company’s financial statements
on a Yahoo! message board. The company
sued, and moved to “unmask” the anonymous poster.
3. Professors
Citron and Franks have provided an excellent summary of
the very real harms
posed by online harassment, especially harms to women
and historically marginalized minorities.
4. PIPEDA, S. C.,
ch. 5 (2000) (Can.).
5. Id. § 7(3).
***
BREAKING FELTEN’S THIRD LAW: HOW NOT TO FIX THE INTERNET
PAUL OHM†
I
applaud the Denver University Law Review for organizing a
symposium
around the Cyber Civil Rights work of Danielle Citron,
because
she deserves great credit for shining a light on the
intolerable harms being
inflicted on women every day on online message boards. [1]
Professor
Citron (along with Professor Ann Bartow [2]) has convinced
me of the importance
of the Cyber Civil Rights movement; we urgently need to find
solutions to punish and deter online harassers, to allow the
harassed to
use the Internet without fear.
But although I embrace
the goals of the movement, I worry about
some of the solutions being proposed in the name of Cyber
Civil Rights.
Professor Citron, for example, has suggested mandatory
logfile data retention
for website providers. [3] Suggestions like these remind me
of
something I have heard Professor Ed Felten say on many
occasions: “In
technology policy debates, lawyers put too much faith in
technical solutions,
while technologists put too much faith in legal solutions.”
This
observation so directly hits its mark, I feel compelled to
give it a name:
Felten’s Third Law. [4] For solving problems, lawyers look
to technology,
and techies look to law.
As we try to achieve
the goals of the Cyber Civil Rights movement,
we should break Felten’s Third Law. We lawyers and law
professors
should seek legal and not technical solutions to attack
online harassment.
It is better to try to increase the odds of civil liability
and criminal prosecution
than it is to mandate data retention or order the redesign
of systems.
This, I argue, is the lesson of recent history. The problem
of online
harassment echoes Internet problems that have come before.
Ever since
the masses started colonizing the Internet in the
mid-1990’s, successive
waves of people have been troubled by different kinds of
online speech
and conduct and have tried to restructure both law and
technology in
response.
There is a nice temporal rhythm revealed here, because these
crusades
have happened to ebb and flow with the close and dawn of
decades;
the 1990’s was the decade of pornography and the Aughts was
the
decade of copyright infringement. In case the 2010’s becomes
the decade
of Cyber Civil Rights, we should look to the histories of
porn and copyright
infringement for guidance.
In the 1990’s, many
worried about the problem of porn, and in particular,
worried that children could easily access porn intended only
for
adults. The movement was spurred, at least in part, by a law
review article,
one now notorious for its poorly executed empirical
research. [5] This
article spurred not only a cover story in Time Magazine, [6]
but also action
in Congress. Citing the research on the Senate Floor,
Senator Grassley
introduced a bill, the Protection of Children from Computer
Pornography
Act of 1995. Although this bill did not pass, it paved the
way for a series
of troublesome, ill-conceived laws that followed.
In 1996 Congress
enacted the Communications Decency Act
(“CDA”), [7] which sought broadly to prohibit the posting of
“indecent”
material on the Internet. In 1997, the Supreme Court struck
down the
indecency ban in the landmark First Amendment and Internet
case, Reno
v. ACLU. [8] In response, Congress enacted the Child Online
Protection
Act, [9] which like the CDA was quickly enjoined and
eventually put to its
final death just last year. [10] The legal responses to
online porn were
sweeping, unconstitutional, and after the courts were
finished ruling,
mostly harmless.
Not only did anti-porn
crusaders look to law, but also they turned to
technology and in particular, to Internet filtering
software. Many of them
had hoped that Internet filters would step in where the law
had failed by
technologically preventing access to porn online. Many
companies and
researchers tried to make Internet filters easier to use,
harder to circumvent,
and more accurate. Policymakers tried to force filters onto
computers
and networks, and in 2000, Congress enacted the Children’s
Internet
Protection Act (“CIPA”), [11] which mandates Internet
filtering for indecent
material on computers in public schools and libraries, a law
that is still
on the books.
This is the first
historical marker: The 1990’s, the decade of first legal
and then technical solutions to stamp out Internet porn. But
just as
this crusade began to run out of steam, the next great
online struggle
emerged. In June 1999, as Congress began writing CIPA,
teenager Shawn
Fanning released Napster, the first Internet-wide
peer-to-peer (“p2p”)
system designed specifically for trading music files.
As their anti-porn
crusading counterparts had done before them, the
recording industry has engaged in both legal and technical
campaigns
against p2p copyright infringement. First, it filed
lawsuits. In December
1999, the Recording Industry Association of America (“RIAA”)
sued
Napster. This was only the first in a series, as it sued
many others who
created p2p software and ran p2p networks. Steadily, the
recording industry
won a series of court victories, in the process expanding
interpretations
of copyright law, culminating in the landmark case, MGM v.
Grokster, which held that p2p companies Grokster and
Streamcast could
be held liable for inducing their users to infringe
copyrights. [12]
Evidently unsatisfied
by these victories against providers, in 2003,
the industry embraced another strategy: suits against the
file traders
themselves. This aggressive campaign seems to have been at
least a
qualified success: countless have been threatened, tens of
thousands have
been sued, and at least two have been found liable by
juries. At the very
least, the lawsuits seem to be informing p2p users that
their actions may
have consequences, at least judging from what I have seen in
the press,
blogs, and in my classrooms.
But like the anti-porn
crusaders before them, the anti-p2p copyright
warriors have turned to technical fixes as well as lawsuits.
Most importantly,
the RIAA has searched for ways to deal with online
pseudonymity.
Because our actions online are attached to IP addresses but
not directly
to identities, those who want to stamp out speech or conduct
online
need to find a way to pierce pseudonymity. The recording
industry attacked
Internet pseudonymity in the courts, seeking and often
winning
rulings imposing only low hurdles to unmasking. But they
also began
searching for non-legal solutions, which they still are
searching for today.
To my mind, this is the most problematic phase of the p2p
copyright
war. The RIAA
seems to want to re-architect the Internet to make
pseudonymity
much harder to obtain. For example, it has been arguing for
three strikes laws which would require ISPs to kick off the
Internet any
users who are accused—not proved guilty, merely accused—of
copyright
infringement three times. In addition, the RIAA seems to be
pressuring
ISPs to detect and maybe block copyrighted content traveling
across the
Internet. [13]
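To make the pseudonymity mechanics concrete, here is a minimal Python sketch; it is purely illustrative, and the log format, IP address, subscriber number, and function name are hypothetical rather than drawn from any actual website or ISP. The point is simply that a site's own records tie a post to an address and a time, while only the ISP's subscriber records tie that address to a person, which is why unmasking efforts run through the ISP.

```python
# Illustrative sketch only: hypothetical log entries and subscriber records,
# showing why an IP address identifies an account with an ISP, not a person.
from datetime import datetime

# What a message board operator typically holds: IP address, timestamp, post ID.
access_log = [
    {"ip": "203.0.113.42", "time": datetime(2009, 8, 17, 22, 15), "post_id": 1001},
]

# What only the ISP holds: which subscriber was assigned that address that day.
isp_records = {
    ("203.0.113.42", "2009-08-17"): "subscriber #58213",
}

def unmask(ip: str, when: datetime) -> str:
    """Look up the subscriber behind an address, as an ISP could after a subpoena."""
    return isp_records.get((ip, when.strftime("%Y-%m-%d")), "unknown")

for entry in access_log:
    print(entry["post_id"], "->", unmask(entry["ip"], entry["time"]))
```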
* * *
I’ve hummed a few bars of history to explain why, when I
hear Cyber
Civil Rights advocates calling for technical fixes, I feel
as if I’ve
heard the song before. To be sure, the Cyber Civil Rights
movement differs
in important ways from the anti-porn crusades of the 90’s
and the
anti-p2p wars of the Aughts: Most obviously, the harms
described by
scholars like Professor Citron are fundamentally and
meaningfully different
from the purported harms in these other skirmishes. The
subjugation
and terrorization of women Professor Citron describes is a
much
more significant problem than the problems of porn or
copyright infringement
online, at least according to the best arguments I have seen
for each. But
once we move past harms to solutions, we can spot many
similarities
and learn many lessons. In past campaigns to squelch
problematic
categories of Internet speech, legal solutions have ranged
from the scary-but-never-implemented (CDA) to the quixotic and wasteful but
mostly
harmless (RIAA lawsuits). The trend suggests that so long as
the Cyber
Civil Rights movement focuses on legal solutions—on bringing
lawsuits
against harassers, encouraging criminal prosecutions of the
worst offenders,
and in rare cases, actions against message board operators
who benefit
from the harassment—it might find workable solutions without
doing
too much harm.
In contrast, technical
solutions too often lead to unintended consequences.
Anyone who has ever struggled to use a computer with an
Internet
filter, cursing at the false positives and giggling at the
false negatives,
can breathe a sigh of relief that the anti-porn crusaders
never convinced
anyone to place filters deep inside the network itself.
Likewise,
we should worry about the recording industry’s plans for ISP
filtering
and three strikes laws as overbroad, disproportionate
measures. If
anything, technical solutions may be even less likely to
succeed
against the problem of online harassment than in the earlier
battles. Music
and porn travel through the Internet as large files, which
can be easy
to identify through fingerprinting and filtering. In
contrast, Cyber Civil
Rights harms often involve threats and harassment buried in
small snippets
of text whose threatening nature must be judged by a person
not a
machine. For all of these reasons, we should be deploying
surgical
strikes, not napalm.
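The asymmetry between files and text can be sketched in a few lines of Python. This is a hedged illustration only: a cryptographic hash stands in crudely for the perceptual fingerprinting actually used against music and video, and the byte strings and example post are invented. A known file can be matched against a list of fingerprints, but a short text snippet has no comparable signature, and its threatening character depends on context that a filter cannot see.

```python
# Simplified illustration: hash matching can flag a known file, but a short
# text post has no stable signature and needs human judgment in context.
import hashlib

known_fingerprints = {hashlib.sha256(b"<bytes of a copyrighted track>").hexdigest()}

def matches_fingerprint(file_bytes: bytes) -> bool:
    """Flag content whose hash matches a known file, workable for music or video."""
    return hashlib.sha256(file_bytes).hexdigest() in known_fingerprints

print(matches_fingerprint(b"<bytes of a copyrighted track>"))  # True
print(matches_fingerprint(b"<bytes of a home video>"))         # False

# The same words can be a threat or a joke depending on speaker, target, and
# thread, so there is nothing comparable to hash against for harassing text.
post = "see you outside your office tomorrow"
```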
In particular, I am
most concerned about calls to increase identifiability
and decrease pseudonymity, such as calls for mandatory data
retention.
I have many problems with these proposals, based in concerns
about overdeterrence, chilling effects, and threats to the
fundamental
nature of the Internet. For now, let’s focus on only one
problem, the
Hermit Crab Problem. You build this beautiful structure; it does a very good job of providing you the shelter and food you need, but you wake up one morning to find that some other creature has moved into it. If it
were to become harder to hide on the Internet, not only would this make it easier to root out Cyber Civil Rights harms, but it would also become easier to stamp out any type of disfavored Internet speech. It’s
what I have heard Deirdre Mulligan call The Fully-Identified
Internet
Problem. If the Cyber Civil Rights movement ever brings a
fully-identified
Internet proposal to Congress, the copyright warriors will
be
sitting in a back row of the hearing room, quietly cheering
them on.
Across the room, the anti-porn crusaders will be doing the
same. Others
will be there too, such as those who dislike dissidents and
whistleblowers.
We can’t empower only one of these groups without empowering
them all.
Forget technical solutions. Build a Cyber Civil Rights
movement,
and use it to propose solutions to the problems of online
hate and harassment,
but focus those solutions on the narrow, surgical tools
afforded
by law, including many of the creative legal proposals
presented elsewhere
in this symposium.
_______________
Notes:
† Associate
Professor, University of Colorado Law School. I thank
Viva Moffat, Mike
Nelson, and Jake Spratt of the University of Denver
Sturm College of Law for inviting me to the
symposium.
1. Danielle Keats
Citron, Cyber Civil Rights, 89 B.U. L. REV. 61 (2009); Danielle Keats
Citron, Law’s Expressive Value in Combating Cyber
Gender Harassment, 108 MICH. L. REV. 373
(2009).
2. Ann Bartow, Internet Defamation as Profit Center: The
Monetization of Online Harassment,
32 HARV. J.L. & GENDER 383 (2009).
3. Citron, Cyber
Civil Rights, supra note 1, at 123 (describing a
standard of care called
“traceable anonymity” which would “require website
operators to configure their sites to collect and
retain visitors’ IP address”). At the symposium,
Professor Citron remarked that she has begun to
rethink her call for mandatory data retention.
4. I’m not sure
what Ed Felten’s first two laws are, but because he has
said so many wise
things, I am hedging my bets by calling this his third
law.
5.
Martin Rimm, Marketing Pornography on the Information
Superhighway, 83 GEO. L.J.
1849 (1995). The Rimm study was widely criticized. For
an example of the criticism, see Donna L.
Hoffman & Thomas P. Novak, A Detailed Analysis of the
Conceptual, Logical, and Methodological
Flaws in the Article: “Marketing Pornography on the
Information Superhighway,” July 2, 1995
(version 1.01), available at
http://w2.eff.org/Censorship/Rimm_CMU_Time/rimm_hoffman_novak.critique.
6. Philip
Elmer-Dewitt, Cyberporn, TIME, July 3, 1995.
7. Pub. L. No.
104-104, 110 Stat. 56 § 502 (Feb. 8, 1996).
8. 521 U.S. 844
(1997).
9. Pub. L. 105-277, § 1403, 112 Stat. 2681-736 (Oct. 21,
1998).
10. ACLU v. Mukasey, 534 F.3d 181 (3d Cir. 2008), cert.
denied, 129 S. Ct. 1032 (2009).
11. Pub. L. No.
106-554, 114 Stat. 2763A-335 (Dec. 21, 2000).
12. 545 U.S. 913
(2005).
13. Paul Ohm, The Rise and Fall of Invasive ISP
Surveillance, 2009 U. ILL. L. REV. 1417.
***
WHO TO SUE?: A BRIEF
COMMENT ON THE CYBER CIVIL
RIGHTS AGENDA
VIVA R. MOFFAT†
Danielle Citron’s
groundbreaking work on cyber civil rights raises a
whole variety of interesting possibilities and difficult
issues. [1] In thinking
about the development of the cyber civil rights agenda, one
substantial
set of concerns revolves around a regulatory question: what
sorts of
claims ought to be brought and against whom? The spectrum of
options
runs from pursuing currently-existing legal claims against
individual
wrongdoers to developing new legal theories and claims to
pursuing either
existing or new claims against third parties. I suggest
here—very
briefly—that for a variety of reasons the cyber civil rights
agenda ought
to be pursued in an incremental manner and that, in
particular, we ought
to be quite skeptical about imposing secondary liability for
cyber civil
rights claims.
Citron has argued very
persuasively that online harassment, particularly
of women, is a serious and widespread problem. I will not
describe
or expand upon her claims and evidence here, but for the
purposes of this
brief essay, I assume that online harassment is a problem.
Determining
what, if anything, to do about this problem is another
matter. There are a
variety of existing legal options for addressing online
harassment. Victims
of the harassment might bring civil claims for defamation or
intentional
infliction of emotional distress. [2] Prosecutors might,
under appropriate
circumstances, indict harassers for threats or stalking or,
perhaps,
conspiracy. [3] These options are not entirely satisfactory:
because of IP
address masking, wireless networks, and other technological
hurdles,
individual wrongdoers can be difficult, if not impossible,
for plaintiffs
and prosecutors to find. Even if found, individual
wrongdoers might be
judgment-proof. Even if found and able to pay a judgment,
individual
wrongdoers may not be in a position to take down the
offending material,
and they are certainly not in a position to monitor or
prevent similar bad
behavior in the future.
Thus there are reasons
to pursue secondary liability—against ISPs,
website operators, or other online entities. Current law,
however, affords
those entities general and broad immunity for the speech of
others. Section
230 of the Communications Decency Act provides that “No
provider
or user of an interactive computer service shall be treated
as the publisher
or speaker of any information provided by another
information content
provider.” [4] This provision has been interpreted broadly
such that ISPs,
website operators, and others are not indirectly liable for
claims such as
defamation or intentional infliction of emotional distress.
[5] The statute
provides a few exceptions, for intellectual property claims,
for example. [6]
The proponents of the cyber civil rights agenda have
proposed that additional
exceptions be adopted. For example, Mary Anne Franks has
analogized online harassment to workplace harassment and
suggested
that Section 230 immunity ought to be eliminated for website
operators
hosting harassing content. [7]
Notwithstanding the
force of the arguments about the extent of the
problem of online harassment and the reasons for imposing
third party
liability, I suggest that claims for indirect liability
ought to be treated
with skepticism for a variety of reasons. [8]
First, it is unclear
whether the imposition of third party liability is
likely to be effective at reducing or eliminating the
individual bad behavior
that is problematic. Secondary liability would presumably
entail some
proof of, for example, the third party’s ability to control
the wrongful
behavior or the place in which that behavior occurred, the
third party’s
knowledge of the bad behavior, or the third party’s
inducement of the
harassment (or some other indicia of responsibility of the
third party). If
this is so, it is easy to imagine that third parties—ISPs,
website operators,
and so on—who wish to avoid imposition of secondary
liability or who
wish to encourage or permit the “Wild West” behavior online
will take
measures to avoid findings of ability to control, of
knowledge, or of inducement.
Website operators might, for example, employ terms of use
that strongly condemn online harassment and that require
that users indemnify
the website operators. ISPs might adopt strategies that
effectively
reduce or eliminate any “knowledge” the entity might have of
what occurs on the site. The third parties might design
their operations
such that they cannot control user-created content, much as
the filesharing
services and peer-to-peer networks did in the wake of the
RIAA’s pursuit of secondary liability claims.
Having just postulated
that indirect liability may be ineffective, my
second concern may seem contradictory: it may be overbroad.
The collateral
consequences of imposing secondary liability for
user-generated
content are enormous. As many have pointed out, third party
liability
may very well have substantial chilling effects on speech.
Even if individual
wrongdoers are willing to put their views out in the world,
website
operators and ISPs are likely to implement terms of use,
commenting
policies, and takedown procedures that are vastly overbroad.
This is not
to say that there are no collateral consequences, such as
chilling effects
on speech, from the imposition of direct liability, but only
to speculate
that such effects are potentially greater as a result of
third party liability.
Third, to the extent
that the cyber civil rights agenda entails (and perhaps
emphasizes) a norm-changing enterprise, it seems at least
possible
that claims of indirect liability are less likely to be
effective in that regard.
Revealing individual bad behavior and pursuing that
wrongdoer
through the legal system represents a straightforward
example of the
expressive value of the law at work: public condemnation of
wrongful
behavior. Claims for indirect liability are less likely to
allow for such a
straightforward story. Many (though not all) website
operators and ISPs
are engaged in very little behavior that is easily
categorized as wrongful.
Instead, third party liability of those entities is
justified on other grounds,
such as the entity’s ability to control the online behavior,
the receipt of
benefits from the bad behavior, or knowledge of the
harassment. Attempts
to hold these entities liable may not serve the expressive
value of
changing the norms of online behavior because in the vast
majority of
instances people are less likely to be convinced that the
behavior by the
third party was, in fact, wrongful. [9] In short, the
argument that the imposition
of third-party liability will change norms about individual
online
behavior strikes me as speculative.
Finally, a number of
the reasons that victims might pursue claims
against third parties simply are not sufficient to justify
imposition of such
liability. One might seek third party liability because
individual wrongdoers cannot be found or because those individual wrongdoers are judgment-proof. Neither reason, though understandable, is sufficient.
As a
descriptive matter, third party liability in general is
rarely or never imposed
solely for one of those reasons. As a fairness matter, that
is the
right result: it would be inequitable to hold a third party
liable solely because
the wrongdoer cannot be found or cannot pay.
Each of the concerns
sketched out above applies either to a lesser
extent or not at all to the pursuit of direct liability
claims, civil or criminal.
While there are other problems with efforts to seek redress
against
individual wrongdoers, that is the more fruitful path for
the development
of the cyber civil rights agenda.
_______________
Notes:
† Assistant
Professor, University of Denver Sturm College of Law.
J.D., University of
Virginia Law School; M.A., University of Virginia; A.B.,
Stanford University.
1. Danielle Keats
Citron, Cyber Civil Rights, 89 B.U. L. REV. 61 (2009);
Danielle Keats
Citron, Law’s Expressive Value in Combating Cyber Gender
Harassment, 108 MICH. L. REV. 373
(2010).
2. See Citron, Cyber Civil Rights, supra note 1, at
86–89.
3.
Id. See, for example, the indictment of Lori Drew for
conspiracy and violation of the
Computer Fraud and Abuse Act, 18 U.S.C. § 1030. (The
indictment is available online at
http://www.scribd.com/doc/23406509/Indictment.) Drew was
eventually acquitted by the judge in
the case. See Rebecca Cathcart, Judge Throws out
Conviction in Cyberbullying Case, N.Y. TIMES,
July 2, 2009, available at http://www.nytimes.com/2009/07/03/us/03bully.html?_r=1&scp=4&sq=lori%20drew&st=cse.
4. 47 U.S.C. §
230(c)(1) (2006).
5. For a summary
of the development of the CDA’s immunity provisions, see
H. Brian
Holland, In Defense of Online Intermediary Immunity:
Facilitating Communities of Modified Exceptionalism,
56 U. KAN. L. REV. 369, 374–75 (2008) (“[C]ourts have
consistently extended the reach
of § 230 immunity along three lines . . . .”).
6. The statute
provides exceptions for intellectual property claims,
federal criminal enforcement,
and a few others. 47 U.S.C. § 230(e) (2006). Third party
liability for intellectual property
claims is also regulated and partly immunized. See 17
U.S.C. § 512 (2006).
7. Mary Anne
Franks, Unwilling Avatars: Idealism and Discrimination
in Cyberspace,
COLUM. J. GENDER & L. (forthcoming Feb. 2010), available
at
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1374533.
8. On the other
hand, I have much less concern about the vigorous
pursuit of claims against
individual wrongdoers.
9. In the course
of representing a student sued by the RIAA for uploading
digital music file
in violation of the Copyright Act, I asked her if she
had heard of the Napster opinion (A&M Records,
Inc. v. Napster, Inc., 239 F.3d 1004 (9th Cir. 2001)
(notably, for purposes of this anecdote, an
indirect liability case)). She said, “Yes, but I used
Gnutella.” The suit for indirect liability obviously
didn’t have the expressive value for that student that
the recording industry might have hoped.
***
UNREGULATING ONLINE HARASSMENT
BY ERIC GOLDMAN†
INTRODUCTION
I learned a lot from
Danielle Keats Citron’s articles Cyber Civil
Rights [1] and Law's Expressive Value in Combating Cyber
Gender Harassment.
[2] I realized that women are experiencing serious harms online
that men—including me—may be unfairly trivializing. I was
also convinced
that, just like the 1970s battles over workplace harassment
doctrines,
we will not adequately redress online harassment until we
first
acknowledge the problem.
However, finding
consensus on online harassment’s normative implications
is trickier. Online harassment raises cyberspace’s standard
regulatory challenges, including:
- Defining
online harassment, which may range from a coordinated
group attack by an “online mob” to a single individual
sending a single
improper message.
- Dealing with
anonymous or difficult-to-identify online harassers.
- Determining
how online harassment differs from offline harassment
(if at all) [3] and any associated regulatory
implications.
- Deciding if it
makes more sense to regulate early or late in the
technological evolution cycle (or never).
- Allocating
legal responsibility to intermediaries.
PROTECTING
SECTION 230
In 1996, Congress
addressed the latter issue in the Communications
Decency Act, 47 U.S.C. § 230, which provides a powerful
immunity for
websites (and other online actors) from liability for
third-party content or
actions. Empowered by this immunity, some websites
handle user-
generated content (“UGC”) in ways that may facilitate
online harassment,
such as tolerating harassing behavior by users or
deleting server
logs of user activity that could help identify
wrongdoers. As frustrating
as these design choices might be, they are not
surprising, nor are they a
drafting mistake; instead, they are the logical
implications of Congress
conferring broad and flexible immunity on an industry.
Though we might
question Congress’ understanding of UGC in
1996, it turns out Congress made a great (non)regulatory
decision. Congress’
enactment of § 230 correlates with the beginning of the
dot com
boom—one of the most exciting entrepreneurial periods
ever. Further,
the United States remains a global leader in UGC
entrepreneurial activity
and innovation; note that many of the most important new
UGC sites
founded in the past decade (such as Facebook and
YouTube) were developed
in the United States. Although I cannot prove causation,
I
strongly believe that § 230 plays a major role in both
outcomes.
Frequently, §
230’s critics do not attack the immunization generally,
but instead advocate a new limited exception for their
pet concern. As
tempting as minor tweaks to § 230 may sound, however, we
should be
reluctant to entertain these proposals. Section 230
derives significant
strength from its simplicity. Section 230’s rule is
actually quite clear:
except for three statutorily enumerated exceptions
(intellectual property,
federal crimes and the Electronic Communications Privacy
Act), websites
are not liable for third-party content or
actions—period. Creative
and sympathetic plaintiffs have made countless attempts to get around §
to get around §
230’s immunity, but without any meaningful success. [4]
Given the immunity’s
simplicity, judges have interpreted § 230 nearly
uniformly to shut
down these attempted workarounds. Increasingly, I notice
that plaintiffs
omit UGC websites as defendants knowing that § 230 would
moot that
claim.
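The simplicity of that rule can be seen in a deliberately toy sketch; the Python below is an illustrative simplification of this characterization of § 230, not a restatement of the statute, and the function and labels are invented for the example. Immunity turns on two short questions: is the content third-party, and does the claim fall within one of the few enumerated exceptions?

```python
# Toy model of the section 230 rule as characterized above; illustrative only.
EXCEPTIONS = {"intellectual property", "federal criminal law", "ECPA"}

def website_immune(claim_type: str, content_is_third_party: bool) -> bool:
    """Return True if the website is immune under this simplified model."""
    if not content_is_third_party:
        return False  # a site's own content falls outside the immunity
    return claim_type not in EXCEPTIONS

print(website_immune("defamation", True))             # True: immune
print(website_immune("intellectual property", True))  # False: enumerated exception
```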
Operationally, § 230 gives “in the field” certainty to
UGC websites.
Sites can confidently ignore meritless demand letters
and nastygrams
regarding UGC. Section 230 also emboldens UGC websites
and entrepreneurs
to try innovative new UGC management techniques without
fear of increased liability.
Any new exceptions
to § 230, even if relatively narrow, would undercut
these benefits for several reasons. First, new
exceptions would
reduce the clarity of § 230’s rule to judges. Second,
service providers
will be less confident in their immunity, leading them
to remove content
more frequently and to experiment with alternative
techniques less.
Third, plaintiffs’ lawyers will try to exploit any new
exception and push
it beyond its intent. We saw this phenomenon in response
to some plaintiff-
favorable language in the Ninth Circuit’s Fair Housing
Council of
San Fernando Valley v. Roommates.com en banc ruling. [5]
Judges have
fairly consistently rejected plaintiffs’ expansive
interpretations of
Roommates.com, [6] but only at significant additional
defense costs.
CONCLUSION:
EDUCATION AND “NETIQUETTE”
While the debate
about regulating intermediaries’ role in online
harassment
continues, education may provide a complementary—or
possibly
substitutive—method of curbing online harassment. On
that front, we
have much progress to make. For example, most current
Internet users
started using the Internet without any training about
bullying, online or
offline. Not surprisingly, some untrained users do not
make good
choices.
However, future generations of Internet users will have
the benefit
of education about bullying. For example, my
seven-year-old son is
learning about bullying in school. The program [7]
teaches kids—even first
graders—not to bully each other or tolerate being
bullied. It even shows
kids how to deal with bullies proactively. Anti-bullying
programs like
this may not succeed, but they provide a reason to hope
that online harassment
will abate naturally as better trained Internet users
come online.
________________
Notes:
† Associate
Professor and Director, High Tech Law Institute, Santa
Clara University School
of Law. Website: http://www.ericgoldman.org. Email:
egoldman@gmail.com.
1. Danielle Keats
Citron, Cyber Civil Rights, 89 B.U. L. REV. 61 (2009).
2. Danielle Keats
Citron, Law's Expressive Value in Combating Cyber Gender
Harassment,
108 MICH. L. REV. 373 (2009).
3. Compare Noah v.
AOL Time Warner Inc., 261 F. Supp. 2d 532 (E.D. Va.
2003) (holding
that Title II discrimination claims do not apply to
virtual spaces such as AOL chatrooms), aff’d No.
03-1770, 2004 WL 602711 (4th Cir. Mar. 24, 2004) (per
curiam), with Nat’l Fed. of the Blind v.
Target Corp., 452 F. Supp. 2d 946 (N.D. Cal. 2006)
(allowing an ADA claim against a non-ADA
compliant retailer’s website based on the interaction
between the retailer’s physical store and its
online activity).
4. See, e.g., Nemet
Chevrolet, Ltd. v. Consumeraffairs.com, Inc., 591 F.3d
250, 258 (4th Cir.
2009) (holding that § 230 shielded Consumeraffairs.com
because plaintiff failed to establish that
Consumeraffairs.com constituted an information content
provider by exceeding its "traditional
editorial function.").
5. Fair Hous. Council of
San Fernando Valley v. Roommates.com, 521 F.3d 1157 (9th
Cir.
2008) (en banc).
6. As of December
31, 2009, I have tracked 13 cases citing Roommates.com,
11 of which
have done so while ruling for the defense. See Posting
of Eric Goldman to Technology & Marketing
Law Blog, Consumer Review Website Wins 230 Dismissal in
Fourth Circuit—Nemet Chevrolet v.
ConsumerAffairs.com, http://blog.ericgoldman.org/archives/2009/12/consumer_review_1.htm.
(Dec.
29, 2009, 14:53 PST).
7. See Project
Cornerstone,
http://www.projectcornerstone.org.