In a recent column, I discussed a New York state-court decision dismissing a defamation suit that was based on a series of posts to a private Facebook group — one with a very small set of members, all of whom were approved by two member-administrators.
In this column, I will discuss, more generally, how and why I expect future Facebook defamation cases (if they arise, and I think they may) to differ importantly from defamation cases that take issue with statements that are made in the print and/or online media.
For today, I’ll put aside the interesting question of whether someone posting on Facebook, or another social-media site, could actually count as a journalist — and could thus invoke the law’s journalist’s privilege for confidential sources — with respect to certain postings that he or she has disseminated. But with many users already getting a lot of their news through Facebook’s “News Feed” and relying on each other’s movie and book reviews, I think the question is far from a frivolous one.
The Role that Facebook’s Character Limits May Play
In an earlier column concerning defamation and Twitter, I noted that Twitter’s severe character-limit for tweets might be harmful for defamation defendants — because the character limit might make it difficult for them to assert the defense of “opinion based upon disclosed fact.”
This defense will also be harder to raise with respect to Facebook postings, which are subject to character limits of their own — though the effect is likely to be less pronounced, because Facebook’s limits are more generous than Twitter’s.
Under the opinion-based-on-disclosed-fact doctrine, if true facts are disclosed, then an opinion that is openly and solely based on those true facts cannot be grounds for a defamation suit. That’s because, in that situation, the reader can judge for herself whether to agree with the opinion that’s been expressed, in light of the true facts that have been conveyed to her. And the First Amendment fully protects opinion; defamation law’s only proper concern is false facts.
The question now — with both Twitter and Facebook — is whether there is room for a tweeter or poster to include all the true facts that are needed to invoke the opinion-based-on-disclosed-facts defense, and to make clear that these true facts are the only basis for his or her opinion.
Moreover, the issue is further complicated by the fact that some tweets and posts make private or semi-private references that not every “follower” or “friend” will understand.
Consider, for instance, this hypothetical Facebook post: “Wow, John was a crazy man at last night’s party. You’re seriously turning into an alcoholic, dude.”
For those who attended last night’s party, this posting is tantamount to an opinion based on disclosed fact: They saw what happened at the party; they know the true facts; and they can draw their own conclusions about whether John is turning into an alcoholic.
But what about those who didn’t attend the party, and who only see the posting? All they may take away from the posting is the conclusion that John is an alcoholic — which, for them, reads almost as if it were the poster’s statement of fact.
(One feature of defamation law is that a statement like “John is an alcoholic” can function as either a statement of fact — if it’s by itself — or as a statement of constitutionally-protected opinion, if it is the conclusion following a set of facts.)
The odd result is that, on Facebook, the very same posted statement may be defamatory with respect to some readers, but not with respect to others. (Indeed, in this example, some readers may not even know which “John” is being referred to.)
But that conclusion is not actually as odd as it sounds — because defamation is all about reputational damage, and this hypothetical posting has damaged John in the eyes of some (the non-partygoers), but not of others. (Granted, partygoers may also think John is an alcoholic based on their own direct observations, but if so, it is not the poster’s fault.)
The Effect of Being Able to Specifically Identify Those Who Have Read a Given, Allegedly Defamatory Statement
Another way that Facebook is very different from the print and online media is that it is very easy to know exactly who may have read a given statement on Facebook.
No one is going to call up every one of a newspaper’s readers and find out how each of them interpreted a particular story that is alleged to be defamatory. It’s much too time-consuming and expensive. Moreover, news websites may not even know their readers’ names, since access is often free and comments are often made under pseudonyms.
In addition, only a tiny fraction of a newspaper’s or news website’s readers will have personal knowledge relating to a given news story — a far cry from the situation with the more personal types of Facebook posting.
Thus, in the past — with newspapers as the paradigm, and damages hard to prove — defamation-damages law focused on the plaintiff’s emotional pain and the defamer’s need for punishment. Sometimes, the plaintiff could also prove that damage to his reputation hurt him in his business dealings. Also, for certain categories of statements, which were seen as inherently especially harmful, damages were presumed to have occurred.
In sharp contrast, in suits involving Facebook damages, one could — at least in theory — interview all the plaintiff’s “friends” to see if they saw the posting at issue; if they believed it; if it changed their view of the plaintiff; if they passed on the information to others; if they would now be less likely to recommend the plaintiff for a job; and so forth.
If such questions were asked of everyone who saw a particular posting, then the presumptions that the law makes could be replaced with actual evidence. Moreover, the focus on punishment and pain, when it comes to defamation damages, could be replaced by a focus on what was supposed to be the core concern of defamation law all along: damage to reputation.
The Effect of the Ability to Respond, in Comments or Otherwise — Just Cause for A Facebook-Defamation Waiver?
Finally, the limited universe of a Facebook “friend” group also raises another question: Should Facebook institute a response rule — under which users agree to waive their right to sue other users for Facebook defamation, in exchange for the right to respond to any statement that they think is defamatory by contacting the very same set of “friends” who originally saw the allegedly defamatory statement?
The “comment” function on Facebook already allows such responses if the alleged defamer and the alleged victim are Facebook “friends” — though comments can also be deleted by the poster of the original statement. What this new mechanism would add, then, would be the target’s right to access the defamatory speaker’s friends, in order to respond and potentially to clear his or her name.
Under current statutes, in some states, timely retractions by a person or entity that publishes defamation can reduce damages, or even eliminate the possibility of punitive damages altogether. A Facebook waiver provision could go much further, with users completely waiving the right to sue, but gaining the right to reach those who believed a lie, and tell them the truth — and even to post supporting evidence on Facebook if needed.
It’s always been a little strange for our defamation-law system to compensate reputational damage with money. Matching speech with counter-speech makes more sense, and Facebook has the specific ability to match a speaker to a specific audience. Thus, one way to end Facebook defamation suits, if they do arise, would be simply to moot them by agreement. In this way, Facebook users could create a litigation-free zone, and ensure that disputes on Facebook stay on Facebook — rather than leaking into the courts and chilling Facebook users’ speech in the bargain.
JULIE HILDEN practiced First Amendment law at the D.C. law firm of Williams & Connolly from 1996–99. She is the author of a memoir, The Bad Daughter, and a novel, Three. She can be reached through her website.