This Monday, September 27th, a French court ruled against Google and its CEO, Eric Schmidt, in a criminal defamation action. The judgment requires Google to pay the plaintiff 5,000 euros. Google and Schmidt plan to appeal.
The case is based upon Google’s Suggest function. Google Suggest automatically completes search queries by suggesting search terms that have been popular with other users who typed the same initial word.
So if a person typed in “Britney,” Google Suggest might suggest “Spears” to complete the search query — because so many other users, after typing in “Britney,” also typed in “Spears.” Among other advantages, this function saves typing time and gives users a chance to correct misspellings.
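To make the mechanism concrete, here is a minimal Python sketch of prefix completion ranked by popularity. The function name, the sample queries, and the ranking rule are all hypothetical, invented for illustration; Google's actual suggestion algorithm is proprietary and far more sophisticated.

```python
from collections import Counter

# Hypothetical log of past user queries (illustrative data only).
past_queries = [
    "britney spears", "britney spears", "britney spears tour",
    "british museum", "bring me the horizon",
]

def suggest(prefix, queries, k=3):
    """Return up to k past queries beginning with `prefix`,
    most popular first."""
    counts = Counter(q for q in queries if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(suggest("brit", past_queries))
```

On this toy data, typing "brit" surfaces "britney spears" first, because it is the most frequent matching past query; that popularity-driven behavior, applied to a real person's name, is what produced the suggestion at issue in the French case.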
This action arose because, when Google Search users typed in the name of a particular man (known for purposes of the French case as “Mr. X”), Google Suggest offered terms such as “rapist” to complete the “Mr. X” search.
Earlier this year, Mr. X was convicted of corruption of a minor, but acquitted on a rape charge. In addition, Mr. X reportedly plans to appeal his corruption-of-a-minor conviction, which is not yet final.
In this column, I’ll explain why this kind of result would not occur under American defamation law, which is heavily influenced by the First Amendment. Also relevant here is a U.S. statutory provision that grants websites immunity for hosting third-party content that they play no role in creating or developing.
Under U.S. Law, a Search Suggestion Made By Google Does Not Fulfill the Elements of Defamation
Readers may be surprised to learn that, like France, the United States still has criminal libel laws on the books — not at the federal level, but in a number of states. (This compilation by The First Amendment Center details these laws, state by state.)
But realistically, in the United States, this dispute would have yielded a civil suit, not a criminal action — since, in practice, state prosecutors leave these matters for civil actions. And that civil suit, as I will explain, would have been unsuccessful.
Under U.S. law, to prove a civil defamation claim, the plaintiff must show that the defendant (1) made a false statement of fact (2) with the requisite state of mind (which varies depending, for instance, on whether the plaintiff is a public figure) (3) that caused damage to the plaintiff.
In this column, I’ll consider the first and third requirements — showing why, if U.S. law had applied, there would be numerous, overlapping problems with Mr. X’s ability to succeed in his case and gain a monetary award, as he did in France.
With respect to the second requirement — state of mind — this element might not pose a problem for Mr. X if the other requirements were fulfilled, and if he sought damages only from the time when he alerted Google that he believed the Google Suggest terms defamed him. From that point on, Google was on notice.
The False-Statement-of-Fact Requirement and the “Substantial Truth” Doctrine
First, and fundamentally, in Mr. X’s case, it is not clear that any statement at all was made by Google Suggest about Mr. X. To the contrary, Google Suggest was simply posing a question to the searcher (albeit in shorthand): “Do you want to search on ‘Mr. X’ and ‘rapist,’ as many other users have done before you?”
Questions cannot be defamatory; only statements can. (Granted, there might be an exception for a clear rhetorical question, but that’s not this case.)
And, if conjoining the words “Mr. X” and “rapist” makes a statement, what precisely is that statement? Can it even be a statement without a verb?
One might be tempted to say that, here, the verb is implied, and the statement is “Mr. X is a rapist.” But in other contexts, “rapist” might come up, via Google Suggest, in conjunction with, say, a rape prosecutor or rape victim, and the guess of “is” for the verb would be dead wrong.
Thus, unless the Google Search user is relying on outside knowledge, he or she should not jump to conclusions. And to protect First Amendment rights, the legal system ought to take the reasonable user as its guide when deciding how a Google Suggest result will be interpreted — not the user who is prone to jump to conclusions, interpolate words, and assume facts, rather than clicking through to find out for himself or herself what the truth really is.
Moreover, with respect to the false-statement element, one might be able to argue here that the term “rapist” is substantially true — true enough, that is, for defamation-law purposes. U.S. defamation law requires only rough truth, not exact truth. Thus, if Mr. X were to lose on appeal, and if the facts underlying his “corruption of a minor” conviction were themselves to establish an offense close to, or nearly as heinous as, rape — and, to be clear, I don’t know whether that is the case here — then the “substantial truth” doctrine might also provide a strong argument for Google.
Under U.S. Law, the Calculation of Damages Would Take Into Account Mr. X’s Already-Sullied Reputation
Additionally, absent a reversal of his corruption-of-a-minor conviction, it might be very hard for Mr. X to show that any significant damage has been caused to him by Google Suggest’s suggestions.
That’s because the simple fact of Mr. X’s conviction for corruption of a minor has already significantly damaged his reputation. Thus, any damages verdict here would reflect only the additional damage that Google Suggest’s reference to rape has caused Mr. X, on top of the corruption-of-a-minor conviction.
(Of course, things could be very different if Mr. X were to prevail on appeal and get his corruption-of-a-minor conviction reversed; then his damages could be substantial and his situation could be very sympathetic.)
In the United States, it is very rare — almost unheard of — to see a case like this, where the plaintiff was convicted of a serious offense, and yet is arguing that he is defamed by the claim that he committed a still more serious offense.
Plaintiffs’ lawyers are unenthused, to say the least, about cases that require them to seek money for convicts, and that will inevitably make it seem as if the plaintiff is trying to minimize the crime he committed. Here, where the victim on the corruption count was a minor, potential attorneys would assess the prospects for such a case as especially weak.
Finally, there is also another interesting damages issue here — and, again, it cuts against Mr. X’s case. Presumably, many Google Search users actually performed the search that was suggested by Google Suggest, and then read the material about Mr. X that resulted. After all, they wouldn’t have typed in “Mr. X” in the first place unless they were interested in learning more about him. And that material likely includes far more detail than the mere words “Mr. X” and “rapist.”
Thus, this material — the search result itself — arguably ought to supersede those two verb-less words, “Mr. X” and “rapist,” in the searcher’s mind. Put another way, the actual search results have a very strong tendency to overwhelm and displace any belief that the suggested search terms themselves might have created. Thus, if any damages occur, they are best traced to the articles, blog postings, and the like that appear in the search results, not to the Google Suggest result that led to the search.
In the U.S., Section 230 of the Communications Decency Act Would Likely Protect Google
In the U.S., Google would also have benefited from Section 230 of the Communications Decency Act (CDA). Section 230 renders websites immune from claims — including defamation claims — that are based on third-party content that the websites host. But websites can lose that immunity if they play a role in developing the content at issue.
(Importantly, too, the main purpose of Google Suggest is obviously not to enable defamation. If it were, then under U.S. law, precedents such as those that resulted from the controversy over Napster and file-sharing could apply. When an entire site or function is created with the core purpose of lawbreaking, courts may pause before letting the defendant off scot-free. But that is not remotely the case here. Courts are far more comfortable with lawbreaking that is an inadvertent byproduct of law-abiding sites’ workings than they are with sites that, from the very start, take aim at the law. And no one contends that Google Suggest was created with enabling defamation in mind.)
Google has already procured a judgment in its favor, under Section 230, with respect to a fraud claim that was based on third-party content that had appeared on Google AdWords (the small ads that appear to the right of the Gmail screen, and on numerous blogs and other sites). It seems very likely that U.S. courts would reach the same conclusion with respect to Google Suggest, similarly granting it Section 230 protection.
But there is an interesting difference here — one that isn’t significant under current law, but might be significant under future law: If Google can’t be sued based on terms that appear on Google Suggest, then no one can. And that may be the kind of situation that the drafters of Section 230 didn’t anticipate.
If a Google AdWords advertisement contains a libel, then the target of the libel can go after the advertiser in court. But if a Google Suggest search suggestion contains a negative word, and Google itself is immune from liability, whom can the target go after for defamation? Every Google user who typed in the term? That’s hardly feasible or fair.
This situation suggests that current U.S. law provides a right — the right to sue for libel — that is not matched with a remedy, in the case of Google Suggest. In my view, that’s just fine: If, as I have suggested, Google Suggest doesn’t ever make a statement, then it’s appropriate that Google Suggest is libel-proof. After all, libel requires a statement as one of its essential elements.
But some observers and regulators may be uncomfortable if such an influential search tool — and, potentially, others like it — are rendered essentially libel-proof — which raises the possibility of future legislation in this area. Moreover, even if we don’t see legislation in this area, we are likely someday to see legislation that grapples, more generally, with the reality that reputations can be made and lost online — and sometimes unfairly so.
JULIE HILDEN practiced First Amendment law at the D.C. law firm of Williams & Connolly from 1996-99. She is the author of a memoir, The Bad Daughter, and a novel, Three. She can be reached through her website.
This column originally appeared on Findlaw’s Writ.