I should have been thrilled. And I was, for five minutes. “Your book about psychiatric diagnosis was cited in the latest United States Supreme Court decision,” read a colleague’s email message to me.
For five minutes I felt gratified, thinking my report that many psychiatric diagnostic categories are unscientific had been helpful. Then I saw that the Clark v. Arizona decision, the last of the Court’s most recent term, included a serious mischaracterization and misapplication of my work. I wondered how the Court had heard of my book and soon discovered that the writer of an amicus curiae brief had cited it in a way that, through implication and omission, was misleading.
When I discovered that the “Citizens Commission for Human Rights” (CCHR) had submitted that brief, it struck me that a Justice would be unlikely to know that the Church of Scientology founded the CCHR and remains closely tied to it.
I wondered: Does the Supreme Court have mechanisms to find out the nature of groups that submit amicus briefs, and does the Court have mechanisms to figure out whether scientific research and clinicians’ opinions in briefs are of good quality and accurately presented?
The case at issue: Eric Michael Clark had been diagnosed as Paranoid Schizophrenic, and he believed that aliens had invaded Earth and were sometimes disguising themselves in government uniforms and trying to kill him. At trial, Clark’s attorney argued that when Clark shot and killed a police officer, he believed himself to be in mortal danger from an alien.
I had served on two of the committees that wrote the current edition of the manual of psychiatric diagnosis – the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM) – but had resigned: I was appalled that they used an unscientific process to decide in which of a vast number of possible ways to assign individual symptoms to groups but then presented the chosen groups as though they were real entities. The APA markets the DSM as a scientifically based document, but its choices about how to cluster symptoms were often no more scientific than astronomers’ choices about how to group stars into constellations.
An issue in the Clark case was whether or not the accused had the mens rea – that is, whether he knowingly and intentionally committed the crime. If the accused did not intend to do what he did, the law says that the crime was not committed. Strangely, the CCHR used my work about diagnosis to argue that psychiatric testimony should not be used in determining mens rea. But a psychiatrist or psychologist can certainly tell whether a person suffers from delusions, and that – not whether psychiatric categories are unscientific – is what pertains to the judgment about whether or not Clark had the mens rea. The CCHR’s argument makes about as much sense as claiming that, because stars can be grouped in different ways into a myriad of constellations, no single star exists. Whether or not there were more scientific grounds for calling Clark schizophrenic than psychotic or bipolar, he clearly suffered from a delusion that took away his mens rea: He did not intend to kill an officer; he did not know that his victim was an officer.
In Clark, the Supreme Court was faced with the question of whether the state of Arizona had the right to disallow psychiatrists’ and psychologists’ testimony about mens rea. The CCHR brief was so compellingly written that the Court’s majority apparently missed its failure to acknowledge that people suffer from delusions, no matter how you label them. It is surprising that the majority did not question the brief’s relevance after reading its statement that “common activities can be [wrongly] deemed mental illness in DSM”; for that legitimate criticism of the DSM is totally irrelevant to the uncommon belief that aliens in police uniform are trying to take one’s life. And it’s a huge and unwarranted leap from the lack of science in diagnosis to the CCHR’s claim that “The discipline of psychiatry is simply unable to [determine] whether the criminal defendant is responsible for criminal conduct.” It is analogous to claiming that if a physical illness has not been accurately classified, one cannot die from it. In writing the majority opinion, Justice David Souter says that “evidence bearing on whether the defendant knew the nature and quality of his actions is both relevant and admissible”; how he got from that reasonable principle to disallowing expert testimony about delusions is a mystery.
The majority Justices conflated psychiatric diagnosis with serious emotional problems, referring in one breath to both “mental disease” and “capacity”; but whether or not one believes that serious emotional problems are “diseases,” and whatever various diagnostic labels conflicting therapists might choose for Clark, his delusion would have been evidence that he lacked the criminal intent to commit the crime. In fact, his particular delusion is precisely the kind of symptom that even we debunkers of diagnostic categories would consider prima facie evidence of serious emotional disturbance.
According to Harvard Law Professor Laurence Tribe, only “potentially compromising financial links” between an amicus author and a party to the case must be disclosed; otherwise, “the judiciary has no devices for exploring the affiliations, commitments, or ultimate credentials of the various amici.” The CCHR had indicated that “No entity or person” aside from the CCHR “made any monetary contribution” to the brief, but, although the Supreme Court Rules do not require disclosure of nonfinancial sources of bias, the Justices might have considered the ties to Scientology relevant in deciding how much weight to give that brief.
Supreme Court expert Stephen Wermiel of American University’s Washington College of Law believes that the Court probably assumes amicus writers will want the Court to know who they are and why they have an interest in the case. But that may not always be so, and there is no penalty for failing to disclose either that information or even financial conflicts. Perhaps, like editors of scientific journals, the courts could require that any person or group submitting an amicus curiae brief disclose both definite and possible ideological conflicts. Admittedly, though, most with ideological biases regard their ideology as leading not to bias but to truth.
What about the question of the courts’ competence to know what is good science? Trial courts – but not, strictly speaking, the Supreme Court or other appeals courts – are bound by the 1993 case of Daubert et al. v. Merrell Dow Pharmaceuticals, Inc., in which Justice Harry Blackmun wrote that trial judges should ensure that an expert’s testimony “rests on a reliable foundation,” is scientifically valid, and is “relevant to the task at hand.” What a world of complications is bound up in just that brief phrase “rests on a reliable foundation.” As a specialist in research methodology and co-author of a textbook on that subject, I have seen intelligent scientists acting in good faith disagree about whether a claim rests on a reliable foundation. And peer review, publication, and degree of acceptance of research in the scientific community often reflect more about the biases of the most powerful scientists and journal editors than about the quality of the science.
In order to take Daubert fully and fairly into account, judges would need to know as much about scientific debates in particular fields as do the scientists themselves. Even if judges had the time and believed it appropriate to study carefully the expert opinions presented to them, they would not always know what relevant research and interpretations of data have not been presented to them. What appeals or Supreme Court judges can do in light of Daubert is to send cases back to trial courts when they believe the trial judges failed to follow the Daubert guidelines. But higher-court judges often cannot know whether Daubert guidelines were met if they are not familiar with the debates within that particular field.
These problems are aggravated in the social sciences and the mental health fields, where the difficulties of measuring and interpreting human behavior, feelings, and thoughts are legion. Social scientists and psychotherapists have increasingly urged the courts to use their expertise, and notably, in the 1954 Brown v. Board of Education decision, research about Black children’s self-esteem was considered crucial. The American Psychological Association’s Monitor has included a column on psychology and the courts for two decades, and in 1995 that organization began publishing the journal Psychology, Public Policy, and Law. However, whereas few judges would consider themselves experts on chemical engineering, most judges, like most people, have implicit or explicit theories about human behavior, and judges, like everyone else, have biases. When it comes to research in psychology and mental health, then, Justices may unknowingly be predisposed to accepting unquestioningly those allegedly scientific claims in amicus briefs that accord with their own beliefs.
Although trial judges hear claims by and cross-examination of scientists who disagree with each other, in appeals courts and the Supreme Court, Professor Tribe is aware of no “actively investigatory” mechanism to assess what is presented as science in either the amici or the trial transcripts: “The judiciary, including most particularly the Supreme Court, relies on a process in which it plays a fairly passive role to ferret out groundless or dubious empirical claims. The Court essentially counts on the opposing advocate and the amici of the party that advocate represents to debunk bogus claims advanced on behalf of the other side.” But amicus writers do not have the chance to respond to other amici, and in my case, had I seen the CCHR brief and been asked to write one or advise Clark’s attorney, it would never have occurred to me that the Court would consider relevant to mens rea what the CCHR wrote about my work. Thus, I doubt I would have addressed the issue.
As laypeople, we might be surprised that the Supreme Court and lower-level appeals judges do not consider it their role to determine the validity of relevant science. Their role, Professor Wermiel explains, is “to decide whether there is a right being violated,” not whether or not the relevant science is valid: “They do not reweigh the facts as they were determined by the trial judges.” But what if someone’s rights were infringed because a claim was accepted as scientifically proven and relevant, even though it is not? That is just what happened in the Clark case, and for Eric Michael Clark it could be literally a matter of life or death.
Had the Justices known that my work was presented by a Scientologists’ group, perhaps they would have checked out whether it might have been taken out of context or otherwise wrongly used to draw a particular conclusion.
Regardless of who writes a brief, all writers, including us scientists, have biases, and it is alarming that even the Justices of the Supreme Court have no consistent way to assess scientific merit. It’s not as though the Justices have failed to think about the question of how to judge the quality of claims by scientists. U.S. Supreme Court Justice Stephen Breyer is a longtime advocate of the importance of scientific experts in assisting the courts, and he is right that scientists can be of great help. Justice Breyer has been supportive of the American Association for the Advancement of Science’s (AAAS) fine Court-Appointed Scientific Experts program. However, since that program began in 2001, according to its Project Director Mark Frankel, only trial judges have asked for its assistance and only in civil cases, and no experts from psychology or sociology have been used.
None of this means that we should ban science from the courts. But we must face the fact that some of these problems are insurmountable and, from that basis, consider what to do.
Having taught psychology majors at some of the most selective universities in North America, I have been stunned when students have told me that their professors have never asked them to think critically about the research reports they read – and critical thinking is essential to any attempt to overcome biases. Required courses about research methods in science span at least one semester or a full year, and much science requires far more training for those who wish to understand it.
As science becomes increasingly specialized, the number of people who can understand research outside their own field dramatically diminishes. And in the realms of psychology and psychiatry, court cases relating especially to child custody and child sexual abuse have become increasingly complex and flooded the courts. In light of these developments, it is no wonder that judges have increasingly turned to scientists and other experts, longing to believe that they know some objective truth that will make crystal clear how they should decide the case.
I once spoke to a major conference of legal professionals about a study that Canadian family law attorney Jeffery Wilson and I had conducted, in which we documented that substantial percentages of mental health experts who do child custody assessments had important biases. Many attendees, visibly shaken, asked me how, then, judges were supposed to make their decisions. These reactions highlighted how judges often shift the responsibility for making judgments from their own shoulders onto the shoulders of scientists and mental health experts. Although such testimony can certainly be helpful, at the very least, judges, lawyers, and the public need to be aware that judges’ reliance on such experts often introduces not indisputable science but rather additional biases into judicial proceedings.
As a practical matter, it is hard to know what can be done. Should judges beyond the trial level in essence hold mini-trials or mini-debates between opposing experts in their chambers, or should they schedule classes for themselves with opposing experts in cases that come before them? Would those mechanisms pose problems of principle or practicality?
Washington College of Law Professor Paul Rice, an expert on evidence, believes that the standards set in Daubert “are probably impossible to meet,” and Supreme Court Justices thus encounter the “exact” same problems as the trial judges: “If the trial judges don’t know how to separate the wheat from the chaff, the appellate judges aren’t any better.” In essence, he notes, judges at all levels try to function as scientists, evaluating the validity and relevance of material presented by the parties and in amicus briefs. They have to make choices about who the relevant scientists are and examine their conclusions and how they reached them. But, says Professor Rice, “When judges try to be scientists, they don’t do it very well,” and in essence, they often use scientists as law clerks, with the scientist chosen by the judge making statements that the judge incorporates into a decision and then “acts like he did it,” as though the judge were presenting as his own conclusions what the scientist told him. He further notes that judges do this by “taking judicial notice,” presenting claims that may or may not be based on solid science as though they were facts.
One cannot expect judges to have comprehensive knowledge of every field of science that comes up in cases before them. But what is dangerous to us as a society is to assume that the process is less arbitrary and biased, and more balanced, scientific, and just, than it actually is.
PAULA J. CAPLAN, Ph.D., is a clinical and research psychologist on the psychology faculty and at the DuBois Institute at Harvard University. She is the author of They Say You Are Crazy: How the World’s Most Powerful Psychiatrists Decide Who’s “Normal.” She may be reached at: paulajcaplan@yahoo.com