Technological and scientific interventions in criminal justice are not always newsworthy enough to draw our attention away from urgent and sensational issues on Capitol Hill. But when a jail in Tennessee promotes sterilization by offering reduced sentences to men who volunteer to have vasectomies, we have to pay attention. The problem is not that one jail implemented a terrible eugenicist policy; it is that stakeholders in the criminal justice system are consistently looking for scientific and technological solutions to social problems, forgetting that—as FiveThirtyEight contributor Laura Hudson writes—“technology is biased too.”
In the context of criminal justice, America’s faith in technological interventions is worse than misplaced; it is dangerous.
Consider the innovations of recent years: dashboard and body-worn cameras, next-generation electronic monitors, secret algorithms that help to inform sentencing—these items form an impressive and James Bond-esque inventory. The companies that peddle these products invite law enforcement and the American public to imagine that these devices (or medical procedures like the vasectomy) will solve the problems that plague our criminal justice system. As a writer for the free-market think tank R-Street Research claimed earlier this year: “finding solutions to our incarceration problem may require the innovation and cost-effectiveness that only technology can provide.”
Even careful researchers see this technological solutionism as a new and auspicious (if imperfect) trend. For example, Michael D. White’s rigorous study of the benefits and limitations of body-worn cameras begins by noting that: “In recent years, technological innovation has continually shaped law enforcement, from less-lethal devices (e.g., TASER) and forensic evidence to advanced crime analysis.” Americans with differing political ideologies often agree that technological developments will solve social problems—especially in the highly rationalized areas of policing and punishment.
We have seen this type of thinking before, and a brief history of the electric chair illustrates exactly why it is harmful. In the early nineteenth century, a consolidated movement to abolish the death penalty implored Americans to question whether a just and modern nation should kill its citizens. Whether is the operative word here. The public debate was couched in terms of ethics, and states decided to update or retain their laws accordingly.
In 1887, Elbridge Gerry and a cadre of New York elites, serving together on a state commission, changed this public conversation. The commission published a report—now known as “The Gerry Report”—that surveyed thirty-four methods of execution in search of the “Most Humane and Practical Method of Carrying into Effect the Sentence of Death in Capital Cases.” These findings inspired New York State’s Electric Execution Act of 1888. But even before it was applied to state policy, “The Gerry Report” had left an indelible impression by positing that a comparatively “humane and practical method” of execution existed.
The Gerry commission, along with other proponents of the death penalty, convinced the American public to stop asking whether states should kill and to start asking how states could kill efficiently and humanely. As I argue in my book, Power Lines: Electricity in American Life and Letters, 1882-1952, this transition from whether to how changed the way Americans perceived the issue of capital punishment, especially in the case of bungled executions. Where a prolonged and painful hanging once encouraged spectators to wonder if the law was unjust, botched electric executions indicated, instead, that the electric chair needed modification.
The history of the electric chair reminds us that new technologies—in corrections, as in all things—are always more than tools. We don’t just use technologies; we also think with them. The danger of the electric chair was not only that it was an imperfect technology, but also that it bolstered the fantasy of perfectibility. Today, when we invest in scientific and technological solutions, we buy into a similar dream. Efforts to mitigate machine bias, such as those NPR reports on, are important, but these measures will not solve our underlying social issues.
Our faith in technology consumes resources and begets complacency. We predict that body cameras will overcome the widespread distrust of law enforcement, although cameras are no substitute for repairing relationships between police officers and the community members they strive to protect. We expect that electronic monitors will reduce recidivism, despite evidence that increased surveillance does not reduce crime. We hope that algorithms might make sentencing objective, but we forget that algorithms are descriptive and recapitulate the racial biases that they claim to redress. Apparently, some even hope that eugenics will solve the school-to-prison pipeline.
As sociologist David Garland argues in his award-winning study, Punishment and Modern Society, “Our taken-for-granted ways of punishing have relieved us of the need for thinking deeply about punishment and what little thinking we are left to do is guided along certain narrowly formulated channels.” When we anticipate a technological fix, we forget that the status quo is not our only option. We must challenge our assumptions about technological and scientific solutions. We could accomplish more by channeling resources into diversionary programs to reduce recidivism. We could invest in prison education or in public education that could address the school-to-prison pipeline at its source. We could confront underlying issues like racism, poverty, and addiction with a robust social safety net. But before we undertake any of these ventures, we have to give up the hope that science or technology will save us from ourselves.