What a year. The US has faced three unprecedented crises – the COVID pandemic, nationwide racial unrest, and now a bitterly contested presidential election. And there in the middle of all three is the Dark Trinity: Facebook, YouTube and Twitter.
With nearly 3 billion users, Facebook is by far the world's largest publisher of news and information. Or rather, of mis- and disinformation. During the past year, Crazytown on Facebook has grown exponentially, to the point where it has virtually taken over the platform.
One report on Facebook found that 100 pieces of extreme COVID-19 misinformation were shared 1.7 million times and racked up 117 million views — far more than the New York Times, Washington Post, ABC News, Fox News, CNN and MSNBC combined. Facebook-shared conspiracy theories claimed the pandemic is a hoax, and that Microsoft co-founder Bill Gates is the mastermind behind a sinister plan to track and control the world's population via a COVID vaccine. The Global Disinformation Index found that Google provided advertising services to 86% of the sites carrying coronavirus conspiracies.
Following the murder of George Floyd by a Minneapolis police officer, the disinformation machine once again cranked into high gear. A flood of shadowy social media posts claimed that George Floyd was not actually dead, and that George Soros was funding the spreading protests. Facebook and Twitter turned out to be effective tools for helping right-wing militias like the Proud Boys and Boogaloo Bois find each other, organize and strategize to murder police and protesters.
And now America stands on a cliff’s edge of a bizarre presidential election that might end in constitutional crisis. Facebook and Twitter should be renamed “Donald Trump’s Personal Agitation Machine,” because no one has used these portals more effectively to stoke the flames of mis/disinformation across a range of topics, from election fraud, vote-by-mail ballots and COVID precautions to racial division and anti-mask protests.
The social media companies, sensitive to criticism, claim that they have taken some steps to blunt the worst excesses of their products. Whether those efforts have been sincere or merely face-saving, in truth they have not succeeded. Especially with many of their human monitors sheltering at home, and their automated AI screeners proving inadequate, content moderation has become a hopeless game of whack-a-mole.
In other words, this is as good as it gets. Even if the companies are making good-faith efforts to use their technology for good, their digital machines are Frankenstein technologies that remain dangerously out of control.
The toxic business model
There are two clear reasons why their remedial efforts have been so pathetically lame. First, with billions of users, it is simply impossible for their curators, whether algorithmic or human, to turn down the firehose of mis/disinformation that is gushing from those portals like the floodwaters of Hurricane Katrina.
Second, the platforms’ extractive business model relies on hooking users by algorithmically targeting them with sensationalized news and conspiracy stories. The longer users stay engaged on their websites, the more advertisements they view and the more the companies profit. Social media companies have no skin in this game: the crazier things get on their platforms, the more revenue they rake in. It’s surveillance capitalism at its worst.
Even prior to these crises, the platforms were already contributing to a number of scandals. For example, a majority of people believe in urgently dealing with climate change—but how can we unite to take action when one recent study found that a majority of YouTube climate change videos deny the science and expert consensus? YouTube has 2 billion users, and 70% of what people watch comes from its recommendation algorithm. In addition, a recent study found that a mere $42,000 worth of Facebook ads promoting disinformation about climate change reached approximately 8 million people, targeted especially at older men in rural areas.
My, how the Internet has changed. Its use and spread accelerated 25 years ago with much optimism and idealism, and many people had initially hoped that technology would bring us together. Instead, the Wall Street Journal reports that Facebook’s own internal report found that 64 percent of people who joined an extremist Facebook group did so because the company’s algorithm recommended it to them.
Since the start of the social media era, teen depression, suicide rates and social isolation have increased dramatically. Many people yearn for less partisan polarization, especially in the middle of a presidential election. Instead, Facebook, Twitter and YouTube are being used for disinformation campaigns in over 70 countries to undermine elections, even helping elect a quasi-dictator in the Philippines. These companies frictionlessly amplify extremism, as when the Christchurch mass murderer livestreamed his carnage on Facebook and the footage was then uploaded to YouTube and seen by millions; and they have spread hate propaganda in Buddhist-majority Myanmar against the Muslim Rohingya minority.
This is supposedly the price we pay for being able to post our vacation and new puppy pics to our “friends,” to wish happy birthday to our long-lost college roommate, or for the chance that the neighbor’s kid’s dance video will go viral. Those are wonderful things, but in exchange the social media companies’ profit imperative thrives on disinformation, controversy, sensation and fake news. They have divided, distracted, outraged and polarized people to the point where society is left with a fractured basis for shared truths and common ground.
Public utilities for public infrastructure
But it doesn’t have to be this way. When Ford Motor Company produced autos with cruise control switches that were causing fires, millions of those switches were recalled. Or think about how medical devices or voting machines are produced: you can’t release your new product to the public before it has been tested by an independent agency and certified safe for use. Yet nothing like product liability or rigorous oversight exists in the secretive culture of Silicon Valley and its algorithms.
These businesses are creating the new public infrastructure of the digital age. Search engines, global portals for news, information and networking, web-based movies, music and live streaming, GPS-based navigation apps, and online commercial marketplaces – these comprise the modern infrastructure that people increasingly are using in their daily lives. Just like the telephone, trains or power plants did in years past. We need to enact guardrails for the new public information infrastructure, just as we regulated previous infrastructure.
Even if Section 230 of the Communication Decency Act says these companies are not liable for their user-generated content, that doesn’t mean we can’t regulate what the platforms do with that content. Here’s how this could work.
1) Reform the business model. First, the US should create a new classification of company — “social communication utilities,” or SoComs — for large, dominant social media platform businesses. SoCom utilities, like their telecom predecessors, should operate in the public interest according to digital licenses that guide their investor-owned business model. For years, traditional companies like AT&T and Comcast have been required to follow various rules and licenses. SoCom utilities should be similarly constrained, especially in their use of runaway data extraction, micro-targeting and amplification practices that drive polarization and contribute to online harms. Indeed, some have called for a suspension of algorithmic amplification, since that is the catalyst for so much disinformation.
2) Social impact assessments. The SoCom utilities also should be subject to impact assessments, like an environmental assessment of a tailpipe or factory, or safety protocols used for medical devices. These would evaluate potential impacts and harms on mental health, fake news, polarization and democracy before deploying new product features, and, if necessary, be used to decide recalls of defective products. A “duty of care” on behalf of the public interest would be imposed as a way to establish safe, enjoyable use of these technologies, and to facilitate the development of humane technology.
3) Reinvent the revenue model. It’s a myth that the social media platforms offer their products for “free.” The current revenue model – grabbing users’ attention with sensationalism, conspiracy and crazytown content so that users will stay onsite and see more ads – is the root of this destructive business model. Rather than revenue based on behavioral advertising that is micro-targeted at individuals based on their psychographic profiles, SoCom utilities should convert to a new revenue model that de-couples profits from the manipulation of users’ attention.
There are two basic options: 1) users pay with their own money, via a monthly subscription (like cable TV or Netflix) or metering (paying for time of use, like a parking meter); or 2) a new limited “contextual advertising” model, like the one newspapers and TV broadcasters use, in which ads are matched to the specific webpage being viewed, with no data retention allowed. A hybrid of the two is also possible.
According to insider Facebook sources, a subscription model would likely lead to a decline in the number of Facebook users by 90 percent, shrinking it in size and influence and addressing some of the Big Monopoly dangers. However, that would still leave Facebook as a very large company, with approximately 270 million users and, assuming a subscription of $40-100 per year, with more revenue than Fox News, CNN, MSNBC and CNBC combined.
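The arithmetic behind that scenario can be checked with a quick back-of-envelope calculation, using only the figures cited above (a roughly 2.7 billion user base, a 90 percent decline, and a hypothetical $40-$100 annual fee; these are the article's estimates, not reported data):

```python
# Back-of-envelope check of the subscription scenario described above.
# All inputs come from the figures cited in the text, not independent data.
current_users = 2_700_000_000      # implied user base before the shift
subscribers = current_users // 10  # insider estimate: ~90% of users leave

low_fee, high_fee = 40, 100        # hypothetical $40-$100 annual subscription
low_rev = subscribers * low_fee
high_rev = subscribers * high_fee

print(f"Remaining users: {subscribers / 1e6:.0f} million")
print(f"Annual revenue: ${low_rev / 1e9:.1f}B to ${high_rev / 1e9:.1f}B")
```

Even the low end of that range, roughly $10.8 billion a year, keeps a subscription-funded Facebook among the largest media businesses in the world.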
4) New oversight agency. Finally, the U.S. should create a Department of Digital Economy to regulate and oversee this new industry that is increasingly vital to so many aspects of our lives. Just as the Environmental Protection Agency was created in 1970 to oversee the environment, the DDE would have enforcement power, conduct testing of platform technologies, identify key research needs and administer the digital licenses. It would have the power to recall faulty technology, and the power to issue large fines and pursue criminal prosecution of lawbreakers. And it would facilitate platform interoperability and competition.
The DDE also would assist other government agencies in a “digital update/harms audit” that would apply existing law to platform companies. For example, there are restrictions on violence and advertising for Saturday morning cartoons and other children’s programming, resulting from laws like the Children’s Television Act of 1990. Yet Google’s YouTube and YouTube Kids have violated these and other rules, resulting in online lawlessness. The Federal Communications Commission should examine how to apply existing law to the online digital platforms.
Similarly, the Federal Elections Commission should rein in the quasi-lawless world of online political ads, which has far fewer rules and less transparency than ads in broadcasting and traditional media sources. The DDE would help facilitate this kind of re-examination and update for federal agencies. The US once had such a watchdog agency, the Office of Technology Assessment, until it was eliminated in 1995.
The challenge today is to foster a more balanced business model that results in better service, more innovation and competition, and fewer harms to individuals and society. We can do that by placing appropriate guardrails around this new digital infrastructure, and by updating regulatory approaches. Reclassifying these platforms as social communication utilities — an approach similar to one Mark Zuckerberg himself suggested last February — will better ensure that the new digital infrastructure broadly benefits society.
This article was produced by Economy for All, a project of the Independent Media Institute.