
Free Speech, Expensive Speech, Censorship, Social Media Algorithms, and Anarcho-Puritanism

The corporate media shows us an endless stream of patriotic Ukrainians standing up against the Russian menace.  Then in their coverage of more local news, among the diverse crowd of truckers and other protesters occupying downtown Ottawa, only the most rightwing participants are highlighted.  Meanwhile, if Joe Rogan dares to interview any of them on his podcast, there will be more cries for Spotify to drop him from their platform.  Others will say no, this is censorship.  And at a small labor rally in Portland, Oregon, the anarcho-puritan Twitter trolls will successfully prevent a labor musician from singing, by using online intimidation and disinformation to convince the young organizers that the Jewish musician in question is antisemitic.

This is our reality now, this week, like it or not (and I sure don’t).  All of this stuff is intimately related, but it generally gets siloed off into different discussions.  This happens partly for perfectly innocent reasons, and partly for completely nefarious ones.  It’s often innocent because many people can understand the basic principles of free speech vs. censorship, but they don’t understand where social media algorithms, and perhaps even corporate wealth and power, fit into the picture.  Often it’s nefarious, because other people understand full well that the issue of rampant disinformation is rarely one of free speech vs. censorship, but they frame it that way in order to distract us from the elephant in the living room, the wizard behind the curtain, the naked emperor (insert allegory here).

I heard a host on NPR the other day refer to the world’s most popular podcast, the Joe Rogan Experience, as a podcast “distributed by Spotify.”  I kept listening to the national radio story, waiting perhaps for the host to be corrected by a producer or something, for her to say, “sorry, I meant hosted exclusively by Spotify at a cost of $100 million,” or even just “hosted” rather than “distributed,” but that correction never came.

If you download a podcast app (and I know most of you have never done this, since it’s still only around a quarter of the population who has), or just open the one that came with your phone if you have an iPhone, you will not find the Joe Rogan Experience anywhere.  This is because it is not distributed by Spotify; it is hosted exclusively by Spotify.  It’s Spotify programming.  It’s not really a podcast in the sense that the term used to be understood: audio content that you can subscribe to as a feed, from any podcast app (generally for free).  That’s how it works if you use the podcast distribution service that Spotify now owns, called Anchor, or the podcast hosting service that folks like me and the BBC use, Podbean.
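To make the distinction concrete: a podcast in the traditional sense is just an RSS feed that any app can fetch.  Here’s a minimal sketch in Python, using only the standard library, of what “subscribing to a feed” actually involves; the feed URL below is a placeholder, and any traditionally-hosted show’s feed would work the same way.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder URL: substitute the RSS feed of any traditionally-hosted show.
FEED_URL = "https://example.com/podcast/feed.xml"

with urllib.request.urlopen(FEED_URL) as response:
    root = ET.fromstring(response.read())

# A podcast feed is ordinary RSS: a <channel> containing one <item> per
# episode, each with an <enclosure> pointing at the actual audio file.
for item in root.iter("item"):
    title = item.findtext("title")
    enclosure = item.find("enclosure")
    audio_url = enclosure.get("url") if enclosure is not None else None
    print(title, "->", audio_url)
```

Any app that speaks RSS can do this, which is exactly why an exclusive deal that pulls a show out of the open feed ecosystem makes it something other than a podcast.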

This is a relatively minor point, but it helps illustrate the inherent confusion in picking apart these questions.  The folks who want us to see the issues with, for example, Joe Rogan’s choice of guests or interviewing style are generally concerned not with who he’s interviewing, but with whether he’s spreading dangerous lies to a very large audience, an audience that is as large as it is partly because of a whole lot of corporate sponsorship.  It’s the vast size of the audience exposed to the lies that they see as the problem, rather than the notion of Joe Rogan interviewing whoever he wants to.

Others will defend Rogan’s right to interview whoever he wants and cry censorship at Neil Young and Rogan’s other critics, while intentionally papering over the questions of scale and audience size involved.  A podcast like Rogan’s may have little in common with the millions of other pods out there, the little one- or two-person operations whose monthly audience might reach four digits during a really active period, but that’s not a point the free speech vs. censorship crowd wants to highlight.

To my way of thinking, when it comes to $100 million exclusive contracts with massive corporations, we are talking about Expensive Speech.  This is not the free marketplace of ideas or the Labor Radio Podcast Network; this is massive, global, corporate money.  When we’re talking about this kind of corporate money and corporate influence, the free speech question is largely overshadowed by questions of corporate power.  At the very least, when any one corporation or any one individual has that kind of reach, monopolistic in several different ways, the unbalanced, top-heavy nature of the situation becomes the paramount issue.  How to deal with it is another question.  But this is what we might call the principal contradiction we face when it comes to the influence of the biggest music streaming service on the planet, or of individuals like the most popular podcaster on Earth: power and wealth, and their extreme and extremely unfair influence on all of us, whether we like it or not (and I sure don’t).

But even when people do understand the problems presented by the questions of speech, censorship, and corporate influence, both over what content is promoted and over the laws that may or may not regulate it, the biggest factor of all is the least understood.  It dramatically affects our lives, whether we know it or not, every minute of the day, especially if we spend any time at all on social media platforms, or rely on them to get news, keep up on the gossip, discuss issues, or communicate with people.  I’m talking about social media algorithms.

Everything changes so fast, and I know that anyone my age or older might especially resonate with that statement.  The term “future shock,” coined by Alvin Toffler back in 1970, is a very good one, and I think more of us are feeling it every day, overwhelmed by the pace of transformation in all kinds of different ways, very much including technology.  Those of us who are old enough to remember the 1990’s may look back on the period as a sort of golden age for the internet.  The MP3 hadn’t caught on yet, it could take half an hour to load a graphic-intensive website, and there were all kinds of other issues, but your means of communication were very democratic and straightforward.  Email lists could be moderated or not, and especially the well-moderated ones could function as amazing sources for finding out what’s going on in town or for promoting an event, for free.

The way it tends to go, with high-speed technological change constantly underway, is that people get future shock and can’t keep adapting, can’t keep track of what’s changing and how.  There are other reasons, too, why some of the big things get lost on a lot of folks.

Partly, it depends on your role in society when it comes to social media.  If a big reason you use social media is for promoting content or otherwise making money through it, the changes can be very noticeable very quickly.  For most people, though, who are using social media to communicate, gossip, follow their friends and news stories, scroll their feeds, etc., the changes are much more subtle, and often hard to notice at all.

But what has happened since the egalitarian days of the early web, dominated by grassroots email lists and independent media collaboratives (at least for a certain set of folks at the time), has been the rise of social media.  At the beginning, it appeared to many of us that a phenomenon like the news feed would function much like an unmoderated email list.  But as the old email lists and other vestiges of the free internet began to gather virtual dust from disuse, and most of the world began to spend most of their time online on a small handful of massive corporate platforms, entities like the ultra-dominant platform, Facebook, introduced the algorithm-based news feed.

When this happened, many other content creators and I, who were using platforms like Facebook to promote gigs, tours, songs, etc., noticed immediately, because suddenly we were getting dramatically less attention when we posted most anything.  Whereas the day before, the more followers you had on the platform, the more people would see what you posted, suddenly this was no longer how it worked.  Suddenly you had to pay to boost posts if you wanted people to see them, unless you were posting a picture of your baby or a pet, or you were engaged in a heated argument related to a post.

What became the norm was that if you posted a link to an article you had spent lots of time writing and had published on a platform other than Facebook, like, say, Counterpunch, no one would see it anymore.  But if you boiled the argument of the Counterpunch article down into a few paragraphs in a Facebook post that didn’t link off-platform, people would see it, and maybe even engage with it.  The inherent dumbing-down phenomenon here was clear.  Everyone knew a tweet had to be very short, but less well-known was which sorts of Facebook posts might be seen and which would not, depending on the workings of secret algorithms.
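To make the mechanics a little more concrete, here is a toy sketch, in Python, of what an engagement-weighted news feed ranker might look like.  To be clear, every weight and penalty in it is my own invented assumption; the real algorithms are secret.  But something shaped roughly like this would produce exactly the behavior described above.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    age_hours: float
    likes: int = 0
    comments: int = 0
    has_external_link: bool = False  # e.g. a link out to a Counterpunch article
    is_photo: bool = False           # e.g. a baby or pet picture

def score(post: Post) -> float:
    """Toy engagement score; every weight here is an invented assumption."""
    s = post.likes * 1.0 + post.comments * 4.0  # heated arguments mean comments, comments mean reach
    if post.is_photo:
        s *= 1.5   # hypothetical boost for native photos
    if post.has_external_link:
        s *= 0.3   # hypothetical penalty for links off-platform
    return s / (1.0 + post.age_hours)  # newer posts outrank older ones

def rank_feed(posts: list[Post]) -> list[Post]:
    # The old, chronological feed would simply be:
    #   sorted(posts, key=lambda p: p.age_hours)
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("writer", "My new Counterpunch article", 2.0, likes=40, comments=5, has_external_link=True),
    Post("friend", "Look at our new puppy!", 2.0, likes=40, comments=5, is_photo=True),
])
# With identical raw engagement, the puppy photo outranks the article link.
```

Nothing about the puppy’s victory is visible to the person scrolling; the ordering just quietly changes.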

The algorithms are secret, and they change.  They’re blatantly manipulating our perceptions of the world, in all kinds of unknown ways.  If we weren’t all, at least in theory, doing this to ourselves voluntarily, it would be much more alarming.  But it’s very alarming either way.  At one time, many of you reading a piece on Counterpunch might have found your way there because of a link someone posted on Facebook.  When the algorithm changed, overnight that would happen half as often.  It’s easy to see how the casual Facebook scroller who is used to consuming a few Counterpunch articles a week because of links posted by friends might not notice that, from one week to the next, they’re now only seeing half as many.  Those articles have been replaced by other things, and there’s always so much.  The people who notice are the writers, the editors, and the treasurers of such publications.

It’s fairly well-known at this point how YouTube’s recommendations tend to behave.  If you’re looking for good scientific information on the moon landing, for example, it won’t be long before YouTube’s algorithmically-generated recommendations are serving you videos about how the landing was faked.  “Moon landing” appears in the description, and the metadata indicates that people who like videos about the moon landing also don’t stop watching when this one comes on, so into the mix it goes.  Whether that’s the actual logic of the algorithm, who knows, because it’s secret, and it changes constantly.
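As a purely hypothetical illustration of that kind of logic, here is a toy co-watch recommender in Python.  The watch histories and video names are invented, and the real system is secret and vastly more complicated, but the basic pattern (recommend whatever the same viewers also kept watching) is enough to surface a conspiracy video.

```python
from collections import Counter
from itertools import combinations

# Invented watch histories: the faked-landing video shows up in histories
# right alongside legitimate moon-landing videos.
histories = [
    ["apollo11_doc", "saturn_v_explained", "moon_landing_footage"],
    ["apollo11_doc", "moon_landing_footage", "landing_was_faked"],
    ["moon_landing_footage", "landing_was_faked"],
]

# Count how often each pair of videos is watched by the same person.
co_watch = Counter()
for history in histories:
    for a, b in combinations(set(history), 2):
        co_watch[frozenset((a, b))] += 1

def recommend(video: str, k: int = 3) -> list[str]:
    """Return the videos most often co-watched with `video`."""
    scores = Counter()
    for pair, n in co_watch.items():
        if video in pair:
            (other,) = pair - {video}
            scores[other] += n
    return [v for v, _ in scores.most_common(k)]

print(recommend("moon_landing_footage"))
# The conspiracy video surfaces purely because viewers didn't click away from it.
```

Note that nothing in this sketch knows or cares whether a video is true; co-watching is the only signal it has.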

When it comes to music streaming platforms, at least in my personal experience, the algorithms aren’t so bad.  The people who get recommendations for my music because they listened to another artist that Spotify’s algorithms thought was similar do keep coming back to me.  Assuming it works that way with other artists, Spotify’s algorithms seem to have listeners’ musical tastes pretty well figured out.

But when it comes to learning about what’s going on in the world, what may be innocent music-recommendation algorithms can suddenly be terrifying.  While social media algorithms alone certainly can’t be blamed for the increased polarization in society and the fact that what passes for discourse increasingly resembles some cross between Idiocracy and the Salem Witch Trials, they play no small part, either.

And they also play perfectly into the hands of those who want to use social media to spread disinformation, since that’s what social media is algorithmically inclined to do in the first place.  Social media is a great equalizer at least in terms of the ability of random people to make a lot of noise in ways previously unknown.  Combine that with the abundant amounts of disinformation, and then add five hundred years of the Puritan tradition, and you get the phenomenon known as the internet troll, often just called the Twitter troll, since Twitter is designed to facilitate this kind of behavior more than other platforms seem to be.

Here we come back to the free speech vs. censorship debate, minus any concepts of corporate wealth, power, influence, or scale being part of the conversation.  The trolls of the left variety home in on the perceived transgressions of anyone on the internet who gets a little more attention than they do, and do their best to take them down.  If a podcaster with 200 weekly listeners interviews someone who says something offensive, for the anarcho-puritan troll of the left variety it’s totally irrelevant whether the podcaster is a high school kid in their mom’s basement or a multimillionaire in a mansion in Los Angeles with an audience of tens of millions.  In fact, for the troll mentality, the one with only 200 weekly listeners is probably the better target, because there’s a better chance the campaign will succeed in getting an event or a human being cancelled, giving the troll an achievement to rejoice over.

For the rigidly-principled troll of the self-styled antifascist variety (or anyone impersonating one), the role is to find anyone who is transgressing by talking with someone with unacceptably rightwing opinions in a public forum of any kind, such as a YouTube account with 200 viewers a week, and then to hound them.  They pick a target, then start harassing them, and anyone they have any connection with, on Twitter especially, trying to expose their home addresses, get them fired if possible, and spread whatever believable slander they can come up with, in order to discredit their target.

The motivation behind the troll behavior, it seems to me, is manifold.  Some of it is undoubtedly undercover operatives engaging in often-successful campaigns to disrupt communities, organizations, and careers through these tactics.  As for the “volunteers” engaging in this kind of disruption, it should be noted that their tactics are facilitated in no small way, both directly and indirectly, by social media algorithms and by other aspects of how certain platforms are organized.

But the behavior is rooted deeply in the Puritan tradition of moral outrage, moral righteousness, and moral purity.  Thus, it doesn’t matter how insignificant your target may be.  If they have dared “provide a platform” for the wrong person by uploading an interview with them to a YouTube channel with a very small audience, this is grounds to endlessly hound and condemn the offending content provider.

In a sense, only for this most extreme group of puritanical, censorship-happy Twitter trolls does the discussion of media (or social media) truly boil down to questions of free speech and censorship.  Unlike those who see the problems presented by extreme corporate wealth and power, or the impact of secret, manipulative algorithms, for the anarcho-puritan (or disruptive element posing as one), all offensive content should be banned, and their creators cancelled.

Now I’m supposed to conclude with some kind of an idea about how we dig our way out of this pit of corporate power, widespread disinformation, and domination of large segments of society by a new form of puritanism, but I haven’t a clue.  I only hope that my understanding of the problem has been helpful for someone.