
Photograph Source: www.vpnsrus.com – CC BY 2.0
Morality and artificial intelligence have a contentious, perhaps even antagonistic, relationship. Yet artificial intelligence (AI) itself cannot have morality – it is a machine.
However, AI’s use and application do have moral implications. At its most basic level, AI is sophisticated computer code consisting of sequential equations formulated as an algorithm. The term “algorithm” goes back to the medieval Persian mathematician al-Khwarizmi (al-gha-rizm).
Not to worry. This article is not about mathematics, coding, or the writing of algorithms, nor even about the computing logic of artificial intelligence. Instead, it is about the point where AI meets morality.
In a way, it is about moral philosophies such as Kantian ethics, Aristotle’s virtue ethics, and utilitarianism. Beyond all that, any discussion of morality needs to focus on the people who engineer AI and the corporations that use AI.
In particular, the focus on people leads to the moral issue of human autonomy. The use of AI and algorithms can – and indeed already does – reduce human agency.
It seems society has already handed over many decisions to AI – how to find that new restaurant, for example. With algorithms linked to the GPS in our cars, asking real people for directions has become obsolete.
Handing over decisions leads to the issue of humans vs. machines and to the ethical rules used in AI. Linked to all this are two important issues for AI:
1) Fairness – is AI fair? and,
2) Privacy – is AI invading our privacy when, for example, we buy condoms?
While this article is about the morality of artificial intelligence, fairness, and privacy, it was not written by an AI – the (in)famous algorithmic essay-writing tool ChatGPT (OpenAI.com/blog/ChatGPT) wasn’t used. Nor is it a case of “continue this text based on what I have seen on the Internet”, as Wolfram recently described ChatGPT.
From the start, one thing should be very clear. AI machines are not conscious robots. AI engineers cannot – yet! – create AI machines matching the intelligence of even a two-year-old. Going beyond that – AI with consciousness comparable to the human brain – is what the field calls general AI, or AGI. In reality, today’s algorithm-driven AI programs still perform relatively narrow and highly focused tasks.
Like a coffee maker – a machine – AI does not have morality. Still, every time someone asks Siri – Apple’s virtual assistant – a question, that person uses AI. At the same time, AI is based on machine-learning algorithms that can, for example, also predict which criminals will reoffend – morally, this is problematic.
Still, there is a rather common misunderstanding that AI is one single thing. Just as our intelligence is a collection of different skills, today’s AI is a collection of different technologies.
Surprisingly, and even after decades of research into AI, its engineers freely and correctly admit that they have made virtually “no progress” in building a more general intelligence – one that could, for instance, solve a wide range of human problems beyond simple tasks such as finding the way to a restaurant or writing an often rather superficial essay.
With all that, AI does have an impact on society. One might even say that human society has already walked straight into an AI-generated minefield. Perhaps for the first time, and on a grand scale, this occurred in 2016: first the Brexit referendum in the United Kingdom, then the election of Donald Trump in the United States.
Machines now routinely treat humans mechanically – not through direct control, but by manipulating populations politically. Worse, Facebook can be used to shift public opinion in any political campaign.
Even worse than Facebook – which, at one time, worked in cahoots with the highly manipulative Cambridge Analytica – is a rather dangerous fact often ignored by advertisers and political pollsters: the human mind can easily be, as AI people would call it, “hacked”.
Moreover, AI tools like machine learning put this problem on steroids. Corporations using algorithms can collect data on a population and change people’s views at scale, at speed, and for very little cost.
Yet those who write the algorithms of artificial intelligence are, more often than not, white men. Some have called them a sea of white dudes. Worse, the sea of white dudes is also overrepresented in the venture capital that turns AI into corporate profit. The venture capital firms supporting AI can be divided into three roughly equal parts:
1) Silicon Valley – which has now spread out into the larger San Francisco Bay Area;
2) the rest of America; and finally,
3) the rest of the world.
For many of the white dudes working in AI, the story-telling of the ideologue Ayn Rand provides some sort of semi-plausible belief system. Inside the world of AI, many readers of her fairy-tale Atlas Shrugged relate to her crypto-philosophical inklings. Some of these AI people truly believe in her egoistic and self-centered hallucination that our moral purpose is to follow our individual self-interest.
As a consequence of following this particular ideology, many AI dudes have become techno-libertarians who wish, for example, to minimize almost all regulation. They also believe that the best solution to almost every problem is the neoliberal free market.
Inside AI, Rand’s ideology came to the fore with John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace, which is perhaps “the” expression of techno-libertarianism. On this basis, techno-libertarians have convinced themselves that you can’t – and indeed shouldn’t – regulate cyberspace.
They believe you can’t, first, because digital bits aren’t physical and, second, because tech companies span national boundaries and so cannot be bound by national rules. And you shouldn’t regulate cyberspace because, even if you could, regulation would only stifle innovation.
Yet their staunchly ideological belief in the wonders of the free market and the myths of de-regulation (read: pro-business regulation) continues undeterred even when challenged by plenty of uncomfortable facts. For example,
+ penicillin was discovered at the University of London;
+ the structure of DNA was worked out at the University of Cambridge; and
+ the first general-purpose digital computer was built in 1945 at the University of Pennsylvania.
Worse for the free-market neoliberals, even artificial intelligence started out in universities – at places like MIT, Stanford, and Edinburgh – none of them free-market corporations, all of them state-supported institutions.
Still, at some point, corporations took over. Today we see, for example, that Apple’s turnover exceeds the GDP of Portugal. Worse, most Big Tech companies are sitting on large cash mountains. It is estimated that US companies hold over $1 trillion of profits in offshore accounts – a sweet euphemism for tax havens – the immorality of rather shady tax-reduction schemes or, as heretics would say, scams.
Besides the semi-criminal side of the big tech corporations that create AI, perhaps one need not get too worried about super-intelligent machines that could surpass human beings. Surprisingly, many of those working in AI are “not” greatly worried about super-intelligent machines.
Yet the philosopher Nick Bostrom fears that super-intelligence poses an existential threat to humanity. Think about this: suppose we want to eliminate cancer. “That’s easy”, a super-intelligent AI machine might say.
You simply need to get rid of all hosts of cancer. And so the AI robots would set about killing every living thing – a logical way of solving the problem.
Instead of such a nightmarish dystopia of super-intelligent robots, high-tech corporations have very different goals. In September 2020, Tim Kendall – appropriately titled Director of Monetization at Facebook – said, “We took a page from Big Tobacco’s playbook, working to make our offering addictive at the outset.” It has been said that the tobacco industry caused the death of 100 million people during the 20th century.
In short, clever corporate and highly manipulative marketing undermined many customers’ autonomy by making people addicted to tobacco (first) and online platforms (later). Yet, for AI, the autonomy of human beings remains a very serious issue.
For AI, human autonomy is an entirely novel problem. Until recently, human society never had machines that could make decisions independently of their human masters. Here, moral philosophers like to allude to the infamous trolley problem – “trolleyology”. Philippa Foot’s trolley problem represents a classic moral dilemma.
Yet it wasn’t designed to have a definitive solution. Still, the trolley problem permits us to explore the moral tension between deontological reasoning, where we judge the nature of an action rather than its consequences, and consequentialism, where we judge an action by its consequences.
In the end, it all boils down to this: it is wrong to kill versus it is good to save lives. Somehow, one might doubt that a bunch of computer programmers are going to solve a moral dilemma like this.
This brings the debate to MIT’s Moral Machine, which in itself has moral problems. It has been reported that MIT accepted donations from Jeffrey Epstein – a sex offender who died in prison. Meanwhile, MIT’s moralmachine.net essentially asks you to vote on moral issues.
Many would argue that voting on morality is a dicey issue. For one, we humans often say one thing but do another. We might say that we want to lose weight, but we might still eat a plate full of “delightful” fast food – plus a coke.
Yet there is also a second problem. Unlike real elections, MIT’s Moral Machine is not demographically balanced – it is not representative. It is mostly used by young, college-educated men – white dudes.
Much worse than MIT’s rather silly “morality voting machine” is the issue of automated killer robots. The UN’s António Guterres nailed it by saying: “Let’s call it as it is. The prospect of machines with the discretion and power to take human life is morally repugnant.”
Still, on the issue of automated and algorithm-guided warfare, one also finds the Martens Clause regulating armed conflict – despite the misbeliefs of techno-libertarians and neoliberals. More interesting still, there are the US Department of Defense’s ethical principles on the use of AI.
Naturally, defense.gov speaks with the moral authority of having dropped two atomic bombs, fought the Korean and Vietnam wars, and engineered Abu Ghraib during an Iraq war waged over non-existent weapons of mass destruction.
Whether in war or outside of it, AI engineers are extremely concerned with the issue of humans vs. machines. Overall, humans already can – and, interestingly, often actually do – behave ethically. On this, two of the most pressing AI questions are:
1) could we reproduce human intelligence in machines? and,
2) are human intelligence and machine intelligence fundamentally different?
Beyond all that, one of the key differences between us and AI is that we are alive and machines are not. What we might call life, AI calls a living system. By this, AI people mean a system fulfilling a handful of criteria, namely that it can:
1) maintain some sort of equilibrium;
2) pass through a life cycle and undergo metabolism;
3) grow and adapt to its environment;
4) respond to stimuli; and, perhaps most importantly,
5) reproduce and evolve.
Even more interesting is the fact that there is even a branch of AI called genetic programming that has evolution at its core – witness evolutionary approaches to the classic knapsack problem, as sketched below.
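To make the idea concrete, here is a minimal, hypothetical sketch of the evolutionary approach applied to a toy knapsack problem. It uses a simple genetic algorithm over bit strings (rather than full genetic programming over evolved programs), and all item weights, values, and parameters are invented for illustration:

```python
import random

# Toy 0/1 knapsack instance (weights, values, and capacity are made up)
WEIGHTS = [12, 7, 11, 8, 9, 6, 5, 14, 3, 10]
VALUES  = [24, 13, 23, 15, 16, 9, 7, 29, 4, 18]
CAPACITY = 40
N_ITEMS = len(WEIGHTS)

def fitness(genome):
    """Total value of the packed items; overweight genomes score zero."""
    weight = sum(w for w, g in zip(WEIGHTS, genome) if g)
    value = sum(v for v, g in zip(VALUES, genome) if g)
    return value if weight <= CAPACITY else 0

def crossover(a, b):
    """Single-point crossover: splice two parent genomes together."""
    point = random.randint(1, N_ITEMS - 1)
    return a[:point] + b[point:]

def mutate(genome, rate=0.05):
    """Flip each bit (pack/unpack an item) with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=100):
    population = [[random.randint(0, 1) for _ in range(N_ITEMS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half as parents ("survival of the fittest")
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # Refill the population with mutated offspring of random parents
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best packing:", best, "value:", fitness(best))
```

Each generation keeps the fitter half of the population and refills it with mutated crossovers – a crude form of survival of the fittest that usually finds a good, though not necessarily optimal, packing.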
Apart from all this, AI “might” – and actually already “does” – have an impact on what moral philosophy calls free will. AI people are convinced that there is no need – indeed, no place – for free will.
On that, we should remind ourselves that computers are deterministic machines that simply follow the instructions in their code. AI does not have free will – yet!
Next to this, the morality of AI raises a point that might not really concern us in the very short term. At some point, AI machines might become conscious. At that point, we may have ethical obligations toward them in how we treat AI robots. One of the moral questions might become: may we still turn them off?
Linked to all this is the distinction between “us and AI”: we have emotions and AI doesn’t. On emotions, many AI people believe there are just six basic ones: anger, disgust, fear, happiness, sadness, and surprise. Strangely, “love” does not seem to count as a human emotion for AI’s sea of white dudes.
Things get a bit more serious when, for example, Google Translate renders the phrase “he is pregnant” word for word into the rather nonsensical German “er ist schwanger”.
Perhaps such nonsense is simply because AI is unable to experience emotions like pain and suffering. From that, it might follow that – no matter how intelligent AI becomes – AI does not need any rights. We can treat AI and robots like any other machine. A kitchen mixer does not have rights.
But then, how should we treat intelligent machines? At that point, the most celebrated ethical rules of robotics and AI come into play. In 1942, Isaac Asimov proposed his famous laws of robotics.
These three laws require robots not to harm humans, to obey human orders unless doing so would harm a person, and to protect themselves unless this conflicts with the first two laws:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Next to such early attempts at formulating ethical laws for robots and computers, there are, more recently, the following ethics codes for artificial intelligence:
+ Ohio State University’s Three Laws of Responsible Robotics;
+ the EPSRC/AHRC’s Five Principles of Robotics;
+ BS 8611 Robots and Robotic Devices: Guide to the Ethical Design and Application of Robots and Robotic Systems;
+ the 23 Asilomar AI Principles; and finally,
+ the European Union’s seven key ethical requirements for trustworthy artificial intelligence.
All of them indicate that the connection between AI and morality remains particularly important in the realm of medicine, where it touches on some of the most problematic aspects of all. This leads – almost inevitably – to the morality of fairness.
A good example of what happens when it goes wrong is the UK’s Ofqual grading algorithm, under which students from poor state schools were more likely to have their grades marked down than students from rich private schools.
Even graver things have happened in the area of policing. For example, predicting future crime from historical data will only perpetuate past biases. One of the key problems is that those who use AI to learn from history are doomed to repeat it.
In fact, it is much worse than simple repetition. AI engineers might construct rather treacherous feedback loops in which AI amplifies the biases of the past, as the sketch below illustrates. Particularly on the issue of fairness in policing, there is also the danger of making a rather common mistake in machine learning: people tend to confuse correlation with causation.
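To see how such a feedback loop can arise, consider the following purely hypothetical simulation. The districts, crime rate, and patrol numbers are all invented; the point is only that a “predictive” patrol schedule which follows its own historical record ends up confirming and amplifying an initially small skew:

```python
import random

# Two districts with the SAME underlying crime rate; only the initial
# record differs. All numbers are made up for illustration.
TRUE_CRIME_RATE = 0.1
CHECKS_PER_PATROL = 50

# The historical record starts slightly skewed against district "B"
recorded = {"A": 10, "B": 12}

for day in range(1, 11):
    # "Predictive policing": send today's patrol where the record is worst
    target = max(recorded, key=recorded.get)
    # Patrolling a district means crime there gets observed and recorded,
    # while the other district's crime goes unrecorded
    recorded[target] += sum(random.random() < TRUE_CRIME_RATE
                            for _ in range(CHECKS_PER_PATROL))
    print(f"day {day:2d}: patrolled {target}, record now {recorded}")
```

Because district “B” starts with a slightly higher record, it gets patrolled, so only its crime is recorded, so it keeps getting patrolled – even though both districts are, by construction, identical.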
On the morality of fairness, one might note that the AI community – the white dudes – uses a whopping 21 different mathematical definitions of fairness. Yet fairness might not be something that can be captured by a mathematical definition at all. In reality, fairness has more to do with what the philosopher John Rawls calls Justice as Fairness.
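As a rough illustration of why a purely mathematical notion of fairness is slippery, the sketch below computes just two of those many definitions – demographic parity (equal rates of positive predictions) and equal opportunity (equal true-positive rates) – on an invented toy dataset:

```python
# Toy predictions of an imaginary risk model: (group, true_label, predicted_label).
# The data is invented purely to show how two common fairness definitions
# are computed and how they can disagree.
records = [
    ("group_1", 1, 1), ("group_1", 0, 1), ("group_1", 0, 0), ("group_1", 1, 0),
    ("group_2", 1, 1), ("group_2", 1, 1), ("group_2", 0, 0), ("group_2", 0, 0),
]

def positive_rate(group):
    """Demographic parity: share of the group predicted 'positive' at all."""
    preds = [p for g, _, p in records if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    """Equal opportunity: share of truly positive cases the model catches."""
    hits = [p for g, y, p in records if g == group and y == 1]
    return sum(hits) / len(hits)

for group in ("group_1", "group_2"):
    print(group,
          "positive rate:", positive_rate(group),
          "true positive rate:", true_positive_rate(group))
```

In this toy data, the two groups come out identical under demographic parity (both 0.5) but unequal under equal opportunity (0.5 vs 1.0) – a small reminder that different definitions can pull in different directions and, in general, cannot all be satisfied at once.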
Fairness also plays into something as seemingly simple as speech recognition. Today’s speech-recognition systems – such as those developed by Amazon, Apple, Google, IBM, and Microsoft – actually perform significantly worse for black speakers than for white speakers.
The average word error rate of the five systems was 35% for black speakers, compared with just 19% for white speakers. In other words, AI systems reflect the biases of the society in which they are built, as well as the biases of those who create the algorithms – the white dudes.
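For context, word error rate is the standard metric behind figures like those quoted above: the number of word substitutions, deletions, and insertions needed to turn the system’s transcript into the reference, divided by the reference length. Below is a minimal sketch (not the evaluation code of the study itself), using an invented pair of sentences:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) / reference
    length, computed with the standard edit-distance dynamic programme."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits turning the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1   # substitution
            dp[i][j] = min(dp[i - 1][j] + 1,              # deletion
                           dp[i][j - 1] + 1,              # insertion
                           dp[i - 1][j - 1] + cost)
    return dp[len(ref)][len(hyp)] / len(ref)

# Invented example: the recogniser drops one of five words, giving 0.2 (20%)
print(word_error_rate("please call me back tomorrow",
                      "please call me tomorrow"))
```

A result of 0.2 means one word in five was transcribed wrongly; the 35% vs 19% gap above therefore amounts to roughly one error in every three words for black speakers against one in five for white speakers.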
This might also influence one of the most important issues for those often-single white AI dudes: meeting the opposite sex via the internet. In the USA, this has become the most popular way for people to get together.
There are the mainstream apps like Bumble, Tinder, OkCupid, Happn, Her, Match, eHarmony, and Plenty of Fish. But there’s an app for every taste. Try SaladMatch if you want to meet someone who shares your taste for salad.
Or Bristlr if beards are your thing – obviously. Try GlutenFreeSingles for those with celiac disease. And Amish Dating for the select few Amish people who use a smartphone.
Beyond matchmaking, societies should expect more and more decisions to be handed over to algorithms. This matters even for the most pressing issue of global warming, where digitalization is responsible for less than 5% of all electricity use – less than 1% of the world’s total energy consumption.
At the same time, training the enormous model behind ChatGPT – GPT-3 – produced an estimated 85,000 kilograms of CO2, roughly the amount emitted by four people flying a round trip from London to Sydney in business class, which is about the same as a full row of eight economy-class passengers.
The three big cloud-computing providers are Google Cloud, Microsoft Azure, and Amazon Web Services. Google Cloud claims to have zero net carbon emissions – claims to have. Meanwhile, transportation is responsible for around a quarter of global CO2 emissions. Self-evidently, AI can be a great asset in reducing these emissions.
In the end, it is rather obvious that AI engineers cannot – today – build moral machines. The moment when AI surpasses human beings is what the field calls the AI singularity. Before that moment arrives – if it ever does – today’s AI machines cannot capture human values and cannot account for their decisions, morally or otherwise.
In his recent book – Machines Behaving Badly: The Morality of AI – one of the world’s foremost AI experts, Toby Walsh, argues that AI machines will always and only ever be machines. In other words, human-mirroring AI – despite the sensational claims of Blake Lemoine – is nowhere in sight.
Unlike us, AI machines do not have a moral compass. Yet the most pressing current moral question remains: given that corporations can’t – yet – build moral AI machines, a morally good society needs to debate which decisions ought, and which ought not, to be handed over to AI machines.