Imagine yourself in a car powered by the latest super-smart artificial intelligence (A.I.). Three pedestrians recklessly burst onto the road in front of you. Your self-driving vehicle has no time to slow down – it will either hit the pedestrians or veer off the road, most likely crashing and endangering your life. Who should the car decide to save? The pedestrians? Or should it kill three people to save you, the owner, who did nothing wrong?
The Self-Driving Dilemma, as it is known, is just one latent explosion in the minefield of ethical quandaries planted by recent advances in A.I. technology. Today’s computers mimic human brain function with increasingly unnerving accuracy, and many in the tech industry are recognising the time has come to think about how we build machines that can make moral, as well as binary, decisions.
Philosophical arguments about ‘right’, ‘wrong’, ‘good’ and ‘evil’ have rumbled on for millennia. Now that humans are playing God, creating machines that can inform, educate and kill without instruction, those arguments feel particularly pertinent. Could the ideas of three great moralists, living in a time long before man infused lifeless objects with intelligence, help us solve the A.I. conundrum? We look to the work of three drastically different thinkers – a philosopher (John Stuart Mill), an anthropologist (Franz Boas), and a popular children’s author (C.S. Lewis) – for moral guidance.
The subjectivity paradox
Transparency, security, inclusivity and a respect for everyone – that’s what Microsoft CEO Satya Nadella wants us to teach the next generation of artificial intelligence. All are noble objectives. And in his thinking about ethically tempering the intelligences he and his company create, he shares much in common with the people at Google, Apple and other top tech companies around the world.
Spurred on by the libertarian spirit of California’s techno-utopian pioneers, today’s technologists are overwhelmingly in favour of a subjective approach to morality, as Nadella’s open-ended A.I. principles suggest. But as we see in the case of the self-driving dilemma, broad-brush philosophy can conceal many a paradox. Does protecting your security mean infringing the security of others? Should A.I. be inclusive and respectful to those who would use it to cause harm?
These are exactly the kinds of moral questions John Stuart Mill wanted to avoid asking. In his 1861 work Utilitarianism, Mill sets out the intellectual foundations for a rigid, standardized version of morality. His idea is that, in life, there is no conflict between what is just and what is morally right: doing the right thing means doing whatever it is that increases the general sum of happiness in the world.
For this system to work, everybody must subscribe to the same version of ‘happiness’ and determine a shared understanding of morality accordingly. If Mill were designing your self-driving car, he would look at the moral problem, weigh up who should be saved by determining which option produces the greatest good, and then design the car to produce that outcome – after that point, nobody else’s opinion on the matter would count, not even yours, even though you’re the one buying his car.
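Mill’s calculus can be caricatured in a few lines of code. The sketch below is purely illustrative – the utility scores are invented for the example, since utilitarianism only says “maximise the total good”, not what the numbers should be:

```python
# A toy sketch of Mill's utilitarian calculus applied to the
# self-driving dilemma. Each option maps to a list of hypothetical
# utility scores, one per person affected; the scores themselves
# are invented for illustration.

def utilitarian_choice(options):
    """Pick the option with the greatest total utility."""
    return max(options, key=lambda name: sum(options[name]))

options = {
    "swerve":   [-10],            # one passenger endangered
    "continue": [-10, -10, -10],  # three pedestrians endangered
}

print(utilitarian_choice(options))  # "swerve": less total harm
```

The paradox the article describes lives entirely outside this function: the code happily maximises whatever numbers it is given, but choosing those numbers is precisely the moral question.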
Nearly 1.3 million people die in road crashes each year. Many pioneers of the self-driving revolution are proud utilitarians, and see their intervention as a world-changing idea that will stop most of these annual tragedies from occurring. But the subjective modus operandi of the tech evangelists brings them into direct conflict with Mill’s ideas. By reasoning, as Nadella and many others do, that technology must be inclusive and respectful to everyone, today’s techies are undermining the key prerequisite of utilitarianism: that there can only be one, universal, morality.
When he wasn’t writing epic children’s stories about lions, witches and wardrobes, C.S. Lewis was busy opposing subjective ethical systems. In The Abolition of Man, Lewis condemns a contemporary trend for teaching children that values are subjective, stressing instead that, for human society to flourish, people must understand that morality is, in fact, objective, and that a universal moral law exists.
Closely echoing Mill’s philosophy, Lewis argues that changing attitudes and fashions in morality conceal the fact that there does exist a central – Christian – system of values: one whose destruction would be tantamount to destroying humanity itself. If he were alive today, Lewis would surely be arguing for the infusion of such a value system into A.I. He was no Luddite; the novelist understood there was a link between technology and human prosperity. But he harboured grave concerns that some technologies would corrode society and the natural world, with man surrendering “object after object, and finally himself… in return for power.”
Lewis’s position can be challenged, though. The problem with universal morality is that it usually leads to discrimination. If one set of morals is universal then no other can be considered; anything other than the ‘real thing’ is rejected and viewed as inferior.
Discrimination is already an issue in tech. When Apple’s voice-activated personal assistant Siri was first released in 2011, a public outcry condemned the sexist connotations of the feature’s subservient female voice. The perennial consumer backlash against new technology is also a clear illustration of how difficult it would be to create a standardized definition of morality that every potential purchaser of A.I. tech would be willing to sign up to.
Sex or poetry?
When discussing the meaning of happiness, Mill talks about the pursuit of pleasures. But to Mill, not all pleasures are equal: it is better, he writes, for us to pursue ‘higher pleasures’ such as poetry and knowledge over ‘lower pleasures’ like food and sex.
But who is Mill to decide what constitutes a higher or a lower pleasure? His definition of a higher pleasure was simply something that people are willing to put up with some discomfort to obtain. Today there are plenty of people who endure considerable inconvenience in pursuit of things that Mill himself deemed to be lower pleasures.
Tech’s dominant philosophy of subjective morality is messy, but cultural relativists such as Franz Boas have long illustrated how competing ideas and rules can be preferable to a universal axiom. Boas was an anthropologist rather than a philosopher, but his key idea can be applied to any scenario in which merit can be judged. In Race, Language and Culture, he argues that because we always look at the world through the lens of our own culture, we can never objectively define one culture as superior to another.
The parallels between Boas’s idea and morality (‘moral relativism’ is a school of thought in its own right, and has been debated for thousands of years) are obvious. Most people judge the ethics of an action or state through their own moral value system; so how can any one of us define a system of utilitarian ethics that everyone can believe in?
Safety in numbers
Technologists think they’ve struck on a resolution to these problems. It’s the same instrument they use to whistle most other tunes they sing – data. Big data.
By pooling the opinions of lots of different people, scientists believe they can arrive at a solution that is both utilitarian and representative of multiple moral constitutions. In effect, they are crowdsourcing morality. Microsoft CEO Satya Nadella’s approach to the problem is similar, calling for more consumers and stakeholders to get involved in the process of A.I. design.
But can hive mind morality really resolve the paradox? The general public is still confused. Most of us love the idea of utilitarian machines in principle. But when it really comes down to it, we want technology that protects us at all costs. Look back to the example of the self-driving car: as someone out walking the streets, would you be happy for people to whizz around in autonomous cars that save the life of a single passenger over pedestrians like yourself and the expectant mother beside you?
Decisions that seem moral in aggregation can in practice reveal themselves to be the opposite. Even in a perfect crowdsourced world, few people would be comfortable with the idea of decisions of life and death being made by billions of people they do not and could not ever know.
It’s a peculiar pickle. In a few short years, spectacular progress in the field of deep learning has opened the door to artificial intelligence so smart it could provide answers to the biggest existential questions facing humanity; everything from climate change and resource scarcity to antibiotic resistance and worldwide pandemics. At the same time, the moral issues that have been with us for millennia remain unsolved. If we fail to overcome them, if we fail to teach the machines morality, A.I. will be nowhere near as intelligent as we dream it to be.