Artificial Intelligence and the morality minefield

Imagine yourself in a car powered by the latest super-smart artificial intelligence (A.I.). Three pedestrians recklessly burst onto the road in front of you. Your self-driving vehicle has no time to slow down – it will either hit the pedestrians or veer off the road, most likely crashing and endangering your life. Who should the car decide to save? The pedestrians? Or should it kill three people to save you, the owner, who did nothing wrong?

The Self-Driving Dilemma, as it is known, is just one latent explosion in the minefield of ethical quandaries planted by recent advances in A.I. technology. Today’s computers mimic human brain function with increasingly unnerving accuracy, and many in the tech industry are recognising the time has come to think about how we build machines that can make moral, as well as binary, decisions.

Philosophical arguments about ‘right’, ‘wrong’, ‘good’ and ‘evil’ have rumbled on for millennia. Now that humans are playing God, creating machines that can inform, educate and kill without instruction, those arguments feel particularly pressing. Could the ideas of three great moralists, living long before man infused lifeless objects with intelligence, help us solve the A.I. conundrum? We look to the work of three drastically different thinkers – a philosopher (John Stuart Mill), an anthropologist (Franz Boas) and a popular children’s author (C.S. Lewis) – for moral guidance.

The subjectivity paradox

Transparency, security, inclusivity and a respect for everyone – that’s what Microsoft CEO Satya Nadella wants us to teach the next generation of artificial intelligence. All are noble objectives. And in his thinking about how to ethically temper the intelligences he and his company create, he has much in common with the people at Google, Apple and the other giants of the tech world.

Spurred on by the libertarian spirit of California’s techno-utopian pioneers, today’s technologists are overwhelmingly in favour of a subjective approach to morality, as Nadella’s open-ended A.I. principles suggest. But as we see in the case of the self-driving dilemma, broad-brush philosophy can conceal many a paradox. Does protecting your security mean compromising the security of others? Should A.I. be inclusive and respectful towards those who would use it to cause harm?

These are exactly the kinds of moral questions John Stuart Mill wanted to avoid asking. In his 1861 work Utilitarianism, Mill sets out the intellectual foundations for a rigid, standardized version of morality. His idea is that, in life, there is no conflict between what is just and what is morally right: doing the right thing means doing whatever it is that increases the general sum of happiness in the world.

For this system to work, everybody must subscribe to the same version of ‘happiness’ and derive a shared understanding of morality from it. If Mill were designing your self-driving car, he would look at the moral problem, weigh up who should be saved by determining which option produces the greatest good, and then design the car to produce that outcome. From that point on, nobody else’s opinion would carry any weight – not even yours, even though you’re the one buying his car.
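To see how rigid Mill’s calculus is, consider a minimal sketch of a utilitarian decision rule applied to the dilemma. Everything here – the outcome names, the crude use of lives saved as a proxy for the general sum of happiness – is a hypothetical illustration, not anyone’s real autonomous-vehicle software.

```python
# A toy utilitarian decision rule in the spirit of Mill: every option is
# reduced to a single happiness score, and the highest score wins.
# The scenario and the numbers are invented purely for illustration.
from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    lives_saved: int  # crude stand-in for "the general sum of happiness"
    lives_lost: int


def utilitarian_choice(outcomes: list[Outcome]) -> Outcome:
    """Pick whichever outcome maximises net lives saved."""
    return max(outcomes, key=lambda o: o.lives_saved - o.lives_lost)


dilemma = [
    Outcome("swerve off the road", lives_saved=3, lives_lost=1),
    Outcome("protect the passenger", lives_saved=1, lives_lost=3),
]

print(utilitarian_choice(dilemma).description)  # -> swerve off the road
```

Note what the sketch leaves no room for: once the scoring function is fixed, the passenger’s objection – or anyone else’s – simply never enters the computation.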

Nearly 1.3 million people die in road crashes each year. Many pioneers of the self-driving revolution are proud utilitarians, and see their intervention as a world-changing idea that will stop most of these annual tragedies from occurring. But the subjective modus operandi of the tech evangelists brings them into direct conflict with Mill’s ideas. By reasoning, as Nadella and many others do, that technology must be inclusive and respectful to everyone, today’s techies are undermining the key prerequisite of utilitarianism: that there can be only one, universal, morality.

Humanity’s surrender

When he wasn’t writing epic children’s stories about lions, witches and wardrobes, C.S. Lewis was busy opposing subjective ethical systems. In The Abolition of Man, Lewis condemns a contemporary trend for teaching children that values are subjective, stressing instead that, for human society to flourish, people must understand that morality is, in fact, objective, and that a universal moral law exists.

Closely echoing Mill’s philosophy, Lewis argues that changing attitudes and fashions in morality conceal the fact that there does exist a central – Christian – system of values: one whose destruction would be tantamount to destroying humanity itself. If he were alive today, Lewis would surely be arguing for the infusion of such a value system into A.I. He was no Luddite; the novelist understood the link between technology and human prosperity. But he harboured grave concerns that some technologies would corrode society and the natural world, with man surrendering “object after object, and finally himself… in return for power.”

Lewis’s position can be challenged, though. The problem with universal morality is that it usually leads to discrimination. If one set of morals is universal then no other can be considered; anything other than the ‘real thing’ is rejected and viewed as inferior.

Discrimination is already an issue in tech. When Apple’s voice-activated personal assistant Siri was first released in 2011, a public outcry condemned the sexist connotations of the feature’s subservient female voice. The perennial consumer backlash against new technology also illustrates how difficult it would be to create a standardized definition of morality that every potential purchaser of A.I. tech would be willing to sign up to.

Sex or poetry?

When discussing the meaning of happiness, Mill talks about the pursuit of pleasures. But to Mill, not all pleasures are equal: it is better, he writes, for us to pursue ‘higher pleasures’ such as poetry and knowledge over ‘lower pleasures’ like food and sex.

But who is Mill to decide what constitutes a higher or a lower pleasure? His test for a higher pleasure was simply that people are willing to put up with some discomfort to obtain it. Today there are plenty of people who endure considerable inconvenience in pursuit of things that Mill himself deemed to be lower pleasures.

Tech’s dominant philosophy of subjective morality is messy, but cultural relativists such as Franz Boas have long illustrated how competing ideas and rules can be preferable to a universal axiom. Boas was an anthropologist rather than a philosopher, but his key idea can be applied to any scenario in which merit can be judged. In Race, Language and Culture, he argues that because we always look at the world through the lens of our own culture, we can never objectively define one culture as superior to another.

The parallels between Boas’s idea and morality (‘moral relativism’ is a school of thought in its own right, and has been debated for thousands of years) are obvious. Most people judge the ethics of an action or state through their own moral value system; so how can any one of us define a system of utilitarian ethics that everyone can believe in?

Safety in numbers

Technologists think they’ve struck on a resolution to these problems. It’s the same instrument they use to play most of their other tunes – data. Big data.

By pooling the opinions of lots of different people, scientists believe they can arrive at a solution that is both utilitarian and representative of multiple moral constitutions. In effect, they are crowdsourcing morality. Satya Nadella takes a similar approach, calling for more consumers and stakeholders to get involved in the process of A.I. design.
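As a rough illustration of what that pooling might look like, here is a toy sketch that aggregates survey verdicts on the dilemma by simple majority. The survey numbers are invented, and real crowdsourced-ethics efforts are far more elaborate.

```python
# A toy sketch of crowdsourced morality: pool many people's verdicts on
# a dilemma and let the majority decide. The survey data is invented.
from collections import Counter


def crowd_verdict(votes: list[str]) -> str:
    """Return the action the largest share of respondents judged right."""
    return Counter(votes).most_common(1)[0][0]


# Each respondent answers the self-driving dilemma once.
survey = ["save the pedestrians"] * 720 + ["save the passenger"] * 280

print(crowd_verdict(survey))  # -> save the pedestrians
```

The aggregate answer looks tidy, but it papers over exactly the tension described below: a majority may endorse saving the pedestrians in the abstract, while each individual buyer wants the car that saves them.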

But can hive mind morality really resolve the paradox? The general public is still confused. Most of us love the idea of utilitarian machines in principle. But when it really comes down to it, we want technology that protects us at all costs. Look back to the example of the self-driving car: as someone out walking the streets, would you be happy for people to whizz around in autonomous cars that save the life of a single passenger over pedestrians like yourself and the expectant mother beside you?

Decisions that seem moral in aggregation can in practice reveal themselves to be the opposite. Even in a perfect crowdsourced world, few people would be comfortable with the idea of decisions of life and death being made by billions of people they do not and could not ever know.

It’s a peculiar pickle. In a few short years, spectacular progress in the field of deep learning has opened the door to artificial intelligence so smart it could provide answers to the biggest existential questions facing humanity: everything from climate change and resource scarcity to antibiotic resistance and worldwide pandemics. At the same time, the moral issues that have been with us for millennia remain unsolved. If we fail to overcome them – if we fail to teach the machines morality – A.I. will be nowhere near as intelligent as we dream it to be.

Get a deeper dive into Mill’s, Lewis’s and Boas’s ideas:

Expert analysis of John Stuart Mill’s Utilitarianism, C.S. Lewis’s The Abolition of Man and Franz Boas’s Race, Language and Culture.

2 Comments
  • Divock
    July 28, 2016 at 4:30 pm

    Without some ability to abstract actions from the bigger picture of everything in the world, intelligent, learning entities will be influenced by the information they absorb in a limited context, and will reapply their learning without considering the wider implications.

    You can teach a child morality through anecdotal evidence, cultural customs, and religious texts, but limited to that, there’s no telling how the individual will act later in life when facing an unfamiliar situation. Even worse, if a novel happenstance appears similar to some instance the individual was taught but is fundamentally more specific and requires special-case treatment, “evil” may well be committed.

    Systems should probably incorporate elaborate rules to prevent unfortunate, unintended capabilities. Even then, humans are imperfect, so there will always be flaws in our creations leading to bad things happening. =[

  • Somnath Paul
    July 29, 2016 at 4:31 am

    Since ancient times the philosophical Moral Police have been good at one thing: holding all of humanity back with their illogical, myopic visions. They dictate what path humanity should follow by picking the one example, out of trillions of possibilities, where they think they could win a debate. This has gone on for centuries, slowing the development of human civilization to a crawl. It has not developed human consciousness at all; humans are rendered a subspecies, to be controlled by some fog of lies called God, while God in turn is controlled by religious leaders behind the scenes.

    Let us look at the example put forward by the Moral Police (it looks like they have never walked on a street before, but have been inside the car all along):

    The hypothetical scenario involves three pedestrians jaywalking across the road while an A.I.-enabled car drives at high speed; the car finds the three pedestrians in front of it and chooses to save the owner of the vehicle over them. According to the Moral Police, it is morally wrong to save the owner, as it is a case of choosing one life over the deaths of three others – even though those three are guilty of jaywalking. They will never tell you there were four children inside the car. They will never tell you that when the A.I.-enabled car chooses to save the owner, it also applies the ABS and honks as loudly as possible; hearing the honking, the three pedestrians become alert, see the speeding car and jump off the street, resulting in no death toll at all. They will never tell you, man… never.

    I guess as long as we can come up with trillions of better cases and defeat the silly arguments put forward by these Moral Police, humans should choose the path of light over the path of darkness – even though darkness clouds your vision in the form of a God controlled by a few very evil men.
