“Moral robots” and that messy human factor

Theory of Knowledge banner

In ethics, it’s the dilemmas that grab the headlines. They crash into the news for the same reasons most stories do: they stand out from the norm of people muddling along in broad accord as they judge right from wrong; they sometimes pit groups of people against each other in noisy conflict; and they often have significant implications for people’s lives. Really, wouldn’t it be so much better if all dilemmas could be resolved without the conflict? Couldn’t we eliminate the messy human factor in ethics by using computer processing to help in our judgments – and wouldn’t that improve ethics as an area of knowledge? Wouldn’t we be so much better off under the guidance of MORAL ROBOTS? Well… maybe. But… no. Well, no, maybe not!

Why not trust the robot?

With amusement, I read an article this week by a team of psychologists who have been considering this very question. (“Why are we reluctant to trust robots?”)  Their first conclusion isn’t likely to surprise most of us: that people don’t trust machines to make moral decisions, even if those machines have been fed good information and are superior to humans in being free of fatigue, cognitive biases, and assorted hostilities. We’re just not going to trust a computer in matters of morality.

Their second conclusion, though, is the one that catches my interest, and it has implications for how we regard different systems of ethical thought: people don’t entirely trust other people if they believe those people make their moral decisions purely on the basis of calculation. Yet a major ethical system, known as utilitarianism or consequentialism, guides moral decision-making in exactly this way, by evaluating the projected outcomes of a choice for benefit or harm. As the authors say,

In a paper published last year in the Journal of Experimental Psychology: General, we presented evidence that consequentialism might be a liability when it comes to social relationships. In other words, being a consequentialist makes you less popular.

Nevertheless, people using a consequentialist system were still considered socially acceptable if they acknowledged feeling a conflict. It seems that we actually like that messy human factor!

So much for moral robots! They’d never win a popularity contest! As the authors conclude,

it may not be enough for us that machines make the right judgments – even the ideal judgments. We want those judgments to be made as a result of the same psychological processes that cause us to make them: namely, the emotional reactions and intuitive responses that have evolved to make us distinctly moral creatures.

Normative and experimental ethics

Their entire article is relevant to ethics as an area of knowledge in TOK. It identifies central features of systems of normative ethics: consequentialism and its major alternative, deontology, which guides choices not by evaluating outcomes but by following a set of ethical principles. Yet the authors’ own contribution is to add a piece to ongoing research in experimental ethics. This field does not offer normative arguments over what people should do, ethically, in situations of choice. Instead, it overlaps firmly with psychology as an area of knowledge and with the cognitive sciences, researching how people actually do make their moral decisions.

The moral robot: a class activity?

What ends up appealing to me most, as a teacher always looking for engaging class material, is the possibility of bringing an ethical robot, metaphorically, into class. (It is, after all, an era in which we’re beginning to trust artificial intelligence to do practical things like drive our cars, and are even debating the role of AI for decisions in warfare.) I’d give students the following task, in small groups with a time limit:

Your mission is to provide a robotics design team with instructions on the most important features of a MORAL ROBOT, which will always make the right ethical decisions. What rules should govern its decisions? What kind of information should the robot be given, and to what further information should it have access?  Your group has 30 minutes to work out what instructions to give the designers and to list any problems you face in deciding what they should be.

I’d ask the small groups to share their thoughts in a full class discussion and expect numerous features of ethical systems to arise, including:

  • conflicting possible systems of rules, depending on whether they favour weighing outcomes (consequentialism) or following ethical principles (deontology), a contrast sketched in toy form just after this list;
  • the question, if the robot is given rules based on ethical principles, of which principles, and whose;
  • difficulties of “override” rules in cases of dilemma, and whether to allow exceptions;
  • uncertainties of predicting the future consequences of choices made in the present;
  • difficulties, in any case, of assigning particular consequences greater or lesser relative weight in the harm/benefit scales;
  • difficulties posed by uncertain information, and further issues of the ethics of access to information.
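
Purely as an illustration for the design brief, and not anything the article’s authors propose, here is a minimal Python sketch of the first contrast in the list above: a consequentialist chooser that weighs projected outcomes against a deontological chooser that filters options by principles. Every option, number and principle in it is invented for the example.

from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    projected_benefit: float                      # how much good this choice is expected to do
    projected_harm: float                         # how much harm it is expected to cause
    violates: set = field(default_factory=set)    # principles this choice would break

def consequentialist_choice(options):
    # Weigh outcomes: pick the option with the best projected net benefit.
    return max(options, key=lambda o: o.projected_benefit - o.projected_harm)

def deontological_choice(options, principles):
    # Follow principles: discard any option that breaks one, regardless of outcome.
    permitted = [o for o in options if not (o.violates & principles)]
    if not permitted:
        raise ValueError("Dilemma: every option violates a principle; an 'override' rule is needed.")
    return permitted[0]

# Invented example: the two rule systems recommend different actions.
options = [
    Option("divert the trolley", projected_benefit=5, projected_harm=1, violates={"do not kill"}),
    Option("do nothing", projected_benefit=0, projected_harm=5),
]
print(consequentialist_choice(options).name)                # "divert the trolley"
print(deontological_choice(options, {"do not kill"}).name)  # "do nothing"

Even this toy version runs straight into the problems the list raises: the benefit and harm numbers are invented, the principles have to come from somewhere, and a genuine dilemma forces exactly the kind of “override” decision the code above simply refuses to make.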

And then, I wouldn’t predict an enthusiastically positive response to the following question:

  • Would you trust a moral robot to guide your own ethical decisions and the ethical decisions of your society? Why or why not?

Conclusion

Last week’s post in this blog traced the conscientious decision-making of an international humanitarian organization, trying to reach the best-founded factual conclusions about what was happening in the world, and then trying to make the best-founded decisions about what ethical action to take on the basis of its knowledge. That organization models what we teach in TOK: in a real-life situation, with all of the human variables and possible human consequences of choices, we have to try to be informed, critical and thoughtful, to the best of our ability in a complex world.

It would be so much easier if such complex ethical decisions could be computed with clarity and common agreement – if we could treat ethics as though it were mathematics! Yet, personally, I’m reassured by the findings of the team of psychologists whose article I feature this week. I’m glad that I’m not alone in appreciating and valuing the very thing that the creation of a moral robot would aim to eliminate – that messy, messy human factor!


References

Jim Everett, David Pizarro and Molly Crockett, “Why are we reluctant to trust robots?” Psychology, The Guardian. April 24, 2017. https://www.theguardian.com/science/head-quarters/2017/apr/24/why-are-we-reluctant-to-trust-robots

Stuart Russell, “Take a stand on AI weapons”, in “Robotics: Ethics of artificial intelligence”, Nature. May 27, 2015. https://www.nature.com/news/robotics-ethics-of-artificial-intelligence-1.17611#/hauert

image, creative commons: https://pixabay.com/en/girl-woman-face-eyes-close-up-320262/