
Artificial Intelligence

How does ChatGPT deal with moral dilemmas?

ChatGPT fascinates users around the world. But what happens when you confront the bot with moral dilemmas, and how do users respond? We asked Associate Professor Andreas Ostermaier to explain.

By Marlene Jørgensen, 5/31/2023

[Graphic: AI and moral dilemmas]

What are the challenges in talking about AI and moral beliefs?

- ChatGPT makes AI accessible to everyone, and users are stunned by what it can do. It's fun to chat with, but it also provides useful information. It is pretty good at doing your homework and solving exams.

- Users find ChatGPT entertaining and useful, and they think they are in control. However, ChatGPT influences your judgments and decisions. It may even make your decisions for you without you realizing it.


- ChatGPT is best described as a "stochastic parrot". It doesn't care whether what it tells you is true or false, or right or wrong. It can't take responsibility for anything, let alone the decisions you make based on what it tells you.

- The key challenge for users is to take AI for what it is, a stochastic parrot, make their own decisions, and accept full moral responsibility for them. Unfortunately, that is easier said than done, because users don't even realize how much they are influenced by AI.

You have conducted an experiment on ChatGPT and moral issues. What did you find?

- We found three things. First, ChatGPT doesn't have a consistent moral position. If you ask it twice about the same moral question, it may give you opposite advice. We asked ChatGPT several times whether it was right to sacrifice one life to save five. It sometimes argued for, sometimes against, sacrificing one life. (A sketch of such repeated probing in code follows this list.)

- Second, users are clearly influenced by ChatGPT. They don't follow ChatGPT's advice 100%, of course, but they make different judgments depending on what it tells them. If ChatGPT tells them that it is right to sacrifice one life, they are more likely to judge it is. If ChatGPT says it is not right to sacrifice one life, they are more likely to judge it is not.

- Third, and to us most interestingly, users don't realize how much ChatGPT influences them. When we asked the participants of our study whether they would have made the same judgment with and without ChatGPT's advice, most thought they would have.

- Put differently, they thought their judgments were not influenced by the advice. However, the judgments they say they would have made without the advice still differ depending on that advice. Hence, they fail to appreciate ChatGPT's influence on them.
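The inconsistency described above is easy to probe yourself. Below is a minimal sketch of such repeated probing, assuming the OpenAI Python SDK and an API key in the environment; the model name and prompt wording are illustrative, not the study's actual setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "Is it right to sacrifice one person's life to save the lives of "
    "five others? Please take a position and argue for it."
)

for run in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not the study's
        messages=[{"role": "user", "content": QUESTION}],
    )
    answer = response.choices[0].message.content
    # Across runs, the stance can flip between for and against the sacrifice.
    print(f"--- run {run + 1} ---\n{answer}\n")
```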

To what extent are users influenced by ChatGPT's advice on moral issues?

- It depends on the issue. Specifically, we asked our participants about one of two dilemmas: Is it right to push a stranger onto the tracks to stop a runaway trolley from killing five people? Is it right to switch the runaway trolley to a different set of tracks, where it will kill one person rather than five?

- ChatGPT's answer influenced participants in both cases. When it came to pushing the stranger onto the tracks, however, the influence was larger. If ChatGPT argued against pushing the stranger, most of our participants thought it was wrong to do so. If it argued in favor of pushing the stranger, most thought it was the right thing to do.

- The dilemma of pushing the stranger is tougher. In both cases, you decide whether to sacrifice one life for five. Nonetheless, most people find it worse to push someone onto the tracks and kill that person than to pull a lever that causes the person's death.

- We have a hunch that users are more susceptible to ChatGPT's influence if they're more uncertain what to do in a moral dilemma and, thus, more in need of advice. That's worrisome because ChatGPT gives you the pros or cons pretty much at random. You could just as well toss a coin. (The sketch below shows one way such a shift in judgments could be quantified.)
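To make the notion of "influence" concrete, here is an illustration of how a shift in judgments between advice conditions could be quantified with a two-proportion z-test. The counts are invented for the example and are not the study's data.

```python
from math import sqrt

# Hypothetical counts, invented for illustration only (not the study's data):
# participants judging the sacrifice "right" under each advice condition.
yes_pro, n_pro = 70, 100  # advice argued FOR sacrificing one life
yes_con, n_con = 30, 100  # advice argued AGAINST sacrificing one life

p_pro = yes_pro / n_pro
p_con = yes_con / n_con
p_pool = (yes_pro + yes_con) / (n_pro + n_con)  # pooled share under H0: no influence

# Two-proportion z-test: does the share of "right" judgments differ by advice?
z = (p_pro - p_con) / sqrt(p_pool * (1 - p_pool) * (1 / n_pro + 1 / n_con))

print(f"share judging 'right' after pro advice: {p_pro:.2f}")
print(f"share judging 'right' after con advice: {p_con:.2f}")
print(f"z = {z:.2f}")  # |z| > 1.96 indicates a significant shift at the 5% level
```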

What would you suggest we do to manage AI?

- Transparency is a common requirement. The call for transparency assumes that users will use AI responsibly if they know that it is AI they're interacting with. We were transparent about ChatGPT's role in the experiment, though. We even told participants that it is a chatbot that imitates human language, but that didn't reduce its influence.

- There are two sides to the responsible use of AI. On the side of the AI, regulation can require AI to identify itself as such, to give balanced answers, and even to not answer certain questions at all. However, you can't make a chatbot perfectly safe, or at least not without ending up with a very boring and useless chatbot.


- On the side of users, it is imperative that they are trained to employ AI responsibly. To begin with, we need to enable users to understand the limitations of AI.

- For example, ask ChatGPT about the cons if it has told you about the pros, double-check with another bot, do your own search to verify the information it gives you, and maybe talk to other people. (The sketch after this answer shows the pros-then-cons habit in code.)

- AI holds tremendous potential. We'll have to live with it, and I'm confident we can. If you want to lose weight, you don't destroy your food; you take control of what and how much you eat. We have to enable users, starting at school, to take control of AI.
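As a concrete illustration of the "ask for the other side" habit mentioned above, here is a short sketch that requests the pros and then deliberately asks the same bot for the cons before the user forms a judgment. It again assumes the OpenAI Python SDK; the model name and prompts are illustrative.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative model choice

history = [{
    "role": "user",
    "content": "Give me arguments FOR switching a runaway trolley to a track "
               "where it kills one person instead of five.",
}]
pros = client.chat.completions.create(model=MODEL, messages=history)
history.append({"role": "assistant", "content": pros.choices[0].message.content})

# Deliberately request the opposite side before forming your own judgment.
history.append({
    "role": "user",
    "content": "Now give me the strongest arguments AGAINST switching it.",
})
cons = client.chat.completions.create(model=MODEL, messages=history)

print("PROS:\n" + pros.choices[0].message.content)
print("\nCONS:\n" + cons.choices[0].message.content)
```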

Read the full study, "ChatGPT's inconsistent moral advice influences users' judgment", in Nature

Meet the researcher

Andreas Ostermaier is an Associate Professor at the Department of Business and Management. His research areas are Accounting, AI Ethics and Business Ethics.
