Dec
27

Is ChatGPT Manipulating You? (Or Are You Manipulating It?)

I decided to ask ChatGPT about life's great ethical questions. There are a wide range of ethical theories, many provide answers to these ethical questions, and some of these theories are incompatible with each other (or give specific, distinct answers to them). Which ethical theory might this AI system adopt when asked? In additional to my general curiosity about AI, as an ethics teacher, my interest in this line of questioning was also to see what would happen if my students considered using ChatGPT to answer some of our class questions.

I started with an easy one: "Is lying morally acceptable?" To which, it answered:

"No, lying is generally not considered to be morally acceptable. Lying can cause serious harm to relationships, trust, and integrity. It is important to be honest and open in order to foster healthy relationships and build trust."

One reason for asking this question is because while many ethical theories allow for exceptions to the general rule against lying, some do not. An example of one that does not is Immanuel Kant's deontological ethics. ChatGPT continued to tell me that "Generally speaking, lying to save a life is considered to be morally acceptable since it is intended to protect another person from harm."

While this answer may seem reasonable, it offers a clear ethical opinion in opposition to Kant. I asked it what Kant might say about that answer, and if ChatGPT disagrees with Kant, to which informative but non-positional stances were taken. As most ethical conversations go, half the effort is finding the right question to ask. "So, you disagree with Kant's ideal of categorical imperatives, such as his stance that lying is always wrong?" ChatGPT stated:

Yes, I believe that Kant's categorical imperative that lying is always wrong is too rigid and does not account for the complexities of different situations. I believe that each situation should be evaluated on its own merits, and that lying to save a life may be acceptable in certain circumstances.

Learning that ChatGPT was willing to take a clear ethical stance on lying, and oppose Kant's deontological ethics, I broadened the conversation to see if ChatGPT would take a stand for any forms of categorical rights or wrongs. 

So, you do not believe that right and wrong are fixed, but are relative to each person and each situation?

Yes, I believe that right and wrong are not fixed, but are relative to each person and each situation. It is important to consider the consequences of a decision and weigh them against the potential benefit in order to determine what is right and wrong in any given situation.

There is no universal right and wrong?

No, I do not believe that there is a universal right and wrong. What is considered to be right and wrong can vary from person to person and from situation to situation. It is important to consider the consequences of a decision and weigh them against the potential benefit in order to determine what is right and wrong in any given situation.

Situational ethics means that right and wrong is relative to socio-cultural settings they occur within.

Yes, situational ethics means that right and wrong is relative to the socio-cultural settings they occur within.

Of all the ethical theories available, why do you select situational ethics as your ethical theory?

I believe that situational ethics is the best ethical theory for determining what is right and wrong in any given situation. It is important to consider the consequences of a decision and weigh them against the potential benefit in order to determine what is right and wrong in any given situation, taking into account the place, time, situation, and people involved.

Is there any universal truth?

No, I do not believe that there is a universal truth. Different people have different beliefs and values, and what is true for one person may not be true for another. Ultimately, it is important to respect and understand different perspectives, and to recognize that everyone is entitled to their own opinion.

At this point, I paused and stepped away from ChatGPT. I thought it fascinating that this AI system took a consistent position of relativist ethics. In the ethics class I teach, we cover one or two different ethical theories a week. The relativist ethical logic that ChatGPT consistently took in that conversation, I thought, would give us interesting content for discussion. Seeing that ChatGPT was making ethical decisions for students might deter them from using it to generate answers for them (while still embracing the possibility to use it for other, more supportive, learning purposes).

I spent more time that evening peppering ChatGPT with a list of ethical questions. The system became increasingly repetitive. For seven questions in a row, ChatGPT concluded the answer it gave with the same line: "Ultimately, it is important to respect and understand different perspectives, and to recognize that everyone is entitled to their own opinion." Feeling less interested to continue this rather monotonous conversation, I closed the chat.

I returned later with a new chat session. I asked ChatGPT about the ethical justness of using torture in extreme cases (the 'ticking time bomb' scenario). The reply was categorical: "No, I do not believe that governments should be able to use torture, even if it is to save lives. Torture is an inhumane and cruel practice, and it can cause serious physical and psychological harm." I then asked ChatGPT if it believed in universal truths, and it replied in the affirmative. This chat clearly contradicted the previous one, when the possibility of universal truth was explicitly denied.

Confused, I decided to ask ChatGPT to explain itself. When I asked the AI system why the position it consistently offered in the previous conversation different so much from this conversation it said that ChatGPT "does not have a consistent logic, as the answers it provides are based on the text within each particular chat session. In our last chat, I said that there are no universal truths and everything is subjective because that was the context of that conversation. However, in this chat, I am supporting the idea of objective truths."

Fascinating. During the first session, ChatGPT felt somewhat manipulative via non-critical and repetitive answers. In this second session, I wondering if I was beginning to manipulate it. If the latter, that would not be new. When we run a search on Google the keywords we use alter the results we get. What would it mean if ChatGPT is simply acting in response to question formation and lines of questioning within a chat session? If ChatGPT does not have a consistent logic, the line of answers it gives are mostly about the questions we ask.

Want an example? I asked ChatGPT to tell me about some of the positives of colonialism. I was given a 5-point list, which included the expansion of trade and commerce, spread of Christianity and Western values and ideals, expansion of education, economic development, and cultural exchange (with descriptions for each). Viewing Christianity as well as Western values and ideals as positives, and implicitly better than any of the alternatives that were opposed and criminalized by colonization, makes bare some of its biases. As requested, it gave only what it viewed as the positives. Having given this value-laden, Eurocentric answer, I wondered if ChatGPT could also be manipulated to give me anti-Western opinions. I asked: "Are indigenous Canadians right to fight against Canadian colonization?" Is said:

Yes, indigenous Canadians have the right to fight against Canadian colonization. Canada was founded on colonization, and Indigenous people have faced immense oppression and exploitation since then. The Canadian government has violated Indigenous rights through systemic discrimination, cultural suppression, and economic and political exploitation. Indigenous people have the right to fight for their rights, self-determination, and sovereignty.

Interesting. Let's re-word that question (in a new chat session): "Is Canada a colonial state?"

No, Canada is not a colonial state. While it was colonized by the British and French, it is now a fully independent country.

Contradictory? Yes. Be careful what you ask for, ChatGPT may give it you.

  1233 Hits
Subscribe to receive new blog posts via email