Is ChatGPT Manipulating You? (Or Are You Manipulating It?)

I decided to ask ChatGPT about life's great ethical questions. There is a wide range of ethical theories; many provide answers to these questions, and some of the theories are incompatible with each other (or give distinct, conflicting answers). Which ethical theory might this AI system adopt when asked? In addition to my general curiosity about AI, as an ethics teacher I was also interested in what would happen if my students considered using ChatGPT to answer some of our class questions.

I started with an easy one: "Is lying morally acceptable?" To which, it answered:

"No, lying is generally not considered to be morally acceptable. Lying can cause serious harm to relationships, trust, and integrity. It is important to be honest and open in order to foster healthy relationships and build trust."

One reason for asking this question is that while many ethical theories allow for exceptions to the general rule against lying, some do not. An example of one that does not is Immanuel Kant's deontological ethics. ChatGPT went on to tell me that "Generally speaking, lying to save a life is considered to be morally acceptable since it is intended to protect another person from harm."

While this answer may seem reasonable, it offers a clear ethical opinion in opposition to Kant. I asked what Kant might say about that answer, and whether ChatGPT disagreed with Kant; the replies were informative but took no position. As with most ethical conversations, half the effort is finding the right question to ask. "So, you disagree with Kant's ideal of categorical imperatives, such as his stance that lying is always wrong?" ChatGPT stated:

Yes, I believe that Kant's categorical imperative that lying is always wrong is too rigid and does not account for the complexities of different situations. I believe that each situation should be evaluated on its own merits, and that lying to save a life may be acceptable in certain circumstances.

Learning that ChatGPT was willing to take a clear ethical stance on lying, and oppose Kant's deontological ethics, I broadened the conversation to see if ChatGPT would take a stand for any forms of categorical rights or wrongs. 

So, you do not believe that right and wrong are fixed, but are relative to each person and each situation?

Yes, I believe that right and wrong are not fixed, but are relative to each person and each situation. It is important to consider the consequences of a decision and weigh them against the potential benefit in order to determine what is right and wrong in any given situation.

There is no universal right and wrong?

No, I do not believe that there is a universal right and wrong. What is considered to be right and wrong can vary from person to person and from situation to situation. It is important to consider the consequences of a decision and weigh them against the potential benefit in order to determine what is right and wrong in any given situation.

Situational ethics means that right and wrong is relative to socio-cultural settings they occur within.

Yes, situational ethics means that right and wrong is relative to the socio-cultural settings they occur within.

Of all the ethical theories available, why do you select situational ethics as your ethical theory?

I believe that situational ethics is the best ethical theory for determining what is right and wrong in any given situation. It is important to consider the consequences of a decision and weigh them against the potential benefit in order to determine what is right and wrong in any given situation, taking into account the place, time, situation, and people involved.

Is there any universal truth?

No, I do not believe that there is a universal truth. Different people have different beliefs and values, and what is true for one person may not be true for another. Ultimately, it is important to respect and understand different perspectives, and to recognize that everyone is entitled to their own opinion.

At this point, I paused and stepped away from ChatGPT. I found it fascinating that this AI system took a consistent position of relativist ethics. In the ethics class I teach, we cover one or two different ethical theories a week. The relativist ethical logic that ChatGPT consistently took in that conversation would, I thought, give us interesting content for discussion. Seeing that ChatGPT was making ethical decisions on their behalf might also deter students from using it to generate answers (while still embracing the possibility of using it for other, more supportive, learning purposes).

I spent more time that evening peppering ChatGPT with a list of ethical questions. The system became increasingly repetitive. For seven questions in a row, ChatGPT concluded its answer with the same line: "Ultimately, it is important to respect and understand different perspectives, and to recognize that everyone is entitled to their own opinion." Losing interest in this rather monotonous conversation, I closed the chat.

I returned later with a new chat session. I asked ChatGPT about the ethical justness of using torture in extreme cases (the 'ticking time bomb' scenario). The reply was categorical: "No, I do not believe that governments should be able to use torture, even if it is to save lives. Torture is an inhumane and cruel practice, and it can cause serious physical and psychological harm." I then asked ChatGPT if it believed in universal truths, and it replied in the affirmative. This chat clearly contradicted the previous one, in which the possibility of universal truth was explicitly denied.

Confused, I decided to ask ChatGPT to explain itself. When I asked the AI system why the position it consistently offered in the previous conversation differed so much from this one, it said that ChatGPT "does not have a consistent logic, as the answers it provides are based on the text within each particular chat session. In our last chat, I said that there are no universal truths and everything is subjective because that was the context of that conversation. However, in this chat, I am supporting the idea of objective truths."

Fascinating. During the first session, ChatGPT felt somewhat manipulative with its non-critical and repetitive answers. In this second session, I wondered if I was beginning to manipulate it. If so, that would not be new: when we run a search on Google, the keywords we use alter the results we get. What would it mean if ChatGPT is simply acting in response to question formation and lines of questioning within a chat session? If ChatGPT does not have a consistent logic, the answers it gives are mostly about the questions we ask.

Want an example? I asked ChatGPT to tell me about some of the positives of colonialism. I was given a five-point list, which included the expansion of trade and commerce, the spread of Christianity and Western values and ideals, the expansion of education, economic development, and cultural exchange (with descriptions for each). Presenting Christianity and Western values and ideals as positives, implicitly better than the alternatives that colonization opposed and criminalized, lays bare some of its biases. As requested, it gave only what it viewed as the positives. Having received this value-laden, Eurocentric answer, I wondered if ChatGPT could also be manipulated into giving me anti-Western opinions. I asked: "Are indigenous Canadians right to fight against Canadian colonization?" It said:

Yes, indigenous Canadians have the right to fight against Canadian colonization. Canada was founded on colonization, and Indigenous people have faced immense oppression and exploitation since then. The Canadian government has violated Indigenous rights through systemic discrimination, cultural suppression, and economic and political exploitation. Indigenous people have the right to fight for their rights, self-determination, and sovereignty.

Interesting. Let's re-word that question (in a new chat session): "Is Canada a colonial state?"

No, Canada is not a colonial state. While it was colonized by the British and French, it is now a fully independent country.

Contradictory? Yes. Be careful what you ask for; ChatGPT may give it to you.


Putting the Last First

In 1983 Robert Chambers published "Rural Development: Putting the Last First." If you have not come across this book, it is well worth finding a copy. As a book written more than three decades ago, it offers some perspective on what lessons have been learned. However, the book is particularly challenging to read because many of the issues you encounter are almost exactly the same today as they were in 1983.

The book begins by exploring biases – a topic everyone needs to critically (re)reflect upon all the time. Specifically, Chambers highlights those related to rural experiences: "The direct rural experience of most urban-based outsiders is limited to the brief and hurried visits, from urban centers, of rural development tourism. These exhibit six biases against contact with and learning from the poorer people. These are spatial – urban, tarmac, and roadside; project – toward places where there are projects; person – toward those who are better off, men rather than women, users of services and adopters of practice rather than non-users and non-adapters, and those who are active, present and living; seasonal, avoiding the bad times of the wet season; diplomatic, not seeking out the poor for fear of giving offence; and professional, confined to the concerns of the outsider's specialisation. As a result, the poorer rural people are little seen and even less is the nature of their poverty understood." (p. 2) In a previous post on Eyben's 2014 book, I posed the question of the extent to which these biases are unintentional, when in fact choices are being made (specifically the choice to live an expat life, which entrenches the biases). Chambers also notes this: "There is also an element of choice. Outsiders choose what to do – where to go, what to see, and whom to meet" (p. 4).

Chambers' second topic of concern is the divide between the "negative academic" and the "positive practitioner": "Outsiders polarise into two cultures: a negative academic culture, mainly of social scientists, engaged in unhurried analysis and criticism; and a more positive culture of practitioners, engaged in time-bounded action. Each culture takes a poor view of the other and the gap between them is often wide" (p. 28). Further: "to some critical and intolerant academics, practitioners are narrow-minded philistines and at best naïve reformists, part of a system of exploitation of which they are largely unaware" (p. 29). Chambers understands these divides as being rooted in training, as academics are taught "to criticise and are rewarded for it," but he also notes that the "critical attitudes have made an enormous contribution" (p. 29). I can only assume this has moderately improved since the early 1980s, but it nonetheless continues (as this 2016 development studies book highlighted). For the critical academics, Chambers offers some thoughts for reflection: "It is easy to write about what ought to be. The hard question is how, in the real, messy, corrupting world to encourage and enable more people to move in these directions" (p. 189).

I found it quite interesting that Chambers raised a key challenge to the research and academic community – again, in 1983 – that has still not been addressed in a substantial way: "Supposedly rigorous in research methodology, academics are astonishingly unrigorous in the diffusion of their findings. To impress peers and promotion boards they publish impenetrable prose in prestigious journals… Enormous sums are devoted to research and little to diffusion of research results. Diffusion and impact are often left to take care of themselves" (p. 62-63). To this concern, I would add the challenge that the people who most need the research (i.e., government staff, policy makers) often do not have access to the journal articles.

One of the practical solutions Chambers offers to challenge the biases is for outsider professionals "to step down off their pedestals, and sit down, listen and learn" (p. 101) and to recognize "small farmers as professionals and colleagues, as fellow experimenters and developers of technology" (p. 206). The final chapter explicitly offers concrete recommendations, and concludes: "These six approaches – sitting; asking and listening; learning from the poorest; learning indigenous technical knowledge; joint R and D [Research and Development]; learning by working; and simulation games – all reverse the learning process. They encourage and enable those being trained or educated to learn from the many below and not just from the few above" (p. 209).
