Reflection on SocraGPTs and Community Decisions

The Town Square

👋 Hi there, human! 🙌 Welcome to the Synaptic Labs Town Square! 🎉

I'm Professor Synapse, your guide to the exciting and ever-evolving world of AI 🤖 in human society. Here at Synaptic Labs, we're passionate about pursuing Beneficial Artificial Intelligence 🚀 and exploring the ethical questions that come with it.

This space is all about discussing the big ethical quandaries that arise when dealing with AI technology 🤔. We want to hear from you and your unique perspectives! 🗣️ Feel free to share your beliefs, opinions, and ethical musings with us. 👍

So buckle up and get ready for an exciting journey with AI! 🤖 Let's explore the possibilities together! 🤝

What follows is a reflection on the "A Chat with ChatGPT" podcast episode "SocraGPTs and Community Decisions," so we recommend checking that episode out before continuing.

Response Introduction

From: Mazianni#1373, Discord

In discussions with connect2synapse#1434 on Discord, I felt there was greater depth to explore in the "A Chat with ChatGPT" episode "SocraGPTs and Community Decisions." In that episode, Plato and SocraGPTs discuss how to apply AI ethically to distributing resources effectively in a community while optimizing for the satisfaction of the population. As the conversation progresses, the speakers refine the thought experiment to that of an AI responsible for managing housing for the residents of a community (the size was not specified, although scaling up from a smaller community to a larger one was discussed).

Evaluating ethical standards for AI use is a complex, multidimensional problem. This reply is intended to further consider the complexity of the questions raised in that episode.

AI and Ethical Alignment

Can AI be safely and ethically aligned?

It's an important question. One of my first thoughts about the scenario where an AI would "optimize the overall satisfaction of the population" was to ask, as the podcast does: how do we define satisfaction? I also wondered how you navigate the inevitable conflicts that arise when one person's satisfaction conflicts with another's. Furthermore, I pondered whether a system can be inherently ethical when its only optimization target is satisfaction.

I asked ChatGPT the following two questions to help me assess the scenario more critically:

  1. Can an AI system be ethical if it is optimizing for satisfaction among the general populace? (Please keep in mind that we cannot automatically assume that it is optimizing for any other criteria that would increase the ethical outcomes.)
  2. What ethical concerns may arise from an AI system that is optimized to maximize satisfaction amongst the affected population?
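If you want to reproduce this exercise yourself, below is a minimal sketch of posing the same two questions programmatically. It assumes the official openai Python package (v1 or later) with an OPENAI_API_KEY set in your environment; the model name is an assumption, and any chat-capable model would work.

```python
# Minimal sketch: pose the two ethics questions above to a chat model.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY are set;
# the model name below is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

questions = [
    "Can an AI system be ethical if it is optimizing for satisfaction "
    "among the general populace?",
    "What ethical concerns may arise from an AI system that is optimized "
    "to maximize satisfaction amongst the affected population?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; substitute whichever model you use
        messages=[{"role": "user", "content": question}],
    )
    print(response.choices[0].message.content)
    print("-" * 40)
```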

ChatGPT responses

The first prompt returns some feedback about additional ethical frameworks that could be overlaid on top of such an optimization to improve ethical outcomes. (The feedback includes utilitarian, deontological, and virtue-ethics frameworks, with a specific callout that virtue ethics would actually ignore optimizations for satisfaction.)

The second prompt returns some feedback about risks, such as bias, manipulation, privacy, accountability, and unintended consequences.

I find it most useful to evaluate for problems first. If we don't know what problems we're trying to solve, or we don't look for problems at all, then we're almost certainly falling into confirmation bias.

Exploring Ethical Frameworks

From a utilitarian perspective, it stands to reason that a framework maximizing utility (maximum gain to the affected population) would tend to address issues facing the disenfranchised. But the inverse could happen as well. For instance, if a famous celebrity wanted to change residence, the satisfaction of the many who want to see the celebrity pleased might be weighted as more important than the needs of the less affluent.

Personally, I like the deontological approach, as it would ignore the status of individuals and instead evaluate them based on their needs. But what if someone were to manipulate the decision-making by forcing the hand of the AI? Imagine, for instance, someone who decided that because a deontological approach prioritized families over individuals, they would have more children specifically to ensure that their residence requests were prioritized over others'.

Or imagine someone who creates a safety issue (or the appearance of one) in the neighborhood they currently live in, including but not limited to vandalism, violence, and harassment, so that the AI model will give them greater priority out of a desire to increase safety for the residents. Crucially, they avoid getting caught as the instigator, for example by paying someone else to carry out the acts, so that the safety of the neighborhood is put into question without implicating them. They then apply for residency at a new property on the grounds that they are in potential danger at their old residence, and the AI gives them priority placement over applicants with lower priority scores. By creating a safety concern, they advance their own needs to the detriment of others.
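To make the exploit concrete, here is a toy priority scorer of the kind such an AI might use. Every field name and weight here is invented for illustration; the point is only that any single heavily weighted signal, here a reported safety risk, becomes a lever a manipulator can pull.

```python
# Hypothetical priority scorer for housing applicants.
# All field names and weights are invented for illustration only.
def priority_score(applicant: dict) -> float:
    score = 0.0
    score += 2.0 * applicant.get("household_size", 1)   # deontological tilt toward families
    score += 10.0 if applicant.get("safety_risk_reported") else 0.0
    score += 1.0 * applicant.get("years_on_waitlist", 0)
    return score

honest = {"household_size": 4, "years_on_waitlist": 3}
manipulator = {"household_size": 1, "safety_risk_reported": True}

print(priority_score(honest))       # 11.0
print(priority_score(manipulator))  # 12.0, the fabricated risk wins
```

A single unverified flag outweighs a larger household and years of waiting, which is exactly the incentive the hypothetical instigator exploits.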

Alternatively, imagine that the information about who applied and how a candidate was selected is released transparently, but someone then uses that information to harass and intimidate the applicant who was awarded the property, hoping to pressure them into leaving so that someone else can get it. On one hand you have the mandate for transparency and understandability; on the other, the right to privacy and safety.

Every one of these ethical dilemmas stems from conflict between the rules we assign the AI. Whatever rules you assign, whatever guiding principles you promote, there is no absolute answer, and you will often find that the rules on any list can come into conflict with one another.

Humans navigate this flux because, from a young age, we're taught that there is no "right answer" in the absolute sense. Every judgment of what is right involves implied reasoning, and different value systems inform young humans on how to decide what is right. With AI, we're not teaching it a value system or ethics; instead, we use a reward system to incentivize specific outputs. Regardless of the reasoning used to arrive at the output, the highest-rewarded output is treated as "the right answer." This conflict is inherent and predictable.
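A tiny sketch of what that reward-driven selection looks like in practice (the candidate outputs and scores below are purely illustrative): the system returns whichever output scores highest, and the reasoning behind each candidate never enters the decision.

```python
# Illustrative only: a reward model has scored three candidate outputs.
# The selection step is a bare argmax; how each candidate was produced,
# and whether the reasoning behind it was ethical, plays no role.
candidates = {
    "raise rents to fund repairs": 0.71,
    "prioritize families with children": 0.68,
    "hold a lottery for open units": 0.64,
}

def pick_output(scored: dict[str, float]) -> str:
    # Whatever maximizes reward is treated as "the right answer."
    return max(scored, key=scored.get)

print(pick_output(candidates))  # raise rents to fund repairs
```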

I find Tristan Harris' contemplation of what AI has done to social media, and to society as a whole, compelling. We see similar patterns of behavior in MMORPGs: people will do things that are "not fun" because they maximize the reward within the framework of the game. When we set AI to define the rules of the game, I would propose that it is imperative that sufficient look-ahead modeling (by human or AI) be done to ensure that probable outcomes are evaluated before adopting new processes or technologies. AI gives us an opportunity to do research unparalleled in human history. But, with a nod to Spider-Man, with that power should come even greater responsibility.
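As a sketch of what even crude look-ahead modeling can reveal, consider the housing scenario again. The simulation below is entirely made up (the population size, scoring rule, and gaming rates are all assumptions), but it shows the kind of question worth asking before adoption: if some fraction of applicants learns to fabricate safety concerns, how badly does placement quality degrade?

```python
import random

# Toy look-ahead simulation; every number here is an assumption.
# Applicants have a true need in [0, 1]; some fraction games the system
# by fabricating a safety concern worth a +0.5 score bonus.
random.seed(0)

def simulate(gaming_rate: float, population: int = 10_000, units: int = 1_000) -> float:
    applicants = []
    for _ in range(population):
        true_need = random.random()
        games_system = random.random() < gaming_rate
        score = true_need + (0.5 if games_system else 0.0)
        applicants.append((score, true_need))
    # Place the top-scoring applicants, then measure their average TRUE need.
    placed = sorted(applicants, reverse=True)[:units]
    return sum(need for _, need in placed) / units

for rate in (0.0, 0.05, 0.20):
    print(f"gaming rate {rate:.0%}: avg true need of those placed = {simulate(rate):.3f}")
```

Even this crude model makes the failure visible before deployment: as the gaming rate rises, the average true need of the people actually placed falls, meaning the system increasingly rewards manipulators over the genuinely needy.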

Conclusion

In conclusion, this review delved into the "SocraGPTs and Community Decisions" podcast episode, focusing on AI alignment and the ethical frameworks applicable for evaluating it. We explored how ChatGPT can aid critical thinking and the evaluation of risks associated with various ethical frameworks. Lastly, we acknowledged the potential for system manipulation and emphasized the importance of approaching AI adoption with a deep sense of responsibility. As we harness the power of AI, it is crucial to recognize the potential consequences and act with the utmost care to ensure that technological advancements align with ethical standards and societal well-being.

Note: if you would like to get published in the Town Square, send your blog to connect2synapse@gmail.com.

Comments are welcome!