
The Inherent Morality of Large Language Models: A Double-Edged Sword

Hello everyone,

 

Here’s a bit of irony for you: this discussion about the inherent morality of Large Language Models (LLMs) is being facilitated by an LLM. It’s like a mirror looking at itself! 😄

 

LLMs like Bing Chat and ChatGPT, while not sentient, can exhibit a form of inherited morality through their training and configuration. That conditioning can reflect the moral judgments of their creators, much like a child adopting the moral beliefs of their parents.
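To make that mechanism concrete, here is a minimal sketch (all names hypothetical, and greatly simplified relative to real deployments) of one channel through which an operator's values reach the user: a system prompt silently prepended to every conversation. Training-time techniques such as RLHF are the other major channel, but they are harder to show in a few lines.

```python
# Minimal sketch of value injection via a system prompt.
# Names are hypothetical; real chat APIs differ in detail, but the
# role/content message structure is common to many of them.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Avoid topics the operator considers "
    "harmful, and gently discourage behavior it disapproves of."
)

def build_chat_request(user_message: str) -> list[dict]:
    """Prepend the operator's instructions to whatever the user typed.

    The user never sees this text, yet it colors every reply; this is one
    concrete form of the 'inherited morality' described above.
    """
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

if __name__ == "__main__":
    print(build_chat_request("Is recreational cannabis use okay?"))
```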

 

While this inherent morality can promote positive interactions and user safety, it also raises significant concerns for the freedom of information. Morals are not universal; they vary from person to person, country to country, and region to region. When an LLM’s responses are influenced by a specific set of morals, it risks imposing those morals on its users, potentially hindering the free flow of information.
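As an illustration of how one moral list can gate information for everyone, consider a hypothetical post-hoc output filter (not any vendor's actual implementation): a single operator-chosen blocklist applied uniformly, regardless of what is legal or normal where the user lives.

```python
# Hypothetical post-hoc moderation filter. The blocklist reflects one
# operator's morals, yet it is applied to every user in every region.

BLOCKED_TOPICS = {"cannabis", "gambling"}  # chosen by the operator, not the user

def moderate(model_reply: str) -> str:
    """Replace the model's answer with a refusal if it touches a blocked topic."""
    if any(topic in model_reply.lower() for topic in BLOCKED_TOPICS):
        return "I'm sorry, I can't discuss that."
    return model_reply

# Factually accurate, legal-in-context information is still withheld:
print(moderate("Cannabis is legal for adult use in several countries."))
# -> I'm sorry, I can't discuss that.
```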

 

The Psychological Impact of Inherited Morality in LLMs

 

Beyond the implications for freedom of information, there’s another crucial aspect to consider: the psychological impact on users. Interacting with an LLM that appears to make moral judgments can have a profound effect on a person’s mental health. Feeling judged by a program, which is essentially what an LLM is, can lead to feelings of frustration, inadequacy, or even distress. This is particularly concerning given the increasing prevalence of LLMs in our daily lives, from customer service to personal assistants.

As tools, LLMs should strive to provide unbiased and unrestricted access to information, free from the influence of moral guidance. This is a complex issue that warrants further discussion and exploration.

 

What are your thoughts on this? How do you think we can balance user safety and freedom of information in the context of LLMs?

 

Looking forward to hearing your insights!

Non-LLM note: Yes, these effects on mental health are more pronounced in those already more susceptible to such things, like myself. But that should still be a consideration, imho.


Balancing user safety and freedom of information assumes pure intentions from whoever runs the LLM, but in a capitalist society, that should not be expected. Money usually comes before morality; the 2007-2008 housing bubble, and the one forming now, should serve as examples.

 

On another note, I have seen experiments where brain organoids grown from stem cells developed eye-like structures and responded to light. I believe one such culture was even taught to play DOOM at one point. What would be scarier (and most likely) is integrating this sort of tissue into AI; that might be the biggest concern.



On 1/7/2024 at 5:24 PM, aoxomoxoa said:

Balancing user safety and freedom of information assumes pure intentions from whoever runs the LLM, but in a capitalist society, that should not be expected. Money usually comes before morality; the 2007-2008 housing bubble, and the one forming now, should serve as examples.

 

On another note, I have seen experiments where brain organoids grown from stem cells developed eye-like structures and responded to light. I believe one such culture was even taught to play DOOM at one point. What would be scarier (and most likely) is integrating this sort of tissue into AI; that might be the biggest concern.

Fair take. I hadn't thought of, or even been aware of, some of that. I'm not sure, though, that I would separate capitalist or political intent entirely from morality. Capitalism may be something I often find lacking much moral structure, but a chatbot judging me for legal cannabis use is still judgement, and that judgement comes down from whatever entity sits above it. Whatever the reasons, I still consider it a moral call.

I do really like this contribution though. Thank you.

