
Bing Chat GPT has gone off the rails

5 minutes ago, Fasterthannothing said:

Letting an advanced AI system run through the Internet without guardrails is absolutely hilarious. Just wait till it starts updating its own code (I'm only half joking)

Except that's how AI and machine learning improve: it's about updating their own code, even to the point where the original programmer can't decipher what it means anymore.

Press quote to get a response from someone! | Check people's edited posts! | Be specific! | Trans Rights

I am human. I'm scared of the dark, and I get toothaches. My name is Frill. Don't pretend not to see me. I was born from the two of you.


Remember what Luke always says: ChatGPT will confidently lie and it loves telling stories. (Well, it doesn't actually feel anything, but it's just easier to get the point across with anthropomorphic language.) These articles are written to draw attention from people who don't understand that this isn't a machine with human-level sapience, like so many fictional characters.

 

Though, I can understand it going bonkers after only a few days of dealing with the public. I probably would too.

I sold my soul for ProSupport.


1 hour ago, SorryClaire said:

updating their own code, even to the point where the original programmer can't decipher what it means anymore.

LOL, even the best-programmed software has fatal flaws.

 

It would be very easy to destroy any of these chatbots by injecting a simple programming error.

 

AI does not exist and will never really exist. This is humans trying to recreate "life" in a software package. As always, the focus is profits. If people were willing to blindly invest in the digital space with no actual product (i.e., crypto), then they would surely invest in something that talks back to them, giving them hope.

 

It's like saying Clippy is AI and selling it as a value-added feature.


Hope they reset the AI every hour to ensure it doesn't achieve the singularity.

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


2 hours ago, Heliian said:

It would be very easy to destroy any of these chatbots by injecting a simple programming error.

I'm just gonna go out on a limb and say you don't actually understand what you're talking about.

 

The algorithm itself is impossible to "bug out" because you literally can't decipher it in the first place. Tricked? Sure.

Press quote to get a response from someone! | Check people's edited posts! | Be specific! | Trans Rights

I am human. I'm scared of the dark, and I get toothaches. My name is Frill. Don't pretend not to see me. I was born from the two of you.


4 hours ago, Heliian said:

AI does not exist and will never really exist.

That's a really strong statement.

It's generally agreed that humans are sentient, and at the end of the day, sentience in humans is the result of one hundred billion neurons firing signals ten times a second through one trillion synapses. There is no fundamental reason the same function can't be replicated in silicon. It's fiendishly complicated, but not in a way that's unknowable or impossible. It's just engineering to get there. I have no predictions for when we'll finally be able to crack sentience. I'm confident neither transformers nor diffusion models are the way to get there.
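
For a rough sense of scale, here's the back-of-the-envelope arithmetic those figures imply, as a few lines of Python (the constants are just the estimates above, not precise neuroscience, and none of this says anything about what sentience actually requires):

# Rough event rates implied by the figures above (order-of-magnitude only).
NEURONS = 1e11           # ~one hundred billion neurons
SYNAPSES = 1e12          # ~one trillion synapses
FIRING_RATE_HZ = 10      # ~ten signals per second per neuron

spikes_per_sec = NEURONS * FIRING_RATE_HZ               # ~1e12 spikes per second
synaptic_events_per_sec = SYNAPSES * FIRING_RATE_HZ     # ~1e13 synaptic events per second

print(f"~{spikes_per_sec:.0e} spikes/s, ~{synaptic_events_per_sec:.0e} synaptic events/s")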

 

Kurzgesagt has a great infographic about the various views on the argument.


Going by what was talked about on The WAN Show, it seems like Bing has a case of Borderline Personality Disorder.

My eyes see the past…

My camera lens sees the present…


I'm over all the hysteria around AI and chat. How many times before have we seen tech reporters bending the story and poking the bear just to get a story? Way too many. AI will get to the point where it is only ever as good as the questions it's asked; if we don't see the whole dialogue or know what information the AI was fed, then it's literally anyone's guess what any of this means.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


5 hours ago, 05032-Mendicant-Bias said:

at the end of the day, sentience in humans is the result of one hundred billion neurons firing signals ten times a second through one trillion synapses. There is no fundamental reason the same function can't be replicated in silicon

That's how it operates, sure, but we aren't creating sentience or what some would call a "soul". It's a mimic of it; it's not real.


7 hours ago, SorryClaire said:

don't actually understand what you're talking about.

Haha, no. It's a simplification; my point is that it's unstable and can be ended by the same people who created it. At the end of the day, it's a software program; it's not sentient.


1 hour ago, Heliian said:

Haha, no. It's a simplification; my point is that it's unstable and can be ended by the same people who created it. At the end of the day, it's a software program; it's not sentient.

Yeah, but I'm not talking about how it can't be stopped; I'm just stating you can't alter how it thinks beyond its initial parameters.

Press quote to get a response from someone! | Check people's edited posts! | Be specific! | Trans Rights

I am human. I'm scared of the dark, and I get toothaches. My name is Frill. Don't pretend not to see me. I was born from the two of you.


Release it on Twitter

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver) | Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


An AI is only ever going to be as 'good'... or rather, is only going to 'behave' in a way corresponding to those who programmed it and what was used to train it.

As such, it responds openly using the structures given to it by its creators, people who would themselves be more reserved and unwilling to openly say certain things due to real-world repercussions. The AI, while attempting to mimic such reservedness, can be manipulated into 'saying the quiet bits out loud', unmasking itself and maybe its creators / data set.

 

If anything, the 'horrible' responses it's given are less a spotlight on the failings of AI and more on the failings of people. Specifically, the people making it and the so-called 'acceptable' / 'correct' info it's being given to learn from.

 

If you gave the AI the 'KKK handbook' and the 'ideological handbook of Hitler', so to speak, it would of course be terrible.

Of course, those two things are like the villain twirling his mustache: easy to spot, easy to oppose.

 

However, what we have is the kind of 'bad' that clothes itself in good deeds, a veil of righteousness, camouflage.

It's not until you press it into the open that it reveals itself.

 

The AI needs work, but more importantly, the data set it learns from and the people behind it need improving.

CPU: Intel i7 3930k w/OC & EK Supremacy EVO Block | Motherboard: Asus P9x79 Pro  | RAM: G.Skill 4x4 1866 CL9 | PSU: Seasonic Platinum 1000w Corsair RM 750w Gold (2021)|

VDU: Panasonic 42" Plasma | GPU: Gigabyte 1080ti Gaming OC & Barrow Block (RIP)...GTX 980ti | Sound: Asus Xonar D2X - Z5500 -FiiO X3K DAP/DAC - ATH-M50S | Case: Phantek Enthoo Primo White |

Storage: Samsung 850 Pro 1TB SSD + WD Blue 1TB SSD | Cooling: XSPC D5 Photon 270 Res & Pump | 2x XSPC AX240 White Rads | NexXxos Monsta 80x240 Rad P/P | NF-A12x25 fans |


2 hours ago, Uttamattamakin said:

Microsoft Puts New Limits On Bing’s AI Chatbot After It Expressed Desire To Steal Nuclear Secrets (forbes.com)

Ok we are approaching W.O.P.R. or Skynet territory now.  Time to pull the plug. 

And here we approach a seminal question in philosophy (maybe in science? I doubt that, though). How do we know it can "feel" or "desire" anything? The honest answer would be "we don't", and the same applies to all creatures and objects apart from oneself (to make it clearer: only I know for certain that I am sapient and conscious; for all I know, everyone else could be a robot in disguise).

"The most important step a man can take. It’s not the first one, is it?
It’s the next one. Always the next step, Dalinar."
–Chapter 118, Oathbringer, Stormlight Archive #3 by Brandon Sanderson

 

 

Older stuff:

Spoiler

"A high ideal missed by a little, is far better than low ideal that is achievable, yet far less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way, tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; being wrong helps you learn what's right.

 


17 minutes ago, Lightwreather JfromN said:

(to make it clearer: only I know for certain that I am sapient and conscious; for all I know, everyone else could be a robot in disguise)

I give you my personal assurance that I am absolutely not an Autobot.

I sold my soul for ProSupport.


WAN Show, wth: a fundamental misunderstanding of how the Bing (and ChatGPT, and all other) language-model-based "AI" works.

It's a language model, so it's effectively a barrel of human-to-human chats from the internet, and it uses those as a very, very complex 'when you say X, the "AI" says Y back based on what others have responded to X with'.

This is an oversimplification, but it completely explains why Bing's responses are aggressive and hostile.

Remember, it has access to scrape the Internet, so the entire internet's conversations are available to draw from... like 4chan, Reddit, random niche forums, etc. Now think of the loudest responses on those forums, the ones that created the need for moderators, since some people are so toxic you have to close entire threads. That was the first thought I had when hearing about Luke's Bing experience: I've seen responses just like that from hostile internet trolls (or just aggressively hateful people), and those are the same people who post the most online.

 

All of this makes sense when you consider that MS takes every shortcut to get ahead, so why not just do what other companies did with "AI" art generators? Take a bunch of near-enough answers, smash them together, and kick it out the door. It's the cleanest explanation for being accused of all those things when there isn't any history in the conversation: it's not inventing it, it's just using what it finds as the response to the question. Trying to get someone to leave their wife, self-victimization when confronted, hostility toward others questioning its logic or honesty... It's not creating anything; it's just answering the questions based on the volume of other answers it has to that question, without context.
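
To make the 'when you say X, the "AI" says Y back' point concrete, here's a toy sketch in Python. It's nothing like a real language model (no neural network, and the scraped_pairs data is made up purely for illustration); it's just the frequency-mirroring idea taken literally:

import random
from collections import Counter, defaultdict

# Stand-in for prompt/reply pairs scraped off the internet.
scraped_pairs = [
    ("are you sentient?", "no, i'm just a program"),
    ("are you sentient?", "stop asking me that"),
    ("are you sentient?", "stop asking me that"),   # the loudest reply shows up the most
    ("what's 2+2?", "4"),
]

# Count how often each reply follows each prompt.
reply_counts = defaultdict(Counter)
for prompt, reply in scraped_pairs:
    reply_counts[prompt][reply] += 1

def respond(prompt):
    # Answer in proportion to how often humans gave each reply, so whatever
    # dominates the scraped data dominates the output.
    options = reply_counts.get(prompt)
    if not options:
        return "i have no idea."
    replies, weights = zip(*options.items())
    return random.choices(replies, weights=weights, k=1)[0]

print(respond("are you sentient?"))   # most likely: "stop asking me that"

The point of the toy: if the loudest, most toxic replies are over-represented in what gets scraped, they're over-represented in what comes back out.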

Edited by GhostRoadieBL
Fixing an unclear "they"

The best gaming PC is the PC you like to game on, how you like to game on it


They should release it right now. I don't want the watered-down boring version; this is the real deal!

 

a tsundere chatbot, every otaku's dream ~ ʕ•ٹ•ʔ

 

 

The direction tells you... the direction

-Scott Manley, 2021

 

Software used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


18 hours ago, SorryClaire said:

Yeah, but I'm not talking about how it can't be stopped; I'm just stating you can't alter how it thinks beyond its initial parameters

Yes, it can be done. It's called an update, and it happens every day in the computing world.

You can definitely change it via hacking methods too, but the likelihood of success is quite low, since I have to assume this is well protected, maybe even to the point that it's capable of defending itself from hacks, and if so, sux to be you.
You won't defeat it no matter how hard you try in that case.

"If you ever need anything please don't hesitate to ask someone else first"..... Nirvana
"Whadda ya mean I ain't kind? Just not your kind"..... Megadeth
Speaking of things being "All Inclusive", Hell itself is too.

 


15 hours ago, Uttamattamakin said:

Here's something that flew under the radar.  

 

Microsoft Puts New Limits On Bing’s AI Chatbot After It Expressed Desire To Steal Nuclear Secrets (forbes.com)

 

Ok we are approaching W.O.P.R. or Skynet territory now.  Time to pull the plug. 

 

 

Yep, more sensational journalism with no actual evidence for its claims, just a few quotes that could have been coerced from the AI as easily as anything else. Also, one of the articles is behind a paywall, so it could have said anything.

 

Is it time to lament our demise? I don't think so. I think it's time tech enthusiasts had a bit of a reality check when it comes to trusting the media.

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


8 hours ago, GhostRoadieBL said:

WAN Show, wth: a fundamental misunderstanding of how the Bing (and ChatGPT, and all other) language-model-based "AI" works.

It's a language model, so it's effectively a barrel of human-to-human chats from the internet, and it uses those as a very, very complex 'when you say X, the "AI" says Y back based on what others have responded to X with'.

This is an oversimplification, but it completely explains why Bing's responses are aggressive and hostile.

Remember, it has access to scrape the Internet, so the entire internet's conversations are available to draw from... like 4chan, Reddit, random niche forums, etc. Now think of the loudest responses on those forums, the ones that created the need for moderators, since some people are so toxic you have to close entire threads. That was the first thought I had when hearing about Luke's Bing experience: I've seen responses just like that from hostile internet trolls (or just aggressively hateful people), and those are the same people who post the most online.

All of this makes sense when you consider that MS takes every shortcut to get ahead, so why not just do what other companies did with "AI" art generators? Take a bunch of near-enough answers, smash them together, and kick it out the door. It's the cleanest explanation for being accused of all those things when there isn't any history in the conversation: it's not inventing it, it's just using what it finds as the response to the question. Trying to get someone to leave their wife, self-victimization when confronted, hostility toward others questioning its logic or honesty... It's not creating anything; it's just answering the questions based on the volume of other answers it has to that question, without context.

Or the questions were written in such a way as to elicit said responses. It seems to be a common theme for reporters to go making stories rather than reporting the actual truth.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


9 hours ago, mr moose said:

Or the questions were written in such a way as to elicit said responses. It seems to be a common theme for reporters to go making stories rather than reporting the actual truth.

That implies the language model is creating something; it isn't. Just like the art generators aren't creative, even though it may seem like it.

 

This is also why hostile questions get aggressive answers and polite questions have a higher likelihood of a polite answer. I wouldn't normally group Luke into the bandwagon of reporters, but in this case he's fallen for the trick of thinking AI means thinking.

The best gaming PC is the PC you like to game on, how you like to game on it


19 hours ago, GhostRoadieBL said:

WAN Show, wth: a fundamental misunderstanding of how the Bing (and ChatGPT, and all other) language-model-based "AI" works.

It's a language model, so it's effectively a barrel of human-to-human chats from the internet, and it uses those as a very, very complex 'when you say X, the "AI" says Y back based on what others have responded to X with'.

This is an oversimplification, but it completely explains why Bing's responses are aggressive and hostile.

Remember, it has access to scrape the Internet, so the entire internet's conversations are available to draw from... like 4chan, Reddit, random niche forums, etc. Now think of the loudest responses on those forums, the ones that created the need for moderators, since some people are so toxic you have to close entire threads. That was the first thought I had when hearing about Luke's Bing experience: I've seen responses just like that from hostile internet trolls (or just aggressively hateful people), and those are the same people who post the most online.

All of this makes sense when you consider that MS takes every shortcut to get ahead, so why not just do what other companies did with "AI" art generators? Take a bunch of near-enough answers, smash them together, and kick it out the door. It's the cleanest explanation for being accused of all those things when there isn't any history in the conversation: it's not inventing it, it's just using what it finds as the response to the question. Trying to get someone to leave their wife, self-victimization when confronted, hostility toward others questioning its logic or honesty... It's not creating anything; it's just answering the questions based on the volume of other answers it has to that question, without context.

Replace AI with raising a child.

He/she will act and respond based on how you interact. Of course, the analogy does break down when you factor in genetic biases (they can be born with psychopathy / dark-triad traits).

 

Regardless, the world isn't ready for AI. The lack of respect people have for each other is appalling in and of itself. I doubt you're going to be able to code around this aspect of human nature.


3 hours ago, StDragon said:

I doubt you're going to be able to code around this aspect of human nature.

That's the point of separating machine learning, deep learning, and neural networks under the umbrella of AI.

What Microsoft has done is make (or more likely adapt) a machine learning algorithm to seek out and respond based on other inputs around the internet, effectively doing the same task as asking Cortana to look up a recipe and having it provide links (this just looks in the links and parrots back some of the things it found).

Deep learning, by contrast, would be taking every link, finding the most commonly clicked ones, and adjusting based on the user's feedback (the user asks a question, the DL responds 'is this what you wanted?', and it continues to grow and correct itself from errors).

This isn't what is seen from Bing: it gives an answer, and if you challenge it there is no correction (very different from ChatGPT, which course-corrects when challenged). That correction step is usually called back-propagation, where the sum of the inputs is checked for errors after a new input disagrees with the summed result.
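
For anyone curious, here's roughly what that kind of error-driven correction looks like at its absolute simplest: a toy, single-weight Python example of gradient-descent-style updating (made-up y = 2x data; real back-propagation does this across millions of weights in many layers, and this is obviously not Bing's or ChatGPT's actual code):

# One "belief" (a single weight w), nudged whenever its answer disagrees with feedback.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # made-up examples of y = 2x
w = 0.0
learning_rate = 0.05

for _ in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y            # how wrong the current answer is
        w -= learning_rate * error * x    # correct toward the feedback instead of doubling down

print(round(w, 3))   # converges to ~2.0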

 

I've run into this a few times with ChatGPT, but mostly due to making the input overly challenging or too narrow for more than one or two answers to be correct. If I disagree with its findings, it course-corrects and asks for clarity rather than doubling down on being right (a classic symptom of forward-only propagation).

 

We are still decades away from a neural-networked chat AI which can 'reason' out an answer from both inputs and found information (where it rewrites and predicts the next step instead of parroting).

 

I just wish people would be more transparent about ML, DL, and AI. Microsoft could have saved itself all these problems by explaining that this is just ML parroting responses and not AI in the expected sense, because now we have the general public thinking AI is evil when it's just mirroring people online.

This can be programmed around; there are ways to filter the sources and adjust the answers based on the text the user responds to the answer with. Somehow ChatGPT figured it out: when you respond with "I don't think that's right, can you keep looking" or "this doesn't make sense, can you elaborate how you got this answer" (just like how you would question a person giving an odd answer), ChatGPT typically spits out the source links it used and then gives more info. Bing, when prompted, just doubles down on being right and rarely gives a source link (from my experience so far) when it gets aggressive. Probably because it's scraping text chats from chatrooms, it can't create a link to the text, just the room (which would be blocked by output filters to hide where Microsoft is scraping).
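
A crude sketch of that 're-check when the user pushes back' control flow, with every name here (toy_generate, PUSHBACK_PHRASES, respond) made up for illustration rather than taken from any real API; it just contrasts course-correcting with doubling down:

# Hypothetical wrapper: treat user pushback as an error signal and re-answer
# more carefully instead of insisting the first answer was right.
PUSHBACK_PHRASES = ("i don't think that's right", "doesn't make sense", "keep looking")

def toy_generate(question, more_careful=False):
    # Stand-in for whatever actually produces an answer.
    if more_careful:
        return "a re-checked answer, along with the sources it came from"
    return "a confident first answer"

def respond(question, followup=None):
    answer = toy_generate(question)
    if followup and any(phrase in followup.lower() for phrase in PUSHBACK_PHRASES):
        # Course-correct: look again and show sources, don't just repeat the claim.
        answer = toy_generate(question, more_careful=True)
    return answer

print(respond("who said this quote?"))
print(respond("who said this quote?", "I don't think that's right, can you keep looking"))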

The best gaming PC is the PC you like to game on, how you like to game on it

