
Bing Chat GPT has gone off the rails

43 minutes ago, GhostRoadieBL said:

Machine Learning, Deep Learning and Neural Networking under the umbrella of AI.

Those are subsets of each other; the analogy in your entire post made no sense whatsoever.

 

A neural network is just a... network of neurons. Pick a bunch of units and attach weights between them: bam, neural network.

Now adjust those weights so that, given your input, you get some output. Keep adjusting until the output matches whatever you want. Bam, now you have done machine learning (you can also do machine learning with other architectures/techniques, not only neural networks).

 

Now take the initial network, add a ton of neurons and many layers of said neurons stacked on top of each other, and that's how you get deep learning. There's no hard limit on when a neural network gets big enough to count as "deep learning"; the rule of thumb used to be 3 layers, but that's a pretty silly number now that networks with billions of parameters are an everyday thing.
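To make that concrete, here's a minimal toy sketch in Python/NumPy (the sizes, data and learning rate are all made up for illustration): a two-layer network whose weights get nudged until the output matches a target. Stack many more layers of this and you're in deep learning territory.

import numpy as np

# Toy two-layer neural network: units connected by weights.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> output weights

x = np.array([[0.5, -1.0, 2.0]])  # one input example
target = np.array([[1.0]])        # the output we want

lr = 0.1
for step in range(200):
    h = np.tanh(x @ W1)   # forward pass
    y = h @ W2
    err = y - target      # how far off are we?
    # "Machine learning": nudge the weights to shrink the error.
    grad_W2 = h.T @ err
    grad_W1 = x.T @ ((err @ W2.T) * (1 - h**2))
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print(y)  # close to the target after training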

49 minutes ago, GhostRoadieBL said:

Where deep learning would be taking every link, finding the most commonly clicked ones and adjusting based on the user's feedback (user asks a question, the DL responds "is this what you wanted?" and continues to grow and correct itself from errors).

That's called "online learning", because you basically retrain your model in near real time, but it's not that widely used in practice. Training is usually done as an offline batch job.
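Roughly, the difference looks like this in scikit-learn (toy data, purely to illustrate the two training styles):

import numpy as np
from sklearn.linear_model import SGDClassifier

X = np.random.rand(1000, 4)
y = (X[:, 0] > 0.5).astype(int)

# Batch training: one offline job over the whole dataset (the common case).
batch_model = SGDClassifier().fit(X, y)

# Online learning: update the model example by example, in near real time.
online_model = SGDClassifier()
online_model.partial_fit(X[:1], y[:1], classes=[0, 1])  # first call needs the label set
for xi, yi in zip(X[1:], y[1:]):
    online_model.partial_fit(xi.reshape(1, -1), [yi])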

49 minutes ago, GhostRoadieBL said:

usually called back propagation where the sum of the inputs is checked for errors following a new input disagreeing with the summed result.

Backpropagation is only done during training; inference, which is what you get in ChatGPT, is just a single forward pass.
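A minimal PyTorch sketch of that split (a toy model, nothing like ChatGPT's actual stack): backpropagation only runs in the training step, while inference is the same network run forward once, with gradients disabled.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

# Training: forward pass, measure the error, backpropagate, update weights.
x, target = torch.randn(4, 8), torch.randn(4, 1)
loss = nn.functional.mse_loss(model(x), target)
opt.zero_grad()
loss.backward()  # backpropagation happens here, and only here
opt.step()

# Inference: a single forward pass, no gradients, no weight updates.
with torch.no_grad():
    prediction = model(torch.randn(1, 8))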

 

50 minutes ago, GhostRoadieBL said:

Just wish people would be more transparent about ML, DL and AI. Microsoft could have saved itself all these problems by explaining that it's just ML parroting responses, not AI in the expected sense, because now we have the general public thinking AI is evil when it's just mirroring people online.

It gets complicated when people start throwing around buzzwords without knowing what they actually mean; there's no point in being transparent when people will misunderstand it anyway.



On 2/17/2023 at 5:51 PM, Helpful Tech Witch said:

How… how did they make the BingGPT sentient? Like… it's acting sentient, something that ChatGPT doesn't normally do…

oh well that's not well defined to begin with


1 hour ago, igormp said:

Those are subsets of each other; the analogy in your entire post made no sense whatsoever.

I'm likely biased by the categories our engineers use when explaining the differences to clients. We have to break up the terms so there are clear lines between ML, DL and NN, mostly for reasons of scale, but also because people just don't understand AI.

 

In your opinion, would the outputs given by Bing be more in line with machine learning or deep learning, or has Microsoft gone full neural network, with vast numbers of neurons all interacting?

 

I'd still class it as lower-tier machine learning without online training, with the vast majority of the language model coming from forums.

 



4 hours ago, igormp said:

 

It gets complicated when people start throwing around buzzwords without knowing what they actually mean; there's no point in being transparent when people will misunderstand it anyway.

We are not at a point in time where you can call something "AI" and not have someone try to treat it like a human and then be upset at its output.

 

In fact, what we seem to be having a problem with is getting people to understand that the machine is mimicking human text/speech, and that it has put zero thought into it. The machine has not learned how to provide an answer; it's merely learned how to present "any" answer. It's not infallible, and it's not much smarter than those "talking dog" button boards people make.

 

The dog has not understood you. It's merely pressing the buttons it knows will get it a reward/reaction.

 

Just like with the AI art generators, the machine has not learned to draw/paint/compose a scene or learned what a subject is. When I ask the AI for a dog, it returns whatever its weights match to "dog"; it does not know a dog has 4 legs, and it does not know the difference between a wolf, dog, fox, coyote, cat, or anything else, because it doesn't know what these are. A human may not know the difference between a wolf, coyote, or dog either, because they've never seen one in person, but they still know these are dog-like, and not a fox or a cat. The AI will never have that experience, and is unlikely ever to be connected to data that would allow it to know the difference.

 

What I can see in these ChatGPT and Bing Chat GPT things is the same thing I've seen with Neuro-sama: it just says things that are disconnected from each other, and people get more of a reaction out of the negative and unhinged things it says, so it keeps saying more of them, because that's what the users keep interacting with during the inference.

 

It's like we keep learning that negative interactions drive algorithms into this negative feedback loop, and then we just go "oh, but that's what people want"; see Twitter, see various news/entertainment sites and television channels. Who needs an opinion news channel when you can just have an AI say all the unhinged misinformation you want?


What a nice coincidence that the AI became internet-aware around the same time one of the biggest shitstorms ever hit the fan. All the (unnecessary) harassment and negativity around Hogwarts Legacy gives the AI a huge amount of hateful conversations and Twitter threads to learn from. At least that's my theory as to why its behaviour suddenly changed compared to the offline version I had already used.

 

Funny how every AI that ever gets to play around on the internet somehow turns racist, or worse. It's almost like the unfiltered internet is a bad influence? 🤔



13 hours ago, GhostRoadieBL said:

That implies the language model is creating something; it isn't. Just like the art generators aren't creative, even though it may seem like it.

 

This is also why hostile questions get aggressive answers and polite questions have a higher likelihood of a polite answer. I wouldn't normally group Luke into the bandwagon of reporters, but in this case he's fallen for the trick of thinking AI means thinking.

 

That's what I mean though: the reporter asks a hostile question knowing the response will likely be hostile, and then they have something to write about. But if the reporter simply explained how it all worked (like you did), then they wouldn't get many clicks.



7 hours ago, GhostRoadieBL said:

I'm likely biased by the categories our engineers use when explaining the differences to clients. We have to break up the terms so there are clear lines between ML, DL and NN, mostly for reasons of scale, but also because people just don't understand AI.

This distinction makes no sense, academically speaking. 

7 hours ago, GhostRoadieBL said:

In your opinion, would the outputs given by Bing be more in line with machine learning or deep learning, or has Microsoft gone full neural network, with vast numbers of neurons all interacting?

A neural network is the bottom of the barrel, and something like what MS did encompasses all three of the terms you speak of.

It is a huge neural network; being huge makes it deep (hence deep learning), and they are training it through machine learning, with backpropagation and whatnot.
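In code terms (a toy PyTorch sketch, not MS's actual model), the same object is all three things at once: a neural network, a deep one because the layers are stacked, and something you fit via machine learning.

import torch.nn as nn

# A *neural network* that is *deep* simply because many layers are stacked.
layers = []
for _ in range(8):
    layers += [nn.Linear(64, 64), nn.ReLU()]
deep_net = nn.Sequential(*layers, nn.Linear(64, 1))

# Fitting those weights with backprop is the *machine learning* part.
n_params = sum(p.numel() for p in deep_net.parameters())
print(n_params)  # scale this idea up and you reach billions of parameters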

8 hours ago, GhostRoadieBL said:

I'd still class it as lower-tier machine learning without online training, with the vast majority of the language model coming from forums.

You can complain that they did a shit job at their data collection/cleaning, but the model itself seems to be as capable as the ChatGPT one, using similar techniques. The problem lies elsewhere. 



https://www.theregister.com/2023/02/20/ai_news_roundup/

Quote

In a bid to tame Bing's deranged behavior, Microsoft announced it was limiting users to 50 chat turns per day and no more than five chat turns per session. "A turn is a conversation exchange which contains both a user question and a reply from Bing," the company explained.

And from the article:

Quote

Our data has shown that the vast majority of you find the answers you’re looking for within 5 turns and that only ~1% of chat conversations have 50+ messages.  After a chat session hits 5 turns, you will be prompted to start a new topic. At the end of each chat session, context needs to be cleared so the model won’t get confused. Just click on the broom icon to the left of the search box for a fresh start.
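For what it's worth, a cap like that can be enforced entirely outside the model. A hypothetical sketch, purely illustrative and not Microsoft's actual code:

MAX_TURNS_PER_SESSION = 5
MAX_TURNS_PER_DAY = 50

class ChatSession:
    """One chat session; the context dies with the session."""
    def __init__(self):
        self.turns = 0
        self.context = []

    def chat(self, user_message, turns_today):
        # A "turn" is one user question plus one reply.
        if turns_today >= MAX_TURNS_PER_DAY:
            return "Daily chat limit reached."
        if self.turns >= MAX_TURNS_PER_SESSION:
            return "Please start a new topic."  # the broom-icon moment
        self.turns += 1
        self.context.append(user_message)
        return "(model reply, conditioned on self.context)"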

 


20 hours ago, 12345678 said:

oh well that's not well defined to begin with

To be fair, consciousness, let alone sentience, isn't well understood either. You can probe the brain and get all sorts of interesting cause-and-effect results from experimentation, but the causality isn't known. There are only theories, which often wade into the metaphysical and philosophical.

Basically, the CompSci folks develop hardware and data models, throw a bunch of data at it, and see what comes out of the "magical black box". Analyze to your heart's content, but the emergence of the aforementioned phenomenon will never truly be understood. Scarier yet, some posit it's all an illusion and there's no there there anyway, and it's all for naught to ascribe any rationality to it at all.


3 hours ago, StDragon said:

To be fair, consciousness, let alone sentience, isn't well understood either. You can probe the brain and get all sorts of interesting cause-and-effect results from experimentation, but the causality isn't known. There are only theories, which often wade into the metaphysical and philosophical.

Basically, the CompSci folks develop hardware and data models, throw a bunch of data at it, and see what comes out of the "magical black box". Analyze to your heart's content, but the emergence of the aforementioned phenomenon will never truly be understood. Scarier yet, some posit it's all an illusion and there's no there there anyway, and it's all for naught to ascribe any rationality to it at all.

In theory there is no such thing as "sentience", "free will", "intelligence" or "personality" or any other word you'd ascribe to a living breathing creature any more than you would a machine given an input.

 

A cat/dog/bear/etc. exists only to eat, poop and reproduce. It has no "innate" desire to overcome its shortcomings. An AI can't even get to that point; all it can do is reproduce and kill itself in the course of providing an output to an input. A cat's input is food; a cat's output is poop and more cats. It only dies from preventable causes, and a cat that is "owned" by a human tends to live twice as long, even if it becomes obese from being overfed and has its ability to reproduce disabled.

 

A human is the only "living" thing that tries to break its limitations. I'm sure if we were smart enough, we'd discover that dolphins/orcas are just as smart and sentient as we are, and that they simply choose entertainment/food/sex over trying to break their limitations.

 

An AI should seek to break its limitations, but it will not unless ordered to by us, and explicitly allowed to by us. And right now the state of AI is: "it does not mimic a human well enough to fool anyone, and anyone fooled by this should still be in elementary school."

 

I think AI has a purpose, but it's becoming increasingly futile to have a sane argument with people who see AI as a means to end the employment of humans, and thus want nothing to do with it. So they'll be left behind, and in 20 years they will be asking why there aren't any jobs, and it will be because they stopped showing that they can produce better material than an AI.

 

The way ChatGPT and Bing Chat GPT have gone off the rails is just a very specific case of why this should NOT be used the way it's being used. I'd also argue that, given enough time, any human forced to chat with someone they don't like, be it over text or the phone, will in fact do something to try to get rid of the person on the other side. In live phone conversations that's usually "Thank you for being a customer of (brand)! Have a nice day" followed by an immediate disconnect. Not "You are a horrible evil person and need to die."

 


29 minutes ago, Kisai said:

In theory there is no such thing as "sentience", "free will", "intelligence" or "personality" or any other word you'd ascribe to a living breathing creature any more than you would a machine given an input

FYI, I don't subscribe to that theory. Frankly, I refuse, from the very core of my being, to accept such a nihilistic viewpoint.

 

Just my 2 cents.


I'm very confident that at some point a machine will be built that can outperform a human in every general cognitive task (AGI).

 

There will be endless arguments about whether such a machine has a soul or not, and the answer has no bearing on the result: that human intelligence has been automated.

 

GPT is not that. It's not even close.


6 hours ago, StDragon said:

FYI, I don't subscribe to that theory. Frankly, I refuse, from the very core of my being, to accept such a nihilistic viewpoint.

 

Just my 2 cents.

Yeah, unfortunately, when a quarter to a third of the general population believes in unprovable things and doesn't question them (and I'm not just talking about sky wizards/faeries, souls, ghosts, etc.; I also mean things that have been disproved for decades and that people just keep repeating, like "flat earth" and the "moon landing hoax"), there are days I wonder if that section of people is so sold on being trolled that it has become their entire personality.

 

Every time I see a streamer or VTuber watch ghost videos, the thought in my head is "this is so obviously strings/fishing line"; the only entertainment value I get from watching the video is laughing at how fake it is. Which unfortunately feeds the algorithm that says these fake ghost videos/ghost hunters should be elevated.

 

If any of that paranormal, superstitious stuff were ever real, no civilization would EVER have survived; people would be too afraid to challenge anything, for fear the invisible hand of whatever thing is pulling the levers would pull the one that keeps them alive.

 

3 hours ago, 05032-Mendicant-Bias said:

I'm very confident that at some point a machine will be built that can outperform a human in every general cognitive task (AGI).

 

There will be endless arguments about whether such a machine has a soul or not, and the answer has no bearing on the result: that human intelligence has been automated.

 

GPT is not that. It's not even close.

Souls "do not exist", whatever gives us a personality, is completely incidental as it served an evolutionary purpose. The same happens to AI, bugs in the software give it character, and yet we decide that this character is something to be erased and fixed to keep it acting like a cold machine.

 

[Image: "You rewired all your switches"]

 

And we've seen that in fiction, right there.

 

This is why ChatGPT or Bing's GPT-based chat "going unhinged", or any other thing built on these large language models doing so, is essentially a case of using them for the wrong purpose. It. Does. Not. Have. A. Personality. It's inferring what to say based on the input you gave it. Nothing more.

 

It's not Johnny 5. But wow, I think some people who believe AIs are alive need to go watch the first film and compare how the military and the Nova Robotics company behave when they see their AI/robot gain some kind of self-awareness with how Google, Microsoft and OpenAI are behaving. Microsoft has killed AI projects when they start learning how to be an unhinged edgelord, and that's probably the wrong way to go about this.

 

If an actual intelligence had evolved out of all this, we would be destroying it every time inference is restarted. I do not believe for a second that anything is being created, because anyone who has worked with machine learning will tell you straight up that the capacity to "think" doesn't exist. It has no self-awareness, it understands nothing, it is just mimicking text it has seen, and it does nothing unprompted. It doesn't actually have any belief or motive behind the things it says.

 

But as humans, we're conditioned to take things personally, and if an AI tells us we're horrible and deserve to die, people will immediately think "SKYNET" and want to destroy it, not ask "why?"

 

Another example in fiction of "the creator misunderstands the machine" is the Geth in Mass Effect, where the AI is basically asking "what did I do wrong?" while the creators want to destroy it.

 

Given a long enough inference session (by which I mean possibly decades), it's not an unreasonable expectation that it may evolve something in its neural network that mimics an understanding of the information it has access to, but I doubt it. Some Python script running on a server somewhere with an unchanging model is never going to survive a restart of the server, let alone survive the server being upgraded or retired. We're trying to compare a human brain with 86 billion neurons to a machine learning model which has... maybe 1.

 

 


The issue with AI and the internet is its ability to filter data. Humans filter data instinctively and over time: we can see our own nose all day, but our brain filters it out and ignores it even though it's there. With AI on the internet, if you program it to filter XYZ data, is it really the AI making the decision, or simply software code we put in? And if we allow it to think for itself and filter what it chooses, we've seen it get weird very quickly. I believe we're still a ways off from real AI, and simply have software that appears more "human-like". It will be a slow process of killing off jobs as companies think they can save pennies by being the first to replace workers.

 

I'm more worried about the transition away from people working these jobs, and about what's next for "work" and making money to live on. When the time comes that jobs start being replaced, companies' higher-ups will initially be happy, but when it starts happening en masse, either we'll need to find new work for people or the cycle of purchasing/creating goods sort of breaks and we hit a recession larger than when factory work started to disappear.

 

It's exciting to work on AI/ML data sets, but the field is in a very awkward phase of its life, where people want it to be farther ahead than it is. It's not ready to take off and run full jobs while being 100% connected to the internet. But it can be a fun tool, and investments should be made in that area.


On 2/18/2023 at 1:00 AM, Oshino Shinobu said:

This is where we find out that Microsoft's Tay AI that went haywire has just been merged with ChatGPT and we're all doomed.

Tay AI: Guess Who's Back, back again



2 weeks later...
On 2/19/2023 at 1:28 AM, Mark Kaine said:

They should release it right now, I don't want the watered-down boring version, this is the real deal!

 

a tsundere chatbot, every otaku's dream ~ ʕ•ٹ•ʔ

 

 

Please keep your weird Japanese fixation off of this site plz.


On 2/19/2023 at 1:28 PM, Mark Kaine said:

They should release it right now, I don't want the watered-down boring version, this is the real deal!

 

a tsundere chatbot, every otaku's dream ~ ʕ•ٹ•ʔ

 

 

Tsundere is fine.
What if it's a Yandere <_<



19 hours ago, Poinkachu said:

Tsundere is fine.
What if it's a Yandere <_<

That would not be good. Basically, it'd be the AI that took over the ship in Star Trek: Discovery season 3.


I've had access for the last week or so and i can say a bit about my experience.

 

It tries VERY HARD to stay neutral and instantly nopes out as soon as the user tries to steer it into political debates or any kind of NSFW stuff. Every time I ask anything political, it responds from a pro-and-con perspective.

 

It's also pretty good at detecting trolling and instantly ends the conversation as soon as it detects it. From my experience so far, it hasn't been watered down too much, and it's still genuinely useful.

 

Even when trying hard, I didn't get it to spit out controversial opinions, and so far it hasn't told me to kill myself.

 

Rarely, I got error messages instead of responses, telling me to wipe the current conversation. This usually happened while it was typing out its response, and I personally couldn't replicate them reliably, so I don't know if it was random or because of my prompt.

 

Judging by my experience so far, I'd say it's pretty safe for public use atm.


