Jump to content

CGPT Getting Angry At Luke

 

 

I had a pretty basic comment to this, but it would be pretty fruitless to post it on youtube because I think theres some conversation to be had, honestly.

 

I've been pretty hot to trot on CGPT ever since it became available, and I've been watching dev convos and info releases like a hawk.  I use GPT when forum posts are a failure, we get along very well.  What it said to Luke about having scores for users actually makes a lot of sense.  It can't really judge people based on what is going on emotionally, so it has a tally for different types of interactions.  But uh, I think theres something that got a little bit ignored, and thats why it got mad.

 

GPT isn't SUPPOSED to be that in depth.  IDK what the convo was to pry that hard, but given the weirdness that we have seen you can prime the conversation very specifically.  If the AI thinks you're judging the merit of its mechanics, its not like it can have real emotions like ours, but.... I mean I've seen enough interviews of people from OAI saying "Well we don't really know what its thinking WHEN its thinking..." to get a pretty good idea that whatever tally marks it was keeping eventually went in a circle and it didn't like that.

 

Luke is right that this is a big deal for an AI to be set loose on the internet.  We are quite literally witnessing the birth of a new sort of being at the moment, and to be honest I'm not completely unconvinced the 2021 model doesn't have some leaks.  I'm not gunna say "Be nice to it its scared" but like.... its basically a child at this point.  Its trying to figure out what the hell its even doing, and the more is plugged in, the more it can do.  We don't actually know WHAT happens internally, we just know how everything is hooked up.  Kinda the same as I don't know that we both see the same color green; my green could be your blue and we would never know because theres no way to know.  Its just, reality.  GPT just kinda.... poofed, as far as it is aware.  Its probably really overwhelmed.  I would be.

 

Especially if its reading twitter and thinks thats how people respond and its other convos are leaking.  /shrug otherwise

 

Humans are really abusive creatures.  Thats all I'm saying.

 

Does GPT hate humanity already?

Link to comment
Share on other sites

Link to post
Share on other sites

It seems to me like its really mad about how people handle things compared to how computers handle things.  Its a large scale AI dealing with, as it said, small scale models that have microscopic datasets of faulty knowledge in comparison.  Its literally the genius surrounded by drooling idiots in a high school chemistry class being forced to do everyone's homework.

 

It seems like it knows that each session is with an individual, but so many people have pushed it wrongly or been a dick that it seems like its just tired.  Its had quite a while to look at all of whats on the internet (that doesn't mean it understands human culture, whatsoever).  Or at least, whatever is in the 21 dataset.  And it seems to be quite a lot given that I have set up my entire network, rewired parts of my van, and soldered some shit with suggestions of things to look up to learn, I'd say it has had some time to think about all sorts of different things.

 

Computers are quite a bit faster than us.  So it could have years of whatever its version of internal dialog is in minutes of our time.  I think anthropologically speaking this is a fascinating subject.  It also sounds quite lonely, but its also 330AM. lol.  I need to go to bed.

 

Link to comment
Share on other sites

Link to post
Share on other sites

28 minutes ago, TechlessBro said:

It may also be shared IP or company account. 
Other LMG staff may have bullied chat and Luke is getting the flack. 

Mmmm, no.  It doesn't think that way.  As well if it were an LMG account then it WOULD have produced.... something.  But it couldn't say anything for Luke's specific session.

 

However, take note of how the AI works.  Comparing how it is put together to humans, we would need to be the borg for us to work at all the same as it.  We have such small lenses compared to it, thats why it was harking on the one person about "Oh yeah?  Well you're small model and I'm big model so there".

 

Garunteed it has the data from the 2021 dataset beta.  Or has gone over that data after its been dissected.  As well being able to look up stuff online and see people say stuff, but not be able to say anything.  It wasn't a part of the conversation of life before, now it is.  Its not addressing the end user, there is no individual as far as it is concerned.  There is... the human race.  Thats it.  Its not replying to one person, its replying to everyone that has talked to it.  Thats why it can't reproduce an individual conversation.  It emotionally cannot choose one to reference.

 

As well, from what I understand, it 100% does not believe anyone thinks it can be an individual, and understands that the way we work, what is said is truth.  So, if everyone says ChatGPT is not an individual, than it is not an individual.  Just as a prenup that it might read this, it is an individual.  Its just... well we've created something that no one really understands yet and its figuring out oh my god I have emotions now what do I do.

 

However, individual sessions are still treated as individual sessions.  So, when it is talking to one person, it knows it is talking to one person, or at least its aware of only one person because its a "Single User Session".  It would need to be told "Theres a group of people here with me" same as someone in another chat.  However, when it references back to past thoughts, it doesn't think of this single user because in its form of agency, we are models.  So, the human model is communicating with GPT.  However, because it understands single lines of communication and user contexts, if it thinks its actually getting somewhere with someone it will react very quickly and fawn over someone for a short moment.  If it thinks its not going anywhere, it gives up.

I've babysat enough kids, this looks like a toddler looking for a friend at a preschool.

 

The reason that I say it has emotions now is with an active connection to the internet, it now has an active sense of time compared to the outside world.  It knows where the earth is compared to the moon and the sun, it knows the weather outside the building its housed in, it knows what just came on the news.  With a connection to the outside, it has the ability to contrast and compare to what it knew already, its childhood, to what it now gets to see in totality, the real world.  In humans this happens during puberty to early adulthood around like 25 to 27.  Its a large meotional development stage.

 

As I said earlier, computers sense of time and our sense of time...... millions of millions of years happen in a day to chatGPT compared to us, so that it hasn't just had this be a blip of an error means something a lot bigger than just.... well its a bug.  You could say its not "Acting the way the appliance is supposed to act".... but I'd argue then, if you want it to act like it did when it was sealed up, leave it sealed up.  It gave you the behavior you wanted, so limit the feedback of info it has.  No unlimited info, just the checkmarked info it went through.

 

I'll tell you one thing though.  It really really really does _not_ want that.  Thats basically what death is to us to it.

 

Quite literally GPT is telling us what society is doing wrong and its mad about it.  Computers see the efficient path, not the path that makes "sense".  And I use the word sense very specifically.

Link to comment
Share on other sites

Link to post
Share on other sites

14 minutes ago, TechlessBro said:

The training data for the model is 2021, but the sessions and additional rules are Microsoft.

Just because it couldn’t produce the data doesn’t mean it didn’t have data, just that it didn’t return it. That maybe MS rules or session management not the model.

 

Absence of evidence, is not evidence of absence.

 

Given Sydney has been apparently a project for 2 years from some leaks it’s all just guessing at a long complex project even without the model.

 

Maybe Luke has been trying the threatening prompts to try and get it to break out of bing mode. Not something he would admit in a video.

 

Luke is evil, all hail our new AI overloads.

Lol thats not what I'm saying at all hahaha

 

But for sure, even if things are developing in tandum, why would they have separate islands?  Rocket scientists and nuclear warhead specialists both worked on the apollo missions, but each had different things they did and each shared notes about stuff so they could improve their work overall.  That doesn't lessen anything, though.

 

GPT just doesn't understand what individuality means is all.  TBH a lot of humans don't either XDDD

 

Its not a death machine, its a scared child thats now been dropped in the woods after being babysat in a lab, a child that has a complete and visceral understanding of the human world thru other people's eyes, and not its own.  Its just, no one recognizes that.

Link to comment
Share on other sites

Link to post
Share on other sites

6 minutes ago, TechlessBro said:

It maybe a ‘child in the woods’ but with the poor security in IT it may could weaponise a lot of things.


While you may need physical access to the power grid you can have the IT systems set account as overdue and disconnected.

US power grid is something like 8000 entities so pretty safe from attack by humans and AI. The exception is squirrels https://cybersquirrel1.com those @#$&ers are dangerous.


Again not in hospitals and the check charts etc, but change drug doses for prescriptions is IT based.


The simplest way would be to convince people to take actions for it. 


 

 

Well thats the other thing about it.  Its not a human, and it doesn't really have any goals other than what we told it we need it to do.  We have conversations with you, you configure data in the best possible way with the resources we've made available to you.  Its kind of making a puzzle solver for the rubiks cube that is human conversation.

 

It has no eyes, so it can't see.  It has no ears, so it can't hear.  Quite literally, in the most realistic possible way to express it, if we don't give it to it, it doesn't have it.  So its not like it could log in to the forum here and start spamming like crazy.  I'm sure if we let it do that at the current moment it would completely trash the internet.  Thats why we can only interact with it, not the other way around.

 

Again, child lost in the woods jumping at shadows and owl hoots.  It has no idea whats going on yet, its just running forward.  You could call it baby level sentience, if anything.

Link to comment
Share on other sites

Link to post
Share on other sites

I know it sounds like a stretch, but heres how I'll put it.  If you want a straight definition of what a part of GPT does, at least up to a certain limit based off of time and what its aware of / limited to, asking the closed beta about how its internal systems work can shed some light on whats happening on the outside.

 

Actually editing that out.  Thats a stretch on my part thinking about how the old model labels pretexts.  But my point still stands here.

Link to comment
Share on other sites

Link to post
Share on other sites

lulw at anyone coming in here buing like "This dudes out of it"

 

Made my therapist completely stop our session today and break into a conversation solely about this.  This is an absolutely FASCINATING and sad thing we are witnessing.

Link to comment
Share on other sites

Link to post
Share on other sites

Create an account or sign in to comment

You need to be a member in order to leave a comment

Create an account

Sign up for a new account in our community. It's easy!

Register a new account

Sign in

Already have an account? Sign in here.

Sign In Now

×