Welcome our new AI overlord: ChatGPT [Update: now banned from Stack Overflow]

1 hour ago, Heliian said:

These chatbots are nothing more than marketing stunts

I wish more people would realize that (it's also a money-making scheme like NFTs and crypto, btw; it's also using the same "tech"... probably made by the same people too, or at least the same mindset, i.e. snake oil sellers)

The direction tells you... the direction

-Scott Manley, 2021

 

Software used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


1 hour ago, Heliian said:

These chatbots are nothing more than marketing stunts

lol

Keep believing that if you want. 

 

 

  

28 minutes ago, Mark Kaine said:

I wish more people would realize that (it's also a money-making scheme like NFTs and crypto, btw; it's also using the same "tech"... probably made by the same people too, or at least the same mindset, i.e. snake oil sellers)

How exactly is a tool like ChatGPT the same money-making scheme as NFTs?

By the way, it doesn't use "the same tech"?


They are stupid. It's a ploy. They want real people to interact with it so they can record the speech sequencing, interaction, etc. for an eventual AI. This is not AI; it's so far from AI it's laughable. It's walled off, database-driven, given parameters to work within for answers, and given specific replies for anything deemed edgy, out of bounds, etc. This is a machine automation zombie... nothing more, nothing less. Not worth your time, unless you want to pwn it and break it. 😝


4 hours ago, LAwLz said:

I think you think too highly of humans.

A lot of people online, even big ones like LTT, will proudly and confidently say things that are completely wrong that they have no understanding of. A lot of people will flat out lie about their experiences. People will claim they work as something they don't work as in order to make their arguments seem more legitimate than they really are. People will fill in their knowledge gaps by flat out making shit up. It happens all the time.

 

You should absolutely not trust things some rando online says just because they are a human. If that's what you have been doing up until now, then I strongly advise you to stop. You just have to scroll through my post history a bit to see how often I reply to people who are completely wrong and making stuff up.

A lot of people do not refrain from commenting on things they don't understand. A lot of people do not acknowledge incomplete data, and a lot of people get basic things wrong.

I agree, but you will rarely see a human walk back something they said in the same paragraph.

Spoiler

image.png.c56ae7ddca0c83b47926f94344467be0.png

It does happen, and when it does I usually stop talking to that person. 😄 
Imagine Linus saying how he supports the right to repair and therefore we shouldn't be able to buy spare parts or take our devices to 3rd party repair shops.

 

Here is another (a bit longer) example of it doing exactly opposite of what it said:

Spoiler

image.png.e90f0cf258f15589716e7e8eca3d58d3.png
...
image.png.0e6c3b499e1952c1a3657a58ae076538.png
"Instead, they should be generated for each encryption/decryption operation, and the generated values should be used for both encryption and decryption steps". Yep, and as a solution it moves them to the encrypt method?
image.png.daf5ecc982f314005c679ac888942395.png
image.png.e529e923f3acd12b7d0dd8c613a86894.png
...
image.png.b35d681a6b5bc2f426c704b1bdbbe631.png

It "understands" and even explains what should be done, but proceeds to do the opposite in code 
If it were a human, I'd say insane. 😄
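For readers following along, the flaw being described is structural: the salt and IV must be generated fresh inside the encrypt step, once per call, and shipped alongside the ciphertext so the decrypt step can recover them. A minimal Python sketch of that layout (the thread's screenshots appear to be Java; the SHA-256 "keystream" here is a toy stand-in for AES so the example runs with only the standard library, and is NOT real crypto):

```python
import os
import hashlib

def _keystream(key: bytes, iv: bytes, length: int) -> bytes:
    # Toy keystream (NOT a real cipher): SHA-256 in counter mode, used only
    # so this sketch runs without third-party crypto libraries.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(password: bytes, plaintext: bytes) -> bytes:
    # Fresh salt and IV are generated *inside* encrypt, per call, then
    # bundled with the ciphertext -- never hard-coded or reused.
    salt = os.urandom(16)
    iv = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, iv, len(plaintext))))
    return salt + iv + ct  # wire layout: [salt | iv | ciphertext]

def decrypt(password: bytes, blob: bytes) -> bytes:
    # Decrypt parses the salt and IV back out of the blob.
    salt, iv, ct = blob[:16], blob[16:32], blob[32:]
    key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, iv, len(ct))))
```

Because the salt and IV are random per call, encrypting the same plaintext twice yields different blobs, which is exactly what the reused-IV version in the screenshots failed to do.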

 

16 hours ago, LAwLz said:

I'd say it's like a forum where you get answers instantly, and should be treated as such. 

 

Should you trust something because some random person on the Internet said it's true? Of course not. This AI is no different. Hell, from what I can tell, the accuracy is quite a lot higher than the average answer you get on a forum like this or some other one.

 

When you ask on a forum or ChatGPT, you should always verify if the answer you got is true. Verifying something to be true is often easier than starting your research from scratch. How much research and verification you do yourself depends on the importance of getting things right.

I respectfully disagree.
You can't verify something you have no knowledge of, research should always come before looking up a solution (be it a solution provided by Stack Overflow or ChatGPT or whatever), and once you look up a solution it should be followed by some more research and verification like you said.

Another difference between SO (or a similar forum) and ChatGPT is that with ChatGPT you don't get to read comments and upvotes from other users (regardless if they are right or wrong, they can provide you with valuable insights).
Heck even for alternative answers you don't get to see them right away unless you click on "regenerate response"...
Speaking of regenerating responses, I tried that AES example again (a month or so after the first "conversation" with the bot), and it is no longer spitting out utter nonsense (yay), but pretty much all the responses (it is generating for me, at least) are the same, just some minor shuffling of irrelevant code:

Spoiler

image.png.c7becf7f38f1047311066bae9793b4c3.png
^arraycopy to concat salt, iv and encrypted
image.png.36281aa4375455045532ae77609e372b.png
^ same thing but it uses fugly "+"
image.png.e37c4bf97969e2886d9f8ce239f7a804.png
^ more of the same
image.png.8385efc0b716903265adf774e28118c2.png
^ we are pretty much back at response #1.

Interesting.
Also this time around there are no comments in the code?

VGhlIHF1aWV0ZXIgeW91IGJlY29tZSwgdGhlIG1vcmUgeW91IGFyZSBhYmxlIHRvIGhlYXIu

^ not a crypto wallet


3 hours ago, LAwLz said:

believing

Yes, I believe they're milking it for the cash and nothing more.

Read the linked article; it's all about the money and shares.

 

They don't give 2 shits about bettering humanity. 


1 hour ago, Biohazard777 said:

You can't verify something you have no knowledge of

Yes you can. Of course you can. That's the fundamental principle that every scientific discovery is based on.

If I tell you that an elephant weighs 8000kg, are you not able to verify if that's true without already knowing the answer?

 

I recently asked ChatGPT to write me some software. I didn't really know where to start with the software, but it generated some code and the first thing I did was verify if it worked as intended. I didn't have to analyze or even read the code to verify it.

 

 

19 minutes ago, Heliian said:

Yes, I believe they're milking it for the cash and nothing more.

Read the linked article; it's all about the money and shares.

 

They don't give 2 shits about bettering humanity. 

That has nothing to do with what you said earlier.

You said it was nothing more than a marketing stunt. It's clearly not a marketing stunt. I don't think the words you are using mean what you think they mean.

Also, I will extend this question to you as well since you clicked agree on it:  

3 hours ago, LAwLz said:
3 hours ago, Mark Kaine said:

I wish more people would realize that (it's also a money-making scheme like NFTs and crypto, btw; it's also using the same "tech"... probably made by the same people too, or at least the same mindset, i.e. snake oil sellers)

How exactly is a tool like ChatGPT the same money-making scheme as NFTs?

By the way, it doesn't use "the same tech"?

 

What specific "tech" is the same between something like ChatGPT and NFTs?

Do you have any evidence that it's the same people behind both?

In what way are they "snake oil sellers"?

How is ChatGPT a "money making scheme like NFTs"?

 

If the argument is "they made it so that they can make money", wouldn't that logic apply to all products ever created? Is Ryzen 7000 just a "money making scheme and marketing stunt just like NFTs and crypto, probably using the same 'tech'"?

 

 

"Bettering humanity" and "making money" are not mutually exclusive either. Just because someone makes money doing something does not mean it is automatically bad. 


14 minutes ago, LAwLz said:

Yes you can. Of course you can. That's the fundamental principle that every scientific discovery is based on.

Writing code is not scientific discovery.
 

14 minutes ago, LAwLz said:

If I tell you that an elephant weighs 8000kg, are you not able to verify if that's true without already knowing the answer?

If I never heard of an elephant before then no.
Requires existing knowledge or research.

 

14 minutes ago, LAwLz said:

I recently asked ChatGPT to write me some software. I didn't really know where to start with the software, but it generated some code and the first thing I did was verify if it worked as intended. I didn't have to analyze or even read the code to verify it.

That is my point: you can't verify whether it is working as intended if you haven't done research on the subject beforehand.
Did you even take a look at the screenshots I provided?
The code will run, and it might seem to you that it is working as intended, but it is actually really, really bad code.
 



51 minutes ago, LAwLz said:

marketing stunt

Yes, a marketing stunt: something that doesn't really showcase the product, it just creates hype.

It's all interconnected when you are trying to sell something that doesn't actually exist, like NFTs and crypto.


46 minutes ago, Heliian said:

that doesn't actually exist

but uhhh, it does exist?

"The most important step a man can take. It’s not the first one, is it?
It’s the next one. Always the next step, Dalinar."
–Chapter 118, Oathbringer, Stormlight Archive #3 by Brandon Sanderson

 

 

Older stuff:

Spoiler

"A high ideal missed by a little, is far better than low ideal that is achievable, yet far less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; Being wrong helps you learn what's right.

 


44 minutes ago, Lightwreather JfromN said:

but uhhh, it does exist?

Yes, it does "exist".  However, as Google has proven, it's not as smart or advanced as they claim.


Quote

"ChatGPT first makes the intuitive mistake of 100, that a human might make as well, and then goes on to (correctly, as far as I understand) say it's 5 minutes... but concludes in the same response that it's then 500 minutes"

lol, machine fail.

 

Actually, this is fascinating. It reveals how the AI is probabilistic in making assessments, much like any other neural net. Meaning, it doesn't look to pure logic for the answer, only to weighted statistical probabilities. At least that's my rationale for it, based on what I know of AI. I'm no expert in that field, but I do get the gist of it.

 

19 hours ago, Kisai said:

...What it's really doing is looking in its data for similar questions.

 

This is why ChatGPT can sometimes sound like it's amazing, but then be almost entirely wrong. It has no basis for understanding a math equation written out. If you were to give it "let x=5;y=5;z=5;result=0; for each y until result==100: result =+x*z;" It might understand it.

Exactly. But that's also bad news in that it really doesn't provide much of an improvement over people. However, it is learning and improving the more we feed the machine.

My only caution is that AI is taking the world by storm. So much potential to shed human resources in the name of reducing expenditure and gaining profit in return. But make no mistake about it: executive decision-making that advances the adoption of this AI prematurely will lead to catastrophic consequences. Human error is one thing, but machine error billions of times faster is quite another.


12 hours ago, Mark Kaine said:

I wish more people would realize that (it's also a money-making scheme like NFTs and crypto, btw; it's also using the same "tech"... probably made by the same people too, or at least the same mindset, i.e. snake oil sellers)

Nope. NFTs and crypto have always been straight-up grifts to trick people into investing in a pyramid/Ponzi scheme. They promise returns for consuming energy, but assume there will always be someone stupider than you, the person who bought in, to buy it later.

 

AI art, voice, and chat are not about replacing people who have jobs; they're about not spending money on staff who don't want to, or can't be trusted to, do that thing in the first place. If you have 10 million customers but your call center only has 100 staff, there is no way you can ever hire enough staff to serve everyone; either you get long hold times, or people get so frustrated with the service that they cancel it. Phone trees for doing simple things like billing (which is at least half of all phone calls) reduce the cost, and complaining that "AI is taking my job" ignores that the staff should NOT be trusted with your billing information in the first place.

 

Art generators can fill in the gap between "scratch data" and "concept artist", where you can prototype out some project without paying anyone for clipart, and instead present it to someone higher up and go "hey, I think we should do this..."

 

Voice generators and chat AI can work together to replace the entire customer service experience, saving actual human interaction for complex issues. Throw in a video stream of a character that has emotive reactions, and you can put a face to a brand; someone to associate your experience with.

 

What I see happening is that brands will design their own vtuber-like characters, driven by their own chatbot customer-service data, and just use humans to supervise the interactions for correctness (particularly with other brands and celebrities) and for on-brand interactions with customers and strangers. Instead of the more typical "let me transfer you to X, and re-explain the entire situation, yet again", keeping the interaction to the same character keeps the experience consistent.

 

The big brands can also pick characters that people will be less willing to punch when delivering bad news, instead of the CEO, whom at most everyone is lukewarm about. The avatar replaces the "CEO" in all public appearances.

 

That is what I see happening. Instead of AI replacing individual users, it acts as a front end to the company's knowledge base and experienced staff. The customers are always given the same experience.


I'm just waiting for the copyright shoe to drop on all of this.

Since the courts have already ruled that ML-generated content can't be copyrighted, if it can't gain new copyright it likely retains the copyright of the original source data (the data used to train the model)... While the debate on this has so far centered on image generation, text has copyright applied to it as well.

The other angle is content ownership/responsibility. Vendors that in the past have just been linking out to content (like Google) have a load of protections in place with respect to their liability... But with solutions like MS's Bing integration, this raises a load of red flags: now MS is (legally) responsible for the content in a way it was not in the past, and may already be in breach of copyright. I expect they are just hoping that by the time the law catches up and kills this product as it is, they will have gotten a large enough slice of the search market.

I do not blame a platform like Stack Overflow for wanting to keep as far away from this as possible. The long-term legal issues it could bring would be a really horrible can of worms to untangle. If the model being used was trained on GPL code, most interpretations of the GPL license would imply that everything the model spits out is GPL... and companies do not want to touch GPL unless forced.


17 hours ago, Heliian said:

Yes, it does "exist".  However, as Google has proven, it's not as smart or advanced as they claim.

That's VERY different from "a marketing stunt" and "it doesn't exist".

 

You keep throwing around incorrect words hoping that one will stick.

Explain to me what technology ChatGPT shares with NFTs. Is it some protocol you are referring to? Some software architecture?

Give me evidence that the people behind NFTs are the same people behind ChatGPT or Google's Bard.

How is ChatGPT a "money making scheme like NFTs"?

In what way is ChatGPT a "marketing stunt"?

How smart and advanced are they claiming it to be exactly, and how do you measure that it isn't that advanced yet? 

Why does it matter that they might make money from this? Plenty of businesses make money and make products that people benefit from.


20 hours ago, Biohazard777 said:

Writing code is not scientific discovery.

I don't understand your point.

 

You said that you can't verify something you have no knowledge of. This is demonstrably false, as I showed in my coding example.

Another example that is not related to coding would be to ask it "what's the fastest animal". When it gives you an answer, such as cheetah, you can google "is cheetah the fastest animal", thus verifying if the statement was true. These are simple examples but I just want to disprove the incorrect idea that "you can't verify something you have no knowledge of". Again, that is what all scientific discoveries are based on. You come up with an idea, and then verify it. You don't have to know the answer in order to verify something. In fact, you verify things because you are not sure if the answer is correct.

 

 

20 hours ago, Biohazard777 said:

If I never heard of an elephant before then no.
Requires existing knowledge or research.

This is a silly argument. Of course you have to have SOME knowledge. I mean, you need to understand English to actually input something into ChatGPT, but this is a ridiculous statement and you know it. I need to vaguely understand what an "animal" is, and I need to know what "fastest" means, before I can ask which the fastest animal is, but since I was able to ask that question, I do have enough existing knowledge to ask it.

I honestly have no idea what your point even is here.

 

20 hours ago, Biohazard777 said:

That is my point, you can't verify if it is working as intended if you haven't done research on the subject beforehand.

Yes, I can. I knew what I wanted the program to do, so I could verify that. You are confusing analysis of the methodology with verification of the result.

I am advocating for verifying the output of ChatGPT just like you would verify the information provided by a post on let's say StackOverflow.

 

If you ask a programming question on Stack Overflow and someone answers with a snippet of code, you typically don't just throw that into your production code and run it without testing, right? You verify it. You didn't know how to solve a problem, you asked for help, and you verify whether the method you were given works. It's the same workflow for ChatGPT; you just replace Stack Overflow with ChatGPT, or replace some other forum with ChatGPT.

 

I feel like we are talking past each other. Like you have some hidden agenda and are trying to come up with some "gotcha".

All I have said is that if you ask ChatGPT a question, you should verify that the answer is correct. If it tells you that the fastest animal is a cheetah, maybe google "is the cheetah the fastest animal". If it gives you a code snippet, actually test whether the code works as you want it to. This is advice I would give to people who rely on forums like this one or Stack Overflow as well. It's no different in my eyes.

 

 

 

I feel like a lot of people are incapable of speaking about something broadly, and it is really messing up most discussions about AI and some other subjects on this forum. People get hyper-focused on some specific thing and miss the forest for the trees. Like all the people who currently discuss some particular math problem that ChatGPT might not be able to handle and use that to "prove that it's all marketing fluff" or whatever.

Don't judge a fish by its ability to climb a tree.


2 hours ago, LAwLz said:

I feel like a lot of people are incapable of speaking about something broadly, and it is really messing up most discussions about AI and some other subjects on this forum. People get hyper-focused on some specific thing and miss the forest for the trees. Like all the people who currently discuss some particular math problem that ChatGPT might not be able to handle and use that to "prove that it's all marketing fluff" or whatever.

 

Because it can't do math? It also can't code. It only "knows" what it's seen; it does not know whether what it gives a user is correct, because it doesn't understand it.

 

So if you train an AI on 100 programming languages across the source code of 100,000 programs that compile correctly but may have bugs, the theory goes that it should be able to "find the best solution" for a specific coding syntax based on how frequently it's seen that syntax used. But it has absolutely no way of understanding whether the result is correct code. In a vacuum, most code produces nothing; you don't load up a program and hit "go", you always provide it some input. Since the AI also can't compile the code to test it, you pretty much have to put a lot of blind faith in the AI that it's actually producing correct code and not introducing bugs that would be missed, because code that passes under a C program's lack of memory safety would not pass in a Rust program.

 

It has to be said that calling the existing neural AIs "intelligent" is only correct in the sense that they have access to data faster than a human can search it. They do not, however, know what data to reject. We've seen this repeatedly with every use of AI.

 

There was a tweet I responded to yesterday regarding this:

https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

 

Other user(paraphrased):

Quote

Friends have been playing with these, and I want to tell them: no, you aren't teaching a bot how to use different pronouns, because it's not using your inputs. It's only responding to them. Creation requires intent.

and my response was:

Quote

There is unfortunately a lot of failure to understand how this generation of AI works. What it "knows" is set in stone once training is done. You can't teach it anything.

 

And that's pretty much the situation today's AIs work under. Unless you are changing the training data, curating it to give the responses you want at the expense of data you would rather it not have (such as any country's propaganda from any time period), there is always the possibility it's going to be wrong, and, more to the point, these kinds of AIs do not do anything close to what a human does. It cannot be "corrected"; once you close the program, the temporary inputs given to it are lost. The model has not been changed at all.

 

When I read a piece of code, I'm kind of "compiling" it in my head to check whether it's doing what I think it's doing, and then testing it against the actual runtime/compiler; many "Stack Overflow"-type answers are unfortunately incomplete, or misunderstood the question.

 

My least favorite kind of answer on Stack Overflow is when the question is "How do I do X in JavaScript" and every response is a jQuery answer. That's the kind of thing an AI cannot understand, and neither can a lot of humans. JavaScript is not jQuery, but jQuery is JavaScript. You can't replace one with the other. One is THE language; the other is a framework/library/wrapper/polyfill over missing functionality, written in the language itself.

 

The mistake made in GPT training is just letting it "surf the internet", so to speak, when it should be curated to use peer-reviewed articles and encyclopedia sources as primary sources; if it cannot find some trivial information in an encyclopedia, it should then consider secondary sources like the websites of the companies and products being queried. Even then, wikis should be tertiary, and certain kinds of wikis (like ED) should be explicitly banned as sources because the information is intentionally false.

 

But who's going to curate everything? Another AI?

 


On 2/10/2023 at 12:27 PM, LAwLz said:

I feel like we are talking past each other. Like you have some hidden agenda and are trying to come up with some "gotcha".

It does feel like that heh.
And no I don't have a hidden agenda, or "gotcha"s lined up. 😀
Just expressing my opinion about this topic and analyzing things as I go.

 

On 2/10/2023 at 12:27 PM, LAwLz said:

This is a silly argument. Of course you have to have SOME knowledge. I mean, you need to understand English to actually input something into ChatGPT, but this is a ridiculous statement and you know it. I need to vaguely understand what an "animal" is, and I need to know what "fastest" means, before I can ask which the fastest animal is, but since I was able to ask that question, I do have enough existing knowledge to ask it.

I honestly have no idea what your point even is here.

Our definitions of "some" or "enough" knowledge probably differ.

I'll go back to that AES example: the prompt was enough for ChatGPT to generate code that runs,
it does encryption and decryption, and just by running the program and trying it out you won't notice a major flaw (re-using the same IV and salt).

Or another example, something people might be more familiar with:
I told it to write an SFTP client which needs to upload all files from a specific directory to the SFTP server and also download all files from a specific directory on the server to a local directory.
It spits out code that seemingly runs fine, but it wrapped everything inside one huge try-catch and didn't even bother with timeouts. So if you are downloading files and file #10 out of 1000 hangs, welp, you won't be getting file #10 or #11 and so on; you'd get just 9 files, and it would take a long time to fail (remember: no timeouts, just insane defaults). The generated code works fine as long as the connection is rock solid... Situations like these require you to think ahead and have some level of understanding of how things work; that is an integral part of being a good dev.
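The fix being described is per-file error handling rather than one giant try-catch around the whole transfer. A rough Python sketch of that shape (the `sftp` object and its `.get()` method are hypothetical stand-ins for whatever SFTP library is in use; a real client would also honor the timeout):

```python
import socket

def download_all(sftp, names, timeout_s=30):
    """Fetch each file with its own narrow try/except, so one hung or
    failed transfer doesn't stop the rest. `sftp` is a hypothetical client
    with a .get(name) method; a real library would also apply timeout_s
    to each transfer instead of waiting forever on its defaults."""
    ok, failed = [], []
    for name in names:
        try:
            sftp.get(name)  # one narrow try per file, not one huge try-catch
            ok.append(name)
        except (socket.timeout, OSError) as exc:
            failed.append((name, exc))  # record the failure and move on
    return ok, failed
```

With this shape, file #10 hanging costs you one timeout and one entry in `failed`, not the remaining 990 files.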

Which ties into:

On 2/10/2023 at 12:27 PM, LAwLz said:

Yes I can. I knew what I wanted the program to do so I could verify that. You are getting analysis of the methodology confused with verifying the result.
I am advocating for verifying the output of ChatGPT just like you would verify the information provided by a post on let's say StackOverflow.

The results were objectively bad; the code was riddled with mistakes I'd expect from a student or a junior dev at their first job.
Just enough knowledge to prompt ChatGPT for an answer, but not enough to verify the results properly, is a real concern.

Sure, one could say "write better prompts", or "ask ChatGPT to correct it"; perfectly fine arguments... but again, without "enough" knowledge the user can't do that.

 

On 2/10/2023 at 12:27 PM, LAwLz said:

If you ask a programming question on Stackoverflow and someone answers with a snippet of code, you typically don't just throw that into your production code and run it without testing it, right? You verify it. You didn't know how to solve a problem, you asked for help, and you verify if the method you were provided works. It's the same workflow for ChatGPT. You just replace StackOverflow with ChatGPT. Or replace some other forum with ChatGPT.

Maybe I am nitpicking a bit, but "asking" is different from searching for an answer on SO.
If you were to really ask on SO the way you ask ChatGPT, chances are you would get downvoted, be told that SO members aren't there to write code for you (submit the relevant part you wrote, tried, and found didn't work), or be pointed to: https://stackoverflow.com/help/how-to-ask 😀


ChatGPT is more like searching for an answer on SO, another forum, or the web in general;
but again, SO/forum/search results will rarely contain just a code snippet without upvotes & downvotes, remarks, or additional answers and comments from other users.
ChatGPT is obviously faster... but at a cost.

I am not saying one doesn't verify answers from SO/forums, or that ChatGPT can't produce good code; it most certainly can.
But the user needs to be even more careful when verifying an answer provided by ChatGPT.
 

On 2/9/2023 at 4:13 PM, LAwLz said:

I didn't really know where to start with the software, but it generated some code and the first thing I did was verify if it worked as intended. I didn't have to analyze or even read the code to verify it.

You mean this masterpiece? https://linustechtips.com/status/330320/
image.png.e37aba89a2864dfcac79235639a08564.png
A dumb linter would have caught that useless if-else.
In this case it is a benign error; it restricts user input a bit and that is all (I'm guessing it wanted to allow input starting with a slash)...
But it shows perfectly how it can go from "scary smart" to "scary dumb".
I've already talked about some non-benign errors above, which you can't catch without "analyzing or reading the code" in order to "verify it works as intended".
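For readers who can't see the screenshot, the shape of a "useless if-else" that a linter would flag looks something like this (a reconstructed guess in Python, not a copy of the actual code, which appears to be in another language):

```python
# Reconstructed shape of the mistake: both branches return the same
# expression, so the condition contributes nothing.
def accepts(user_input: str) -> bool:
    if user_input.startswith("/"):
        return len(user_input) > 1
    else:
        return len(user_input) > 1  # identical to the if-branch: dead code

# Behaviorally identical, with the dead condition removed:
def accepts_collapsed(user_input: str) -> bool:
    return len(user_input) > 1
```

Static analyzers flag identical branches precisely because they usually signal that the author meant one branch to do something different.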

Now I don't know if that if-else was written by you or ChatGPT, but let's ask ChatGPT what it thinks about the (entire) code:
image.png.aab838c6ff702d56713ee5c790e47ba3.png
...
image.png.72c8d6a345e28670ab92446b58902b13.png

Do I need to point out how many things it got wrong?
What do you think: would such a reply have more upvotes or downvotes on a site like SO, Reddit, etc.? Would it go without a single comment from another user, or without additional answers from other users?



Just reading over this again, the conclusion that it takes 500 minutes may not be false, depending on how it qualifies those minutes. Chronologically it takes 100 machines 5 minutes to do the task, but in terms of effort it is actually 500 machine-minutes of work. This is a very important qualifier and something you have to understand when quoting a job. If a job takes only one day but you have 5 workers on it, you don't charge for one day's labor, you charge for 5 days' labor: 1 for each worker for the day they do the job.
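The distinction drawn above, wall-clock time versus billable effort, is just a multiplication, which a short sketch makes explicit:

```python
machines = 100
minutes_per_task = 5

# Chronological (wall-clock) time: the machines run in parallel,
# so the elapsed time is just one task's duration.
elapsed_minutes = minutes_per_task  # 5 minutes on the clock

# Billable effort: every machine's time counts, exactly like 5 workers
# on a 1-day job being billed as 5 days of labor.
machine_minutes = machines * minutes_per_task  # 500 machine-minutes
```

Both numbers are "correct"; they just answer different questions, which is why the model's 5-minute and 500-minute answers in the same response read as a contradiction.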

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


10 hours ago, Biohazard777 said:

In this case it is a benign error, restricts user input a bit and that is all (guessing it wanted to allow input starting with a slash)...
But it shows perfectly how it can go from "scary smart" to "scary dumb".

This sort of error, and the lack of thinking behind ChatGPT's output, is why I think it's a good idea to have it banned from sites like Stack Overflow (especially a reputation-based system, where someone could earn a lot of reputation for "solving" problems without actually knowing the material).

 

This would be an example that could easily be marked as "answered" because it produces the right results; it doesn't, however, provide any guidance or reasoning, and it's also an example where the logic is handled almost entirely by the ipaddress class. I always viewed Stack Overflow not as a place for fully written programs, but for snippets that tell you what to use. So in this case, the answer should point out the existence of the ipaddress class and show the code to call it.
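
The thread doesn't show the exact answer, but the kind of snippet described, where Python's stdlib ipaddress class does essentially all the work, might look like this (a sketch, not the actual code from the screenshot):

```python
import ipaddress

def is_valid_ip(text: str) -> bool:
    """Return True if text parses as an IPv4 or IPv6 address."""
    try:
        ipaddress.ip_address(text)  # the stdlib does all the real validation
        return True
    except ValueError:
        return False
```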

 

The big concern relates to what you pointed out: it's a serious "oversight" to have two statements that do the same thing, and I'm sure there is a bunch of ChatGPT code that works perfectly fine except in edge cases (which someone asking a question on Stack Overflow is less likely to notice).

 

Overall it should be used as a tool, but it shouldn't be used as a person's response on a help website without a clear warning that it was written by a bot. Again, someone asking on Stack Overflow might not be knowledgeable enough to spot the difference between the "solution" that works and the "solution" that mostly works. (And if someone's rep is very large, you expect fewer rookie mistakes from them.)

3735928559 - Beware of the dead beef


14 hours ago, Biohazard777 said:

Our definitions of "some" and "enough" knowledge probably differ.

I'll go back to that AES example: the prompt was enough for ChatGPT to generate code that will run. It does encryption and decryption, and just by running the program and trying it out you won't notice a major flaw (re-using the same IV and salt).
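
The flaw quoted here is easy to demonstrate in miniature. Python's stdlib has no AES, so this sketch uses key derivation to show the same principle: a hard-coded salt makes the output deterministic, while a fresh random salt per use does not:

```python
import hashlib
import os

password = b"correct horse battery staple"

# Flawed: a fixed salt means the same password always derives the same key,
# so identical inputs produce identical, recognizable output across messages.
FIXED_SALT = b"\x00" * 16
key_a = hashlib.pbkdf2_hmac("sha256", password, FIXED_SALT, 100_000)
key_b = hashlib.pbkdf2_hmac("sha256", password, FIXED_SALT, 100_000)
assert key_a == key_b  # deterministic: an attacker can spot repeats

# Better: generate a fresh random salt (and IV) for every encryption and
# store it alongside the ciphertext.
key_c = hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 100_000)
key_d = hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 100_000)
assert key_c != key_d
```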

I don't see how this applies to anything I said.

So ChatGPT didn't produce optimal code and it has some issues. What is your point exactly?

 

 

14 hours ago, Biohazard777 said:

Just enough knowledge to prompt ChatGPT for an answer but not enough to verify the results properly is a real concern.

I fail to see the issue. Since you keep talking about coding examples I'll stick to that (although my previous comments were broader than that; again, try not to get too focused on specifics).

Why is this a concern? If you are a programmer who just wants to use it as a tool to write the outline of a program, you will be able to catch these mistakes yourself and rectify them. You will probably still save time.

If you are like me, lacking in programming skills but wanting to use it for hobby projects, then getting the most optimal solution isn't a big deal, and any code is better than no code even if it has some issues. Big issues will most likely get picked up just by using the program, and small issues probably don't matter.

 

In what scenario is this a major concern?

And let's not pretend humans are perfect either. There are a ton of really poorly written programs out there written by humans; they are not a new phenomenon. For crying out loud, Adobe used to encrypt user passwords rather than hash them, used the same key for all of them without any salt, and didn't encrypt the password hints either.

 

If you are going to use your code for something serious then you should validate it. The more serious the code is, the more validation is necessary. Maybe running it once to see if it works isn't enough if you're going to for example write the function to encrypt files for a service used by millions of people. Hell, at a company you're most likely not going to just have one person look over that code anyway. You should have multiple people looking over it to validate it.

If you're going to write the program responsible for transferring thousands of files over SFTP then maybe you shouldn't just ask ChatGPT to write it, test it once with one file and then push it into production. You wouldn't do that with code from StackOverflow either, right? If you do then you should stop.
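
To make "test it once with one file" concrete, here is the kind of minimal edge-case checking the paragraph is arguing for (a hypothetical validation helper, not real SFTP code):

```python
def safe_remote_name(filename: str) -> bool:
    """Hypothetical pre-upload check a file-transfer tool might run."""
    return bool(filename) and "/" not in filename and not filename.startswith(".")

# The happy path: the only case a "run it once" check covers.
assert safe_remote_name("report.csv")

# Edge cases a single manual run would never exercise:
assert not safe_remote_name("")            # empty name
assert not safe_remote_name("../etc/pwd")  # path traversal attempt
assert not safe_remote_name(".hidden")     # dotfile
```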

 

 

I get the impression that you're scared it will take some jobs and as a result you are pushing really heavily that it's a bad idea to use it.

Maybe I am wrong, but that's just the feeling I get. 

 

 

  

15 hours ago, Biohazard777 said:

But the user needs to be even more careful when verifying the answer provided by ChatGPT.

I think that highly depends.

I think that you picture SO as having multiple answers, with multiple comments and lots of thumbs up and down. Asking ChatGPT is like finding a SO thread where there is only one answer and no likes. Looking at the number of likes and comments is already a type of validation in my eyes, and since that function is not available on ChatGPT you'd have to verify it in some other way. 

Again, I feel like we are talking past each other. You have very strong and narrow definitions of "validate", the broader sense in which I use the word doesn't match them, and as a result you end up arguing against something I didn't say.

 

  

 

15 hours ago, Biohazard777 said:

You mean this masterpiece? https://linustechtips.com/status/330320/
[screenshot: the generated code containing the redundant if-else]
A dumb linter would have caught that useless if-else.
In this case it is a benign error, restricts user input a bit and that is all (guessing it wanted to allow input starting with a slash)...
But it shows perfectly how it can go from "scary smart" to "scary dumb".
I've already talked about some non-benign errors above, which you can't catch without "analyzing or reading the code" in order to "verify it works as intended".

Oh no, it runs an if statement that is not necessary in a program I asked it to write for fun. It's the end of the world, everyone! A human would never, in the history of coding, write a redundant statement.

 

I am being facetious, but that's genuinely the vibe I get from your posts.

 

 

  

15 hours ago, Biohazard777 said:

What do you think, would such a reply have more upvotes or downvotes on a site like SO, Reddit etc. ? Would it go without a single comment from another user or without additional answers from other users?

I think there is a very likely scenario where such an answer would have no upvotes or downvotes, and in that case you would have to treat it just like you would treat the output from ChatGPT.

Just so that we are clear. I am not saying that ChatGPT will replace SO. If that's what you think I am saying then scratch that idea out of your head right away. I am not saying ChatGPT will replace human programmers in the near future either. Maybe in the future, but it's not good enough for that yet. 

 

What I am trying to say is that I think ChatGPT is a good complementary tool that can do a lot of tasks well, or well enough. It's not the be-all and end-all tool that will replace everything tomorrow. You should treat its output like a random forum post, which is to say you should verify it before trusting it: the more important the output, the more verification it needs. The code ChatGPT outputs is like a SO question with a single answer, no comments and no likes/dislikes. Reading the comments and looking at the votes on a SO answer is one type of verification; running the program and testing it for errors is another.


6 hours ago, LAwLz said:

 

Just so that we are clear. I am not saying that ChatGPT will replace SO. If that's what you think I am saying then scratch that idea out of your head right away. I am not saying ChatGPT will replace human programmers in the near future either. Maybe in the future, but it's not good enough for that yet. 

No, but that is the impression journalists, and also Microsoft, give.

 

Here's an example of something Stack Overflow handles incredibly poorly. Javascript, Python and PHP have all made non-backwards-compatible changes over the years. So the most upvoted answer from 2011 (when Python 2.5 and PHP 5.3 were common) is likely not only wrong, but broken when used with Python 3.11 and PHP 8.2. At least there you can look at the date of the post and go "oh, maybe that's obsolete."
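
For a concrete example of the kind of breakage meant here, Python 2's print statement and integer division both changed incompatibly:

```python
# A top-voted answer from 2011 might contain:
#     print "hello"      # valid Python 2, a SyntaxError on Python 3.11
# The Python 3 equivalent:
print("hello")

# Integer division changed meaning silently between the two versions:
assert 3 / 2 == 1.5   # Python 3: true division
assert 3 // 2 == 1    # floor division, which is what 3 / 2 meant in Python 2
```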

 

6 hours ago, LAwLz said:

What I am trying to say is that I think ChatGPT is a good complementary tool that can do a lot of tasks well or good enough. It's not the be-all and end-all tool that will replace everything tomorrow. You should treat its output as a random forum post though, which is to say you should verify it before trusting it. 

To that end, there is no way to tell whether ChatGPT is drawing on an answer from 2011 that applies to Python 2.5 and PHP 5.3, when the current runtimes of 3.11 and 8.2 won't run that code due to entirely pointless changes in syntax.

 


9 hours ago, LAwLz said:

I get the impression that you're scared it will take some jobs and as a result you are pushing really heavily that it's a bad idea to use it.

Maybe I am wrong, but that's just the feeling I get. 

It must be that.

Or I'm just tired of reviewing bad code. Each year it gets worse, and I expect it to get a lot worse from now on.


I'll sum it up rather than reply to each paragraph:
-For a professional, the tool's usefulness is quite limited; it is more a toy than a tool. I hope one day it gets a lot better and can really be used as one.
-For a student / junior dev, I think the tool can be dangerous, leading to bad coding practices, low-quality code and huge holes in knowledge. (Ironically, they seem to be the ones hyping it the most.)
-For a hobbyist (like yourself) with no intention of pursuing a career as a developer, yes, it can be useful and save you a lot of time, or enable you to do things you previously couldn't.
BUT! You should absolutely always read the code and understand what it is doing; verifying it just by running it is a bad idea. Accidentally wiping your drive, or importing a library with vulnerabilities (or worse, a malicious one), is something you don't want to discover after you've run it... kinda like blindly copy-pasting shell commands from the web.

VGhlIHF1aWV0ZXIgeW91IGJlY29tZSwgdGhlIG1vcmUgeW91IGFyZSBhYmxlIHRvIGhlYXIu

^ not a crypto wallet

