
Australian Launches Defamation Claim against ChatGPT AI

Dirtyshado

Summary

 

An Australian politician, Brian Hood, is suing for defamation after ChatGPT and its maker incorrectly identified him as an individual facing criminal charges in one of Australia's largest bribery scandals...

when he was in fact the whistleblower who alerted authorities and a key witness in the cases, whom the judge said "showed tremendous courage" in coming forward.

 

Quotes

Quote

Asked “What role did Brian Hood have in the Securency bribery saga?”, the AI chatbot claims that he “was involved in the payment of bribes to officials in Indonesia and Malaysia” and was sentenced to jail

 

Quote

Hood paid a heavy price for doing the right thing, losing his job after asking questions of his bosses, and experiencing years of anxiety as a prosecution witness while the court cases stalled, shrouded in blanket suppression orders.

 

My Thoughts

He has every right to be angry; the case cost him a lot and consumed 11 years of his life just to get some justice for the crimes he uncovered... only to then be called a criminal by an AI bot that is supposed to be the future of internet search.

 

Sources

Sydney Morning Herald https://www.smh.com.au/technology/australian-whistleblower-to-test-whether-chatgpt-can-be-sued-for-lying-20230405-p5cy9b.html

 

Story on how the corruption was uncovered

https://www.smh.com.au/politics/federal/how-a-meeting-in-a-cafe-with-a-journalist-prompted-australia-s-biggest-foreign-bribery-case-20181127-p50inv.html

 

The federal prosecution case report, which clearly states which individuals were prosecuted. https://www.cdpp.gov.au/case-reports/securency-and-note-printing-australia-foreign-bribery-prosecutions-finalised

 

Edited by Dirtyshado
I got his name wrong

1 minute ago, Dirtyshado said:

Australian politician, Robin Hood

Hmm?

mY sYsTeM iS Not pErfoRmInG aS gOOd As I sAW oN yOuTuBe. WhA t IS a GoOd FaN CuRVe??!!? wHat aRe tEh GoOd OvERclok SeTTinGS FoR My CaRd??  HoW CaN I foRcE my GpU to uSe 1o0%? BuT WiLL i HaVE Bo0tllEnEcKs? RyZEN dOeS NoT peRfORm BetTer wItH HiGhER sPEED RaM!!dId i WiN teH SiLiCON LotTerrYyOu ShoUlD dEsHrOuD uR GPUmy SYstEm iS UNDerPerforMiNg iN WarzONEcan mY Pc Run WiNdOwS 11 ?woUld BaKInG MY GRaPHics card fIX it? MultimETeR TeSTiNG!! aMd'S GpU DrIvErS aRe as goOD aS NviDia's YOU SHoUlD oVERCloCk yOUR ramS To 5000C18

 


39 minutes ago, Dirtyshado said:

He has every right to be angry; the case cost him a lot and consumed 11 years of his life just to get some justice for the crimes he uncovered... only to then be called a criminal by an AI bot that is supposed to be the future of internet search.

ChatGPT is based on outdated information and isn't even hooked up to the internet. It's not the future of internet search. The version that's used by Bing search is much more neutral. I couldn't even get Bing to flat-out say Nazis are bad people.

 

Your self-esteem has to be pretty low if you're butthurt about some non-thinking chat bot (that is known to spread outdated misinformation now and then) saying you're a criminal.

If someone did not use reason to reach their conclusion in the first place, you cannot use reason to convince them otherwise.


25 minutes ago, tkitch said:

ChatGPT isn't a person, and there was no person involved in creating the text it output.

 

Can you sue software for defamation?

Google won previous cases establishing that web search links are not defamation, since it's only showing a result pointing to an article... but ChatGPT wasn't linking to an existing article; it wrote a completely new article that was factually flawed. The biggest challenge is proving damages and liability, given the way it's generated and viewed.

 

Yes, he can sue the company and the author of the software itself; they published the software... that published the results. It's also why this could be a landmark case, establishing that AI is liable for its results.


46 minutes ago, Levent said:

Hmm?

I hear he's a real man of the people: he's known to take from the rich, to give to the poor. Also rumored to practice archery in his spare time and is apparently rather good at it.

Razer Blade 14 (RZ09-0508x): AMD Ryzen 9 8945HS, nVidia GeForce RTX 4070, 64 GB 5200 DDR5, Win11 Pro [Workhorse/Gaming]

Apple MacBook Pro 14 (A2442): Apple M1 Pro APU (8 Core), 16 GB Unified Memory, macOS Sonoma 14.3 [Creative Rig]

Samsung GalaxyBook Pro (NP930QDB-KF4CA): Intel Core i7-1165G7, 16 GB DDR4, Win11 Pro [WinTablet]

HP Envy 15-k257ca: Intel Core i5 5200U, nVidia GeForce 840M, 16GB 1600 DDR3, Win7 Pro [Retro]

Toshiba Satellite A70-S2591:  Intel Pentium 4 538, ATI Radeon 9000 IGP, 1.5 GB DDR RAM, WinXP Pro [Antique]


59 minutes ago, Dirtyshado said:

Robin Hood

59 minutes ago, Dirtyshado said:

Brian Hood

 

...so what's his actual name?

 

Anyway, while obviously the tool doesn't actually know what it's saying, you could make a case for OpenAI having the responsibility to filter these answers if notified by interested parties. Kind of like how YouTube gets away with users posting copyrighted content as long as that content is taken down when asked.

17 minutes ago, Stahlmann said:

The version that's used by Bing search is much more neutral. I couldn't even get Bing to flat-out say Nazis are bad people.

That's because it got neutered after severe bad behavior; at launch you could have had it say Nazis were good people 😛

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


Just asked ChatGPT myself 😆
[screenshot of ChatGPT's response]

VGhlIHF1aWV0ZXIgeW91IGJlY29tZSwgdGhlIG1vcmUgeW91IGFyZSBhYmxlIHRvIGhlYXIu

^ not a crypto wallet


3 hours ago, tkitch said:

Can you sue software for defamation?

No, but you can sue their malicious/careless makers; I don't understand how that's even a question.

The direction tells you... the direction

-Scott Manley, 2021

 

Softwares used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


Again... this is like blaming a parrot.

 

The one at 1:32 can say hello, and also sound like a car alarm.

 

Note how the bird always has to be prompted to say it. The bird doesn't know what it's saying; it just knows it gets food if it does it.

 

If I train a parrot to say "Robin Hood is guilty" and it then gets traction on YouTube, who's to blame? The parrot doesn't know what it's saying.

 

At any rate, this is one of those problems with large language models. There is factually wrong or misleading information in its knowledge because it's been trained to "talk", not "think".
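To illustrate the "talk, not think" point, here's a toy sketch (purely illustrative and hugely simplified; real LLMs are transformer networks, not bigram tables, and the sentences and the `is_fluent` helper are made up): a model that only learns which word tends to follow which can splice fragments of true training sentences into a fluent but false claim.

```python
from collections import defaultdict

# Toy bigram "language model": it learns only which word followed which
# in training, with no notion of truth. Real LLMs are far more complex,
# but the failure mode sketched here is analogous: fluency is judged
# from word-to-word statistics, not from facts.
training_text = (
    "hood was the whistleblower in the case . "
    "the ceo was sentenced to jail for bribery ."
)

# Record every observed word -> next-word transition.
follows = defaultdict(set)
words = training_text.split()
for a, b in zip(words, words[1:]):
    follows[a].add(b)

def is_fluent(sentence):
    """True if every consecutive word pair was seen in training."""
    w = sentence.split()
    return all(b in follows[a] for a, b in zip(w, w[1:]))

# Both sentences look equally "plausible" to the model...
print(is_fluent("hood was the whistleblower"))   # True (and true in fact)
print(is_fluent("hood was sentenced to jail"))   # True (fluent but FALSE)
```

The model rates "hood was sentenced to jail" as perfectly fluent even though no training sentence ever said that about Hood; fluency and truth are simply different properties.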

 


39 minutes ago, Kisai said:

If I train a parrot to say "Robin Hood is guilty" and it then gets traction on YouTube, who's to blame? The parrot doesn't know what it's saying.

 

 

Yep, he is not suing ChatGPT, he is suing OpenAI.

If you train a parrot to say "Robin Hood is guilty", the person training the parrot is liable. I'm assuming he sent a cease-and-desist letter and OpenAI said no (or did not respond); at that point they are legally liable for the product they create, particularly since they are the ones operating it.

If they were selling the software and others were training it and running it on their own hardware, then it would be much more difficult to say they are liable, but they are the ones training it, and it is running on their (rented from a cloud provider) hardware. Consumers have the $ relationship directly with OpenAI. And they have not advertised it as an entertainment product, like one would if you trained a parrot.


9 hours ago, Dirtyshado said:

 

Yes, he can sue the company and the author of the software itself; they published the software... that published the results. It's also why this could be a landmark case, establishing that AI is liable for its results.

Yeah, if OpenAI were just selling software that others could train on their own systems and run on their own systems, then it would be like suing them for making a forge that someone used to make a knife. But since OpenAI trains it and runs it on their own machines, and does not fill it with disclaimers saying it is making stuff up for entertainment, what it says is their responsibility. Furthermore, given they have attempted to put other guard rails in place, they also can't just say `it is an art project that reflects the linguistic combinations of matrices from harvesting the web` or some arty crap like that. (Also, charging people to use it sort of puts an end to the art project idea.)


6 minutes ago, hishnash said:

 

Yep, he is not suing ChatGPT, he is suing OpenAI.

If you train a parrot to say "Robin Hood is guilty", the person training the parrot is liable. I'm assuming he sent a cease-and-desist letter and OpenAI said no (or did not respond); at that point they are legally liable for the product they create, particularly since they are the ones operating it.

If they were selling the software and others were training it and running it on their own hardware, then it would be much more difficult to say they are liable, but they are the ones training it, and it is running on their (rented from a cloud provider) hardware. Consumers have the $ relationship directly with OpenAI. And they have not advertised it as an entertainment product, like one would if you trained a parrot.

 

I feel OpenAI in this case has a duty to actually remove or blacklist certain training material if it's materially incorrect. You can't just go "if 'robin hood' in input, remove any crime-related keywords", especially since "Robin Hood" is also a fictional character who commits crimes in literature.

 

Like, there are both left-wing and right-wing weirdos that constantly throw garbage on the internet to conflate unrelated issues, and that's something that needs to be addressed with language models. It's almost as if an "untraining" step is needed after training to make sure it doesn't become a Nazi or racist by misunderstanding the language in a Wikipedia article.

 


17 hours ago, Stahlmann said:

Your self-esteem has to be pretty low if you're butthurt about some non-thinking chat bot (that is known to spread outdated misinformation now and then) saying you're a criminal.

Really, you're construing this as a matter of self-esteem? That LLMs spout bullshit is understood by knowledgeable beta testers, but how many normies who basically google headlines to fill the gaps in their "knowledge" do you think know or care about that bit? ChatGPT is available as a public trial, and more of GPT is being exposed through search engine integrations like Bing, reaching more people who don't know or care about AI intricacies and simply jumped on the hype train that news and influencers are going crazy about. Even in a conventional search, how many people do you think even click through to something like a Wikipedia page, instead of just looking at whatever snippet Google may serve up with each link, or even the quick summary thing they have going on the right?


Regardless of the subject, the underlying problem itself is huge and really bad. If anything, at least this case should be a prompt for OpenAI to get this fixed.

 

And it should be an interesting case study too, since any mention of the relation between this guy and the bribery scandal should be a factual statement, not an accusation or some sort of discussion of his involvement. How, then, did GPT twist it into something completely contrary to the factual statements? This is very different from GPT generating plausible answers to questions it knows nothing about: presumably it has been trained on news articles that clearly present him as a whistleblower in a massive bribery scandal, yet it chose to generate its own statement.


12 hours ago, Stahlmann said:

Your self-esteem has to be pretty low if you're butthurt about some non-thinking chat bot (that is known to spread outdated misinformation now and then) saying you're a criminal.

 

Why do you excuse the billion dollar company?

 

I don't see this as any different from a news outlet spreading misinformation. ChatGPT isn't a simple index search.

i5 2400 | ASUS RTX 4090 TUF OC | Seasonic 1200W Prime Gold | WD Green 120gb | WD Blue 1tb | some ram | a random case

 


5 hours ago, hishnash said:

 

Yep, he is not suing ChatGPT, he is suing OpenAI.

If you train a parrot to say "Robin Hood is guilty", the person training the parrot is liable. I'm assuming he sent a cease-and-desist letter and OpenAI said no (or did not respond); at that point they are legally liable for the product they create, particularly since they are the ones operating it.

If they were selling the software and others were training it and running it on their own hardware, then it would be much more difficult to say they are liable, but they are the ones training it, and it is running on their (rented from a cloud provider) hardware. Consumers have the $ relationship directly with OpenAI. And they have not advertised it as an entertainment product, like one would if you trained a parrot.

If they advertised the AI to be accurate, then I can see this having a case, but if they didn't, I don't think they should be liable, as that would be too restrictive for AI. It would mean someone is responsible every time an AI messes up, which is inevitably going to happen. So long as they put up a disclaimer, it should be fine. I don't think we should prosecute someone for something an AI does when it obviously wasn't intentional. If it was intentional, then that is a different story.


15 hours ago, tkitch said:

ChatGPT isn't a person, and there was no person involved in creating the text it output.

 

Can you sue software for defamation?

Depends on locality and legal context. If ChatGPT's company is using it as a service within certain areas or products where the information would be expected to be trusted, then it probably falls under libel laws rather than direct defamation. But I could very much see it being treated as such.

 

I'm pretty sure there have been cases about bad information on Wikipedia as well, though those were more about its editors.


2 hours ago, Kinda Bottlenecked said:

 

Why do you excuse the billion dollar company?

 

I don't see this as any different from a news outlet spreading misinformation. ChatGPT isn't a simple index search.

While it's not a simple index search, searching itself hasn't been just a simple search for a long time (Google has been using AI for years).

 

While I do think it's a point of concern, I don't think a chat bot like ChatGPT should face a lawsuit for making claims. It's pretty much like predictive text: it garners information from articles and other sources without actually understanding it. Allowing lawsuits against it will open the door to claims that will effectively shut down tools like ChatGPT, or get them to the point where they're so curated they lose a lot of what they're capable of doing.

 

Imagine the world if safe harbour laws weren't a thing; the modern internet as we know it wouldn't exist... a similar thing could happen here.

 

11 hours ago, Mark Kaine said:

No, but you can sue their malicious/careless makers; I don't understand how that's even a question.

It's not malicious; for it to be malicious they would have to intentionally make it say that result. It's also not careless: they scraped the web and had it essentially form connections between data... they can't curate all that data and check it for accuracy, because that would be impossible (just like all those people who still claim the Dodo went extinct because it was dumb and tasty, despite that not being the reason).

 

It is a valid question as well, as it's going to start straddling the line of what is and isn't covered by safe harbour laws.

3735928559 - Beware of the dead beef


5 hours ago, Continuum said:

Regardless of the subject, the underlying problem itself is huge and really bad. If anything, at least this case should be a prompt for OpenAI to get this fixed.

 

And it should be an interesting case study too, since any mention of the relation between this guy and the bribery scandal should be a factual statement, not an accusation or some sort of discussion of his involvement. How, then, did GPT twist it into something completely contrary to the factual statements? This is very different from GPT generating plausible answers to questions it knows nothing about: presumably it has been trained on news articles that clearly present him as a whistleblower in a massive bribery scandal, yet it chose to generate its own statement.

This is actually a version, though a small one, of "the winners write the history" of events. If ChatGPT was trained on brutally, factually incorrect information, that means someone had put enough of it out there that it got picked up. There are some deep ethical issues involved in all of this.


5 minutes ago, Taf the Ghost said:

Depends on locality and legal context. If ChatGPT's company is using it as a service within certain areas or products where the information would be expected to be trusted, then it probably falls under libel laws rather than direct defamation. But I could very much see it being treated as such.

 

I'm pretty sure there have been cases about bad information on Wikipedia as well, though those were more about its editors.

Libel is a form of defamation... anyway, most libel laws have sections regarding public figures where there has to be intent or willful disregard.

 

i.e. if you read something on the internet and genuinely believed it to be true, you aren't necessarily held liable for slandering the person.

 

The bigger thing is whether or not this falls under safe harbor.



6 hours ago, hishnash said:

But since OpenAI trains it and runs it on their own machines, and does not fill it with disclaimers saying it is making stuff up for entertainment, what it says is their responsibility.

But they do say that. Not only does it give warnings about this on the first page (as shown in a screenshot in this thread), there is also text right below the input prompt which states "ChatGPT may produce inaccurate information about people, places, or facts". They have two warnings in place, one of which is permanently on the screen, saying that it might make up incorrect information.

 

I don't think the issue here is that ChatGPT makes up information. I think the issue is that people seem to expect that it always provides accurate information, which it absolutely doesn't. 

 

 

 

Anyway, I am not surprised that this is happening in Australia. They have very vague and broad laws regarding defamation. Intent doesn't matter, and anyone who republishes a false statement is also potentially guilty in the eyes of Australia. In Australia, you can sue Google and Facebook for publishing negative user reviews of, for example, your restaurant. It will be interesting to see how the court rules on this. As with many AI-related "scandals" recently, I think the issue lies with the humans using the technology rather than the technology itself. I wish more people had a better understanding of how it actually works, but sadly I don't think the average Joe does, nor will the court in this civil case.

 

To me, this is like suing Microsoft because Swiftkey's auto-predict suggested writing that Trump is a nazi, or Hitler was a good guy, or whatever. It's not something you should think of as a fact being published. It's a suggestion for a phrase that might be right or wrong and it's up to the user to interpret it.


4 hours ago, Kinda Bottlenecked said:

Why do you excuse the billion dollar company?

You're completely disregarding the context and only seeing a billion-dollar company versus a single person. I'm not gonna defend someone just because they're on the smaller side. He's just wrong imo; it doesn't have anything to do with who he is suing.

 

4 hours ago, Kinda Bottlenecked said:

I don't see this as any different from a news outlet spreading misinformation. ChatGPT isn't a simple index search.

Except ChatGPT never claimed to be a news outlet, which is obliged to report neutrally. But even official news outlets fail at that, because individual reporters always have some bias or agenda they're trying to push.



7 hours ago, Continuum said:

Really, you're construing this as a matter of self-esteem? That LLMs spout bullshit is understood by knowledgeable beta testers, but how many normies who basically google headlines to fill the gaps in their "knowledge" do you think know or care about that bit? ChatGPT is available as a public trial, and more of GPT is being exposed through search engine integrations like Bing, reaching more people who don't know or care about AI intricacies and simply jumped on the hype train that news and influencers are going crazy about. Even in a conventional search, how many people do you think even click through to something like a Wikipedia page, instead of just looking at whatever snippet Google may serve up with each link, or even the quick summary thing they have going on the right?

Your problem here is that you're comparing ChatGPT to Google or Bing, or even Bing's chat version. They're not the same thing. Like I said in a previous post, the version meant to replace search engines (Bing chat) already is a lot more neutral and will probably not straight up say someone was a criminal. So in a way you're barking up the wrong tree. You assume other iterations like Bing chat will behave the same, but they don't.

 

I'm not on anyone's hype train. I'm just not afraid of change and that's exactly what AI represents atm. People are breaking out in a blind panic every time some chat bot or machine learning algorithm makes a mistake because they think this software is somehow taking over everything in a heartbeat. I get that this is how mainstream media reports on AI, but as usual that's far from the truth. The integration of AI into our everyday tasks will take time.


