US FTC opens probe into OpenAI over data use and harm

Summary

The US Federal Trade Commission (FTC) has opened an investigation into OpenAI concerning data use, data security, and liability for ChatGPT's outputs. Among the topics under investigation are the copyright status of the data used for training and how potentially harmful the outputs are to users [0, 1, 2]. OpenAI has said it will work with the FTC, and that GPT-4 was built using "years of safety research" [1].

 

Quotes

Quote

The U.S. Federal Trade Commission has opened an investigation into OpenAI on claims it has run afoul of consumer protection laws by putting personal reputations and data at risk, the strongest regulatory threat to the Microsoft-backed startup yet.

The FTC this week sent a 20-page demand for records about how OpenAI — the maker of generative artificial intelligence chatbot ChatGPT — addresses risks tied to its AI models.

The agency is probing if OpenAI engaged in unfair practices that resulted in "reputational harm" to consumers.

The investigation marks another high-profile effort to rein in technology companies by the FTC's progressive chair, Lina Khan

[1]

Quote

The FTC wants to get detailed information on how OpenAI vets information used in training for its models and how it prevents false claims from being shown to ChatGPT users. It also wants to learn more about how APIs connect to its systems and how data is protected when accessed by third parties.

 

[2]

My thoughts

Slowly but surely, the regulatory machinery is beginning to catch up with LLMs. This could be highly impactful not only for OpenAI but also for other LLM companies, as it will likely set a precedent on these things.

 

In terms of harm, we've seen numerous examples of everything from defamation [3] and fabricated court cases [4] to death [5]. Although OpenAI seemingly tries its best to keep ChatGPT "safe" (for a suitable definition of that word), communities have sprung up to "jailbreak" it and get around these restrictions [6-8]. It's an arms race, but one which also requires a lot of philosophical thought about what is safe and in what context. (And the notion of "truth" can depend on which language you are currently chatting to it in; see e.g. [12].)

 

Another interesting thought is whether this is even a valid question to ask of LLMs like ChatGPT. Another user on the forum, Sauron, pointed out [13] that these systems weren't necessarily designed with safety/alignment in mind: they were made to believably predict and generate the next word given the existing text, and at that they are excellent! [14]. But this design goal doesn't specify anything about safety. Obviously LLM companies have started caring about this (there was a whole section, Section 6, in the GPT-4 technical report dedicated to how they filtered harmful prompts [11]), but it seems to be something which has been added to the spec after the fact, rather than intended from the get-go (at least to the extent we're now seeing).
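To make that "predict the next word given the existing text" design goal concrete, here's a toy sketch. This is nothing like a real transformer (it's just bigram word counts over a tiny made-up corpus, and every name in it is my own invention for illustration), but it shows the shape of the objective: emit a plausible continuation, with no notion of safety or truth anywhere in it.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, n=5):
    """Greedily append the most frequent next word, n times."""
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(generate(model, "the", n=3))  # → "the cat sat on"
```

A real LLM replaces the frequency table with a huge neural network over tokens, but the training objective is conceptually the same kind of continuation task, which is why safety behaviour has to be bolted on afterwards (prompt filtering, RLHF, and so on) rather than falling out of the objective itself.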

 

LLMs and copyright is also an interesting question. Copyright in the internet/digital age has always been awkward, and now that companies are building (highly profitable) systems using potentially copyrighted data, the situation is unlikely to improve; there is a lot of money at stake.

(A completely naive question: could ChatGPT's output (and maybe even its input) be considered "fair use", given that it is extremely transformative?)

 

Finally, there's the question of data privacy. ChatGPT ran afoul of Italy some months ago over GDPR concerns [9], and various companies have tried their best to keep employees from leaking company secrets through ChatGPT [10]. However, LLMs need this extra interactive data; it is what improves them and makes them more "natural" to interact with. I don't think what OpenAI (and other LLM developers) are doing is any worse than social media and targeted ads. People post secret information on Discord these days [15], not to mention the myriad of War Thunder leaks, and all of this data is being used to train/improve some sort of algorithm. If more privacy regulations come of this, however, I will be very glad.

 

Sources

[0]: https://www.washingtonpost.com/technology/2023/07/13/ftc-openai-chatgpt-sam-altman-lina-khan/

[1]: https://www.reuters.com/technology/us-ftc-opens-investigation-into-openai-washington-post-2023-07-13/

[2]: https://www.theverge.com/2023/7/13/23793911/ftc-openai-investigation-consumer-ai-false-information

 

[3]: https://www.theverge.com/2023/6/9/23755057/openai-chatgpt-false-information-defamation-lawsuit

[4]: https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt

[5]: https://www.brusselstimes.com/430098/belgian-man-commits-suicide-following-exchanges-with-chatgpt

[6]: https://www.digitaltrends.com/computing/how-to-jailbreak-chatgpt/

[7]: https://github.com/0xk1h0/ChatGPT_DAN

[8]: https://www.jailbreakchat.com/

[9]: https://techcrunch.com/2023/03/31/chatgpt-blocked-italy/

[10]: https://www.axios.com/2023/03/10/chatgpt-ai-cybersecurity-secrets

[11]: https://arxiv.org/abs/2303.08774

[12]: https://thediplomat.com/2023/03/will-asian-diplomacy-stump-chatgpt/

[13]: https://linustechtips.com/topic/1517888-33-46-of-amazons-mechanical-turk-workers-estimated-to-use-llms-to-automate-their-work/?do=findComment&comment=16026532

[14]: https://fortune.com/longform/chatgpt-openai-sam-altman-microsoft/

[15]: https://www.polygon.com/23683683/discord-classified-documents-leak-thug-shaker-central-jack-teixeira


All okay in theory. However,

 

My confidence that this will be handled by anyone with proper knowledge of "AI"? Zero.

My confidence that any outcome will actually be helpful and logical? Pretty much zero.


@Holmes108 I don't know, the FTC is generally more technically inclined, just based on what they do. I don't think LLMs are rocket science, and you don't need to know everything about them to know some of their hazards. That being said, I think we should treat ChatGPT as what it is: not always accurate, and often in need of verification. I do think a problem is that it sometimes feels like people are advertising ChatGPT as more than what it is. Regardless, I do think the issue of using copyrighted materials for training is definitely a concern.


13 minutes ago, Brooksie359 said:

@Holmes108 I don't know, the FTC is generally more technically inclined, just based on what they do.

 

Recent arguments they made regarding Microsoft-Activision concern me. But whether the issue is ignorance of a market or politics, the result can be the same. I hope you're right, though.

 

Also, I feel I need to add a footnote. You can agree or disagree with the MS acquisition in general, but there seems to be a rough consensus that the FTC's specific arguments were perplexing at best.


Liability and declaration of automated production are going to be the big things. I expect the latter to be the FTC's focus, and I expect the former to be the biggest issue coming out of the UK and other common-law countries. I very much expect a required automation notice on the production of a work in the future, similar to the FTC's required declaration of sponsored content.

 

Considering we're already adopting insults about the quality of creative products being inferior to LLM output, I can also see required disclosures being extremely important to protecting a lot of human-created works. (Which will actually command higher prices in the future.)

 

Obviously, the copyright issues matter a lot, but it's not the only big issue.


38 minutes ago, Taf the Ghost said:

Considering we're already adopting insults about the quality of creative products being inferior to LLM output, I can also see required disclosures being extremely important to protecting a lot of human-created works. (Which will actually command higher prices in the future.)

The interesting question, at least to me, here is what makes a human-created work "better" than an AI-generated one? The time and effort invested? The physical nature of it (e.g. sculptures)? If (and it's a big 'if') we achieve a level of generative AI that can to some extent be considered creative, what differentiates its works from a human's? Is there something inherently different in human work vs something which is exceptionally good at mimicking human behaviour?  : )

(also, how do we meaningfully define "creative"? is that even possible?...)


12 minutes ago, Silverflame said:

The interesting question, at least to me, here is what makes a human-created work "better" than an AI-generated one? The time and effort invested? The physical nature of it (e.g. sculptures)? If (and it's a big 'if') we achieve a level of generative AI that can to some extent be considered creative, what differentiates its works from a human's? Is there something inherently different in human work vs something which is exceptionally good at mimicking human behaviour?  : )

(also, how do we meaningfully define "creative"? is that even possible?...)

 

All big questions. I think we're going to evolve in some big ways over the coming decades, and our way of thinking about all of that is going to change drastically. I don't think there's an inherently right/wrong better/worse when it comes to that stuff. We just have a lot of bias in the way things have been. 

 

I'd have to sit and think longer to come up with a better example than this (and I'm sure there is one), but perhaps once there was an idea that letting a machine do arithmetic for you was ridiculous, lazy, and unreliable. And now we think nothing of it. I think all of these notions of man-made vs. not will just become a thing of the past.

 

We might lament that now, but I'm not prepared to say that it's an objectively bad (or good) thing. I just constantly try and keep my "old man that hates change" instinct in check. 

 

And as I hinted at, I realize my computers/math example is very imperfect. Just trying to convey the general idea that our values and ways of thinking may be unrecognizable in the future, and that it's probably okay.


16 minutes ago, Silverflame said:

The interesting question, at least to me, here is what makes a human-created work "better" than an AI-generated one? The time and effort invested? The physical nature of it (e.g. sculptures)? If (and it's a big 'if') we achieve a level of generative AI that can to some extent be considered creative, what differentiates its works from a human's? Is there something inherently different in human work vs something which is exceptionally good at mimicking human behaviour?  : )

(also, how do we meaningfully define "creative"? is that even possible?...)

The question has already been answered, actually. Before any of our times on this forum, "manufactured" goods were viewed as higher quality because of the precision that came with machine-based consistency. But now, what is a "high-quality good" to you? While it's a combination of machine and hand-crafted work, the reality is that the "value" lies in the *skilled* human time put into it. Which is why branding is so important these days. Something like an iPhone has insane R&D, physical production, and highly skilled human investment, but it doesn't look like it. It's a commodity. It's produced at commodity scale, but knowing what to produce at that scale is the "skill" part of it. (Higher-quality products also have tighter production tolerances, which are noticeable to the end user.)

 

"Creative" is easy to define. It's a product that elicits an emotion response from a human. While it means there is an "eye of the beholder" effect to it, it's not hard to define. In the case of AI generation, the creative part would be on the director of the program defining inputs that result in an output that doesn't look manufactured or fake.


6 minutes ago, Holmes108 said:

 

All big questions. I think we're going to evolve in some big ways over the coming decades, and our way of thinking about all of that is going to change drastically. I don't think there's an inherently right/wrong better/worse when it comes to that stuff. We just have a lot of bias in the way things have been. 

 

I'd have to sit and think longer to come up with a better example than this (and I'm sure there is one), but perhaps once there was an idea that letting a machine do arithmetic for you was ridiculous, lazy, and unreliable. And now we think nothing of it. I think all of these notions of man-made vs. not will just become a thing of the past.

 

We might lament that now, but I'm not prepared to say that it's an objectively bad (or good) thing. I just constantly try and keep my "old man that hates change" instinct in check. 

 

And as I hinted at, I realize my computers/math example is very imperfect. Just trying to convey the general idea that our values and ways of thinking may be unrecognizable in the future, and that it's probably okay.

Machine-based mathematics is over 4000 years old. The Abacus isn't exactly new.

 

The real question is how education adapts. Essay writing by hand is absolutely going to be making a comeback.


22 minutes ago, Taf the Ghost said:

Machine-based mathematics is over 4000 years old. The Abacus isn't exactly new.

 

The real question is how education adapts. Essay writing by hand is absolutely going to be making a comeback.

 

Well sure, but we're talking more automation here. 

 

And yeah, I've had a similar discussion (regarding "progress") about handwriting in general with my wife. She really hates that everyone's printing is becoming sloppy and almost nobody knows cursive. I kind of get where she's coming from in a sentimental way, but I also just shrug it off as inevitable given the way our society is moving. There's not necessarily any logical reason we need to love and miss handwriting, for example (of course, I think people need to have a basic understanding of how to print our alphabet...).

 

I think a lot of this stuff (though certainly not all of it) is emotional and cultural/sociological. That doesn't make it unimportant, and there are some issues that need to be dealt with. But I think if our society survives another 100 years, it's going to be a machine/computer/automated world. That just seems to be the destiny of our species, and to a certain degree, we need to accept it.

 

But as I said, even if, say, the idea of copyright or artistic merit somehow becomes antiquated/non-existent in 50 years (I doubt it will), I'm not suggesting we ignore those issues today.

 

I'm kind of just rambling today, it seems. Not sure what my main point is. The one post just got me thinking about the philosophical side of it. I suspect that we may look back at a lot of these concerns and not understand what the big issue was. If we don't blow up that is.

 

Edit: And through all of that, I ignored your main point, about education. I agree. I care about that much more than computer paintings. How on earth do we adapt and have students demonstrate knowledge aside from locking them in a room for an exam. I'm sure we'll adapt, but it's going to be tricky for sure. 


45 minutes ago, Holmes108 said:

 

Well sure, but we're talking more automation here. 

 

And yeah, I've had a similar discussion (regarding "progress") about handwriting in general with my wife. She really hates that everyone's printing is becoming sloppy and almost nobody knows cursive. I kind of get where she's coming from in a sentimental way, but I also just shrug it off as inevitable given the way our society is moving. There's not necessarily any logical reason we need to love and miss handwriting, for example (of course, I think people need to have a basic understanding of how to print our alphabet...).

 

I think a lot of this stuff (though certainly not all of it) is emotional and cultural/sociological. That doesn't make it unimportant, and there are some issues that need to be dealt with. But I think if our society survives another 100 years, it's going to be a machine/computer/automated world. That just seems to be the destiny of our species, and to a certain degree, we need to accept it.

 

But as I said, even if, say, the idea of copyright or artistic merit somehow becomes antiquated/non-existent in 50 years (I doubt it will), I'm not suggesting we ignore those issues today.

 

I'm kind of just rambling today, it seems. Not sure what my main point is. The one post just got me thinking about the philosophical side of it. I suspect that we may look back at a lot of these concerns and not understand what the big issue was. If we don't blow up that is.

 

Edit: And through all of that, I ignored your main point, about education. I agree. I care about that much more than computer paintings. How on earth do we adapt and have students demonstrate knowledge aside from locking them in a room for an exam. I'm sure we'll adapt, but it's going to be tricky for sure. 

You lock them in a room with no electronics. They've been doing that in mathematics testing for decades now, depending on the field.

 

The loss of cursive is more about the consequences for lifelong fine motor control. That's about the only primary difference between it and block print. It should still be taught, however, for the multi-level benefits it provides. ("Education" as a social function is thousands of years old, and staples exist for a reason. They're time-tested and not going away.)

 

The big issue with ChatGPT and the like is essay writing. I'm aware of how big an industry essay writing already is at the college and high-school level, but the fact that it'll be so easy to access means the ability to write more than a text message is going to collapse in a lot of places. We're about to see a very weird situation that will require us to subdivide the concept of "literate", since we'll have far more people who can read but are almost incapable of writing at their comprehension level. And they're going to have master's degrees. It's a problem already, and it's going to be worse going forward. (This doesn't mean that modern education isn't way too over-reliant on essay writing in general, but that's a different and longer discussion.)

 

As a side point, here's a suggestion I've given a few people over the years; if you have younger kids, this works well. Have them physically write letters to their grandparents (or some other living relative you're on good terms with). Aside from the relationship-building aspect, physical writing is its own task while also forcing the skill set necessary to think through how to write both long-form and coherently. The mental cadence is also much more similar to speaking in an effective manner. (This is a problem with high levels of computer-based text interaction: you "speak" before you "think", and the mental loops are offset. Strong interpersonal communication via speech is far more valuable than text.) It's a self-reinforcing task, and it'll keep your parents off your back about a lot of things.

 

I think the important related topic here is playing instruments. We can infinitely replicate the world's best musicians playing the greatest pieces. Yet more people are learning instruments than ever before. The functional, human-development aspect of the task is even more valued now than it was in the past.


1 hour ago, Taf the Ghost said:

You lock them in a room with no electronics.   They've been doing that in mathematics testing for decades now. Depending on the field.

 

 

Yep, it's pretty much the only solution I see. It's not going to always be convenient, but it is what it is.

 

Quote

I think the important related topic here is playing instruments. We can infinitely replicate the world's best musicians playing the greatest pieces. Yet more people are learning instruments than ever before. The functional, human-development aspect of the task is even more valued now than it was in the past.

 

Yes, I believe there will always be value in the human aspect. Even in Star Trek: TNG, where they have replicators, they still have vineyards for wine, and I think that'll be true in real life. There will also be frauds, but that just comes with the territory.

