
Kickstarter bans AI-generated art enthusiast group "Unstable Diffusion" and refuses to release their successfully raised $56k in funds

grg994

Summary

 

Kickstarter - one of the biggest crowdfunding websites - banned the AI-generated art enthusiast group "Unstable Diffusion", which raised $56k in the last two weeks and was looking to expand its AI text-to-image generation capability and train its own AI models tailored to the needs of its community.

 

The fundraisers representing the group "Unstable Diffusion" promised to use the raised money to deliver an "unrestricted" text-to-image AI art service to the common folk - based on Stable Diffusion and similar open-source models - but without censorship of image prompts or limitations on NSFW/adult image generation.

 

Kickstarter, which under its ToS reserves the right to suspend campaigns on its website without giving a reason, announced that the funds raised for "Unstable Diffusion" will be withheld and refunded to the backers of the project.

 

 

Quotes

Part of the mission statement by "Unstable Diffusion" for their fundraising:

Quote

"[...] will be supporting the reversal of the censorship of Stable Diffusion. By removing images that could potentially lead to NSFW or borderline NSFW outputs, Stable Diffusion has greatly hindered its ability to accurately depict anatomy and other subjects. We believe that adults should be able to see what they want and that removing this content does not protect anyone [...]"

 

Kickstarter released a statement saying they are unsure what the fate of AI art projects on their website will be:

Quote

"[...] we’re considering when it comes to what place AI image generation software and AI-generated art should have on Kickstarter, if any:

 - Is a project copying or mimicking an artist’s work? We must consider not only if a work has a straightforward copyright claim, but also evaluate situations where it's not so clear [...]"

 

Leading members of "Unstable Diffusion" responded on their official Discord server (https://discord.gg/aTzxbWbatn), announcing to their over 100,000 members that they will continue their expansion without Kickstarter:

Quote

We will not back down. [...] We will stand up and fight for our right to create, to innovate, and to push the boundaries of what is possible.

[...] We've set up a direct donation system on our website so we can continue to crowdfund in peace and release the new models we promised on Kickstarter [...]: https://equilibriumai.com/index.html

 

 

My thoughts

I believe in freedom of software, including AI technologies. To be clear, this project did NOT want to sell AI art / AI NSFW art; it only wanted to provide a software model and hardware resources that people can use to make AI art / AI NSFW art / whatever. Community-driven tech projects such as this should not suffer discrimination from "payment providers" and similar private companies.

 

 

Sources

Main article (TechCrunch): https://tcrn.ch/3FMruDl

Further articles: https://kotaku.com/kickstarter-ai-art-image-porn-unstable-diffusion-nsfw-1849921325

 

Kickstarter's statement on AI art: https://updates.kickstarter.com/ai-current-thinking/

 

Unstable Diffusion's sites:

https://www.kickstarter.com/projects/unstablediffusion/unstable-diffusion-unrestricted-ai-art-powered-by-the-crowd

https://discord.gg/aTzxbWbatn (or a copy of just their answer for the situation: https://www.reddit.com/r/StableDiffusion/comments/zsagy1/unstable_diffusion_commits_to_fighting_back/)

 

        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Has absolutely nothing to do with NSFW vs SFW...

 

The image set that these AI models are trained on (LAION) is filled with billions of copyrighted images/artworks. The technology is built on the backs of people who never gave consent to steal their work for profit.

 

20 minutes ago, grg994 said:

I believe in freedom of software

Your freedom of software doesn't include freedom to steal other people's work. These AI models steal from millions upon millions of people.

MacBook Pro 16 i9-9980HK - Radeon Pro 5500m 8GB - 32GB DDR4 - 2TB NVME

iPhone 12 Mini / Sony WH-1000XM4 / Bose Companion 20


1 hour ago, Roswell said:

Has absolutely nothing to do with NSFW vs SFW...

 

The image set that these AI models are trained on (LAION) is filled with billions of copyrighted images/artworks. The technology is built on the backs of people who never gave consent to steal their work for profit.

 

Your freedom of software doesn't include freedom to steal other people's work. These AI models steal from millions upon millions of people.

This is "stealing" in the same sense as software, video, and music piracy, and you'll likely find that the same people who rage about AI art generation are the same people who have no qualms about watching pirated video or reading pirated comics.

 

Let's put aside "this" project for a moment and put all the cards on the table, because nearly everyone tries to understand AI art generation at a kindergartner's level, and people fight about it the same way the Metallica vs. Napster fight went - with the idea that the tools are expressly designed for piracy. Some people even seem to equate this with NFTs.

 

LAION and CLIP are the pieces that actually go out and index - NOT store - images. When an entity like Stability AI creates a corpus for training their AI models (as Stable Diffusion, DALL-E, and likely everything derived from them do), they use these indexes to retrieve the images, resize them to 512x512, and pair them with their captions. If your copyrighted material was picked up at this point, you should check the terms and conditions of wherever you hosted the image, because you likely gave the rights away.
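As a rough sketch of what that retrieval step looks like - the record, URL, and caption below are made up for illustration, and real preprocessing pipelines are more involved:

```python
# Rough sketch of the dataset-building step described above. A LAION-style
# index stores only (URL, caption) pairs -- never the image bytes -- and the
# trainer downloads each image and resizes it to 512x512. The record below
# is a made-up example, not a real LAION entry.
from PIL import Image  # pillow

record = {"url": "https://example.com/cat.jpg", "caption": "a cat on a sofa"}

def prepare_pair(img, caption, size=512):
    """Resize a downloaded image to the square training resolution and
    return it alongside its caption text."""
    img = img.convert("RGB").resize((size, size), Image.LANCZOS)
    return img, caption

# In a real pipeline the image would be fetched from record["url"];
# here a blank image stands in for the download.
downloaded = Image.new("RGB", (1920, 1080))
square, caption = prepare_pair(downloaded, record["caption"])
```

The point is simply that the index holds metadata; the images themselves stay wherever they were hosted until training time.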

 

Under US copyright law, LAION and CLIP would be fair use, since they're used for research purposes and do not devalue the commercial use of the originals. They are indexes in the same way that Google is an index: LAION is the image-to-keyword index; CLIP is the identification of what an image is in English.

 

There is still a language model the AI has to learn. That is where things can be "censored", either intentionally or accidentally, and it is where many of these AI projects differ. https://arstechnica.com/information-technology/2022/12/stability-ai-plans-to-let-artists-opt-out-of-stable-diffusion-3-image-training/ - Stability AI plans to let artists opt out, and my theory is that when they next generate a model, they will simply delete the LAION entries that have been opted out, which will degrade CLIP's ability to identify "art theft" and re-uploaded artwork, but it is what it is.

 

So with DALL-E, Stable Diffusion, and things derived from them, we get to the "throw spaghetti at the wall and compare it to artwork" stage; there have been plenty of videos on how this actually works. Suffice it to say "the actual artwork" is not, and has never been, in the model. 2TB of images does not compress down to a 4GB model, no matter how you try to frame it. What is stored in the model is weights, so if something over-fits (because it shows up repeatedly) that's a problem, but not an insurmountable one. There is no magic prompt that will re-generate any training image. So this is a largely overblown argument, except when it comes to trying to generate artwork in a specific style.
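The size argument checks out on the back of an envelope. The figures below (2 TB of resized training images, a ~4 GB model, roughly two billion images) are order-of-magnitude assumptions, not official numbers:

```python
# If the training images were actually stored inside the model, each one
# would have to fit in the bytes-per-image budget computed here -- which is
# plainly impossible for any 512x512 picture. All figures are rough
# assumptions for illustration.
dataset_bytes = 2 * 1024**4      # ~2 TB of resized training images (assumed)
model_bytes = 4 * 1024**3        # ~4 GB of model weights (assumed)
num_images = 2_000_000_000       # LAION-scale image count (assumed)

compression_ratio = dataset_bytes / model_bytes  # 512:1
bytes_per_image = model_bytes / num_images       # ~2 bytes per image

print(f"{compression_ratio:.0f}:1 ratio, {bytes_per_image:.2f} bytes/image")
```

Roughly two bytes of weight budget per training image leaves no room to memorize anything but statistical regularities, except for images that repeat often enough to over-fit.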

 

Which leads us to what kicked off this "artists hate AI art generation and behave as Luddites every time someone brings it up" situation. Yes, artists - specifically "concept artists" - have screamed loudly on Twitter for this stuff to die. They're not interested in learning how it works, have no interest in using it, and believe nobody should have access to it, on the off chance it leads to one less job for them. Where have I heard that argument before?

 

https://www.riaa.com/reports/the-true-cost-of-sound-recording-piracy-to-the-u-s-economy/

Quote
  • The U.S. economy loses $12.5 billion in total output annually as a consequence of music theft.
  • Sound recording piracy leads to the loss of 71,060 jobs to the U.S. economy.

Yes, the RIAA and its fictitious "facts" about piracy, where it equates every unlicensed use with a full CD sale - even when it's 2 seconds of someone else listening to it on a stream, or the background of someone's video shot at a birthday party.

 

That is what this looks like. Despite the exasperated arguments from artists, most people do not want AI-generated artwork to begin with. Why would you settle for an 8-bit, bit-crushed cover of a music track that takes 10 minutes to generate, when you can just go and buy the legit music off iTunes and listen to the actual thing? If people need artwork, they will continue to do what they've always done: either hire people to do it, or do it themselves.

 

All AI art generation is doing at this point is bringing "do it themselves" closer to reach, and even then it can take hours or days to find a "prompt" that gives the user anything remotely resembling what they want. And the cherry-picked stuff you see on Twitter and in news articles is rarely the result of "prompts", but rather of image2image.

 

image2image is the style transfer of one image onto another, and the technology isn't even new:

https://affinelayer.com/pixsrv/ is from 2017.

 

All ML "AI" stuff really is at this point is "auto-complete", and the conflation of image2image with "all AI art generation" is what makes this a mess, because image2image style transfer is the only case where you can in fact "steal an artist's style" - and even that is limited to what the AI understands, and the AI understands nothing about composition. This is why these AI art generators universally suck at rendering fingers, text, and eye directions/colors: people don't label those things way back in the LAION dataset. Hell, the LAION dataset is biased towards actual photographic images, not "artwork", in the first place. This is why prompting the AI for something that only became a fad or popular in 2021 or 2022 will produce garbage at best, and nothing at worst.

 

The models will constantly have to be retrained on new data, otherwise they will be stuck in the past.

 

Which brings us to the other half of the argument: consent, and "why does it not just use public domain data?"

 

Because if you train an AI only on public domain data, you will get an AI that only produces things with the understanding of a 19th-century scholar, at best - all the racist, sexist, and ableist language that has gone out of style will be in there. Ideally one would attempt to get consent to include specialized datasets so that the model could generate a celebrity's likeness, but that's unlikely to ever be the case, and waiting a century for them to die and become irrelevant is an even worse argument. So I believe Stability AI is correct: they do not need consent to use images that have been posted to the internet, because they can lean on the research part of fair use to do so.

 

That said, "making money" off the research dataset? That violates the spirit of the fair use laws. They should always release the production model to the public if they want to use the research argument as a means of generating it.

 

And before anyone chimes in with "what about (Imagen, Midjourney, etc.)" - no, it's very likely they have used LAION too, or have done something similar to it.

 

AI research is very much "reuse old data forever" as far as datasets go. For other AI research like ASR/TTS and text (GPT), it's far easier to curate what you put on the input side, because you can train a TTS from just 500 samples - about 10 minutes of one person - and get something that sounds reasonable, but is incapable of saying any zoomer slang, because the training data is still only public domain books and has to rely on a pronunciation dictionary of English words to fill in the blanks.

 

There will always be blind spots, and you have to re-train the model to fix them.

 

Now... does THIS project have merit?

 

https://updates.kickstarter.com/ai-current-thinking/

 

Quote
  • Does a project exploit a particular community or put anyone at risk of harm? We have to consider the intention behind projects, sometimes beyond their purpose as stated on our platform. Our rules prohibit projects that promote discrimination, bigotry, or intolerance towards marginalized groups, and we often make decisions to protect the health and integrity of Kickstarter.

Alright, so what was the intent?

https://kotaku.com/kickstarter-ai-art-image-porn-unstable-diffusion-nsfw-1849921325

Quote

Kickstarter Suspends AI-Generated Image (Well, Porn) Project [Update]

I think the title speaks for itself.

 

The reason Kickstarter suspended the project was because it could "be used to create porn of people without their consent", which seems to be the goal. This seems more like a grift than an actual serious attempt to address the problems created by the first model released by Stable Diffusion.

 

At least Stable Diffusion got (possibly unfairly) ripped to shreds for how it presented its ability to copy an art style - reminding me of all those crappy "emulator consoles" whose entire selling point is the piracy angle.

 

You know what doesn't get ripped to shreds? Emulators. The same arguments about "is an emulator a good or a bad thing" can be applied to "is an AI art generator a good or a bad thing", because people are going to pirate games even if you tell them not to, and people are going to generate AI art - even AI-generated porn - even if you tell them not to.

 

All the yelling in the world is not going to stop people from working on AI art generators; what it will do is make the people working on better ones less willing to share their progress, because they don't want to be villainized. Meanwhile, corporate AI art generators end up becoming established and profit off these "research" models.

 

And yes, I've seen it happen several times. An open source project stops being developed and the people who were working on it are now part of some commercial enterprise, doing the same thing the open source project did, but now has a fancy UI.

 


4 hours ago, Kisai said:

This is "steal" in the same context of software, video and music piracy [...]

 

You're jumping into a bunch of logical fallacies to support your argument, and I'm not really interested in engaging with that nonsense - not to mention some weird assumptions that have no grounding in reality.

 

Regardless, the results these services spit out would not exist without stolen artwork. If you were in the space (which, according to prior discussion, you're not), your opinion on how artists should feel, however flawed, would at least be valid. But it's not.

MacBook Pro 16 i9-9980HK - Radeon Pro 5500m 8GB - 32GB DDR4 - 2TB NVME

iPhone 12 Mini / Sony WH-1000XM4 / Bose Companion 20


11 hours ago, Roswell said:

Has absolutely nothing to do with NSFW vs SFW...

 

The image set that these AI models are trained on (LAION) is filled with billions of copyrighted images/artworks. The technology is built on the backs of people who never gave consent to steal their work for profit.

 

Your freedom of software doesn't include freedom to steal other people's work. These AI models steal from millions upon millions of people.

What percentage of the images used was copyrighted work compared to royalty-free? How many artists have assimilated copyrighted work? Have they properly attributed and paid for access?


AI-generated art doesn't exist; what exists are AI-generated images. But those are a pretty far cry from real art, since there is nothing creative or unique about them - just a mish-mash of ideas from pre-existing work.


15 hours ago, Roswell said:

You're jumping into a bunch of logical fallacies to support your argument, not really interested in engaging with that nonsense. Not to mention some weird assumptions that have no grounding in reality.

 

Regardless, the results these services spit out would not exist without stolen artwork. If you were in the space (which you're not according to prior discussion), your opinion on how artists should feel, however flawed, would at least be valid. But it's not.

Case law from Authors Guild v. Google (https://fairuse.stanford.edu/case/authors-guild-v-google-inc/) suggests that data mining in this particular use case very likely falls within the fair use doctrine, as the end result is transformative.

https://www.copyright.gov/fair-use/

It doesn't matter whether copyrighted data was used to train the AI; what matters is the end result, and whether it meets the standard for fair use. And with the case law above, that standard is surprisingly lax.

 

My eyes see the past…

My camera lens sees the present…


22 hours ago, Roswell said:

Has absolutely nothing to do with NSFW vs SFW...

 

The image set that these AI models are trained on (LAION) is filled with billions of copyrighted images/artworks. The technology is built on the backs of people who never gave consent to steal their work for profit.

 

Your freedom of software doesn't include freedom to steal other people's work. These AI models steal from millions upon millions of people.

Bro, where the actual heck do you think any AI model training data comes from? Are people that dense? AI literally uses everyone and anyone's work to learn from. It's not stealing anything just because you used it to train another system. If you have an AI system crawl the web to learn, are you going to claim it's copyright infringement because it found something on Google Images that was copyrighted?


10 hours ago, divito said:

What percentage of a copyrighted work was used in images compared to royalty-free? How many artists have assimilated copyrighted work? Have they properly attributed and paid for access?

That's not how AI works; no one is getting paid for licensing anything related to AI training. If you point an AI at Google and it finds a copyrighted image and learns from it, there is zero legal standing to require payment, because it's literally no different from you going on Google, searching for the image, looking at it, and then gathering info from that image.


23 hours ago, grg994 said:

AI-generated art enthusiast group "Unstable Diffusion"

Now there's two ways to read that...

 

 

What were they even trying to collect money for? From the original post it seems they already have the AI built, but were crowdfunding to feed it data? What? Seems like another Kickstarter project to rake in cash and then disappear a month later.

 

6 hours ago, jagdtigger said:

AI-generated art doesn't exist; what exists are AI-generated images. But those are a pretty far cry from real art, since there is nothing creative or unique about them - just a mish-mash of ideas from pre-existing work.

Just like every single human-generated piece of content?


1 hour ago, Fasterthannothing said:

That's not how AI works; no one is getting paid for licensing anything related to AI training. If you point an AI at Google and it finds a copyrighted image and learns from it, there is zero legal standing to require payment, because it's literally no different from you going on Google, searching for the image, looking at it, and then gathering info from that image.

That's my point. I was pointing out his flawed reasoning.


17 hours ago, Roswell said:

 

Regardless, the results these services spit out would not exist without stolen artwork. If you were in the space (which you're not according to prior discussion), your opinion on how artists should feel, however flawed, would at least be valid. But it's not.

Google would not exist if those fair use laws did not exist. Search engines primarily engage in copyright infringement, and various countries' laws outside the US are amazingly tone-deaf about how that drives traffic TO their websites rather than diminishing it. It's also a double-edged sword, because Google can also find pirated versions of their content and drive traffic to THAT instead if you tell Google not to index your site. You either want Google to point people looking for your content to your store or your website - not piracy websites, and not AI art generation websites.

 

If artists do not want their art to be indexed: DELETE IT. NOW. Don't let anyone ever see it. Don't upload it to a website you do not control, subject to Terms of Service that say "you are giving the website permission to do whatever with it, short of taking credit for its existence".

 

I wrote the post because every 2-3 days, artists I follow retweet the most unhinged takes about "AI". So yes, I AM in the space, and I am in a position to talk to artists, and many of them have taken up the toxic view that the only reason anyone is building these things is to destroy art jobs. Some total Luddite points of view, and we have the NFT cryptobros to thank for that. People see "AI art generator" and think "Bored Apes 2.0". Like, hell no. Nobody wanted that garbage in the first place, and if you really equate the two, then you know the corporate use of AI art generators is going to fail when nobody wants to pay for the computational time to generate new models.

 

I can generate a TTS model in about 8 hours that will sound reasonable enough, from 10 minutes of someone's voice. To do the same with images or artwork would take almost a year. There is no generating the model on a home computer without some crowd-sourced effort, and generating it on cloud services (which I suspect was the purpose of this Kickstarter) would cost hundreds of thousands of dollars, EACH and EVERY time. And what if the model is bad?

 

At some point the model will self-destruct if it's not tested every so often for generating correct output. It generally goes through phases: unintelligible garbage → low-quality output → reasonable output → some high-quality output while other subjects overfit → can only produce the overfit output, everything else is garbage.
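A minimal sketch of that "test it every so often" idea - the heuristic and the loss values below are illustrative assumptions, not how any particular training setup actually monitors for this:

```python
def is_overfitting(train_losses, val_losses, window=3):
    """Flag the classic overfit signature described above: training loss
    still improving while validation loss has worsened for `window`
    consecutive evaluation checkpoints."""
    if len(val_losses) < window + 1 or len(train_losses) < window + 1:
        return False
    train_improving = train_losses[-1] < train_losses[-window - 1]
    val_worsening = all(
        val_losses[-i] > val_losses[-i - 1] for i in range(1, window + 1)
    )
    return train_improving and val_worsening

# Made-up loss curves: one healthy run, one drifting into the overfit stage.
healthy = is_overfitting([5, 4, 3, 2.5, 2], [5, 4, 3, 2.6, 2.2])
diverged = is_overfitting([5, 4, 3, 2, 1], [5, 4, 4.5, 5.0, 5.5])
```

Catching the divergence early is the cheap part; the expensive part is that fixing it means another full (and costly) training run.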

 

I sincerely doubt that any "AI art generator" is going to result in job losses. Sure, it may shift art jobs around a bit, as people with no skill will likely use an AI-generated image as a stand-in for what they want, instead of wasting time paying someone to create something that isn't going to end up in the final output. But I do not see people going "this 512x512 kitbash of subject X is good enough to use in the final result" - that's a nonsense claim. What I do see happening is people using AI art generators instead of stock photos, and maybe also for set dressing in movies/animation, where what the "thing" is and who made it isn't important to the scene.

 

But why would anyone want to generate kitbashed porn? Seems like just an avenue for bullying people.


I am not a fan of companies like Kickstarter deciding what projects are or aren't allowed on their platform. 

 

I also like how they claim to be on the side of humans, but then seemingly ban a tool that would benefit 99% of humans by giving them access to creating images they would otherwise need assistance to create. I dislike this narrative currently going around that creating images should be something reserved for an elite group of people, not something the common person should have access to.

 

 

So copying or mimicking others' art is no longer allowed on Kickstarter? Are they going to ban all Western projects that use anime-style artwork then? Because those are by definition mimicking and copying a particular style of imagery.


18 minutes ago, LAwLz said:

I am not a fan of companies like Kickstarter deciding what projects are or aren't allowed on their platform. 

 

I also like how they claim to be on the side of humans, but then seemingly ban a tool that would benefit 99% of humans by giving them access to creating images they would otherwise need assistance to create. I dislike this narrative currently going around that creating images should be something reserved for an elite group of people, not something the common person should have access to.

 

 

So copying or mimicking others' art is no longer allowed on Kickstarter? Are they going to ban all Western projects that use anime-style artwork then? Because those are by definition mimicking and copying a particular style of imagery.

Really spot on comment.



44 minutes ago, LAwLz said:

So copying or mimicking others' art is no longer allowed on Kickstarter? Are they going to ban all western projects that use animestyle artwork then? Because those are definingly mimicking and copying a particular style of imagery.

Manga/Anime would be gone under their logic; everything belongs to Tezuka.


2 hours ago, Kisai said:

I wrote the post because every 2-3 days, artists I follow retweet the most unhinged takes about "AI", so yes, I AM in the space, and I am in the position to talk to artists, and many of them have taken up the toxic view that the only reason anyone is building these things is to destroy art jobs. Some total luddite points of view...

 

I sincerely doubt that any "AI Art generator" is going to result in job losses. Sure, yes, it may shift art jobs around a bit...

Most artists understand that AI-art generators are useful tools not just for the non-creative but also for them. It's an excellent tool for idea generation and aesthetic problem-solving. But there are real concerns that are easy to dismiss if you're not experiencing them first-hand.

 

General artists are worried that their positions will be diminished further because art can be generated and refined quicker and cheaper than paying a human. Most people don't need "specific" art and really do not care whether a piece came from a human or not. Most people can't even tell who made it.

 

But like you've said, professionals won't want unspecific, non-functioning designs. Designers might be worried about this tool because ideation is the backbone of design and for many designers, the best part. It's all about creativity and coming up with interesting new ideas. Designers would obviously still be necessary for choosing and refining ideas but that work becomes tedious and uninteresting if the person generating or directing ideas isn't part of the team. 

 

I agree that generators will shift the role an artist has in art. That's the biggest issue that most professional artists have with them. There are other issues that you've addressed that also concern artists, but those issues aren't exclusive to artists or AI-art generators.


1 hour ago, theninja35 said:

Most artists understand that AI-art generators are useful tools not just for the non-creative but also for them. It's an excellent tool for idea generation and aesthetic problem-solving. But there are real concerns that are easy to dismiss if you're not experiencing them first-hand.

 

General artists are worried that their positions will be diminished further because art can be generated and refined quicker and cheaper than paying a human. Most people don't need "specific" art and really do not care whether a piece came from a human or not. Most people can't even tell who made it.

 

But like you've said, professionals won't want unspecific, non-functioning designs. Designers might be worried about this tool because ideation is the backbone of design and for many designers, the best part. It's all about creativity and coming up with interesting new ideas. Designers would obviously still be necessary for choosing and refining ideas but that work becomes tedious and uninteresting if the person generating or directing ideas isn't part of the team. 

 

I agree that generators will shift the role an artist has in art. That's the biggest issue that most professional artists have with them. There are other issues that you've addressed that also concern artists, but those issues aren't exclusive to artists or AI-art generators.

With every new invention, people panic that their jobs will be lost, but in reality they just transition to a new field. I mean, look at all the people panicking because McDonald's is going all automated, yet those robots will still need to be maintained, programmed, and upgraded.


On 12/24/2022 at 3:32 AM, grg994 said:

The fundraisers representing the group "Unstable Diffusion" had the promise to use the raised money to deliver "unrestricted" text-to-image AI art service to the common folks - based on Stable diffusion and similar open-source models - but without censorship to the image prompts or limitation on NSFW/adult image generation capability.

There already are no restrictions if you run it locally, though. And NSFW models are normally the first ones to pop up; they are literally everywhere.



7 hours ago, Just that Mario said:

Just like every single human-generated piece of content?

Taking inspiration from something and making your own art in your own style, using your own creativity, is very far from copy-pasta AI image creation.


2 hours ago, Fasterthannothing said:

With every new invention, people panic that their jobs will be lost, but in reality they just transition to a new field. I mean, look at all the people panicking because McDonald's is going all automated, yet those robots will still need to be maintained, programmed, and upgraded.

I agree with that, but there are a lot of people who were looking forward to something or have spent a long time either in the career or developing their skills to be in that career and are now forced to switch to something else. Regardless, I was just explaining why artists feel the way they do and why the generators are actually a concern to artists.


Most of the violations by AI against the art community are usually not legally binding matters but breaches of a largely unwritten code of ethics that most artists understand.

 

For example here: 

 

As a programmer, I would say that the AI has successfully done its job. It has managed to transpose a different style onto a base reference, with many different shading techniques and styles.

 

However, as a creative, I would not choose to compose such a piece with the same pose and composition as the image I referenced. More has to change from the reference for the result to rightfully be considered my "own" work. That is why you often see callout posts accusing artists of "tracing" and "copying". You learn by copying, but you do not copy something and call it your own.

 

 


 


So this is the kind of project Kickfarter does act upon.

 

All those near-scam projects that sound like the worst AliExpress listings, about Chinese miracle tech that will not only work as a watch but save the Earth from an asteroid while walking your dog, which are 100% scams. Kickfarter's reaction? None; after all, they cannot say whether someone can produce a watch that can walk dogs, maybe there's one.

 

Companies making a successful Kickfart for a borderline-possible product and, once the funding is secured, running away faster than Meep Meep from the coyote. Kickfarter's reaction? None; after all, they cannot know whether there are production problems or just human errors, and it's not their job to make sure the money goes where it is said to go.

 

Companies making a successful Kickfarter, finding out they cannot produce the product at the promised price, running away and never delivering, but then starting to sell the product elsewhere. Kickfarter's reaction? At this point, do you really think they would do anything? Yeah, they don't.

 

You could probably make a Kickfarter and directly say that you are not going to do shit with the money, but people can fund you in hopes of getting fake Air Jordans for $10, and Kickfarter wouldn't do anything to stop you or protect the people on the platform from you.

 

But gosh darn it, an AI project with some real-world progress to show, which might end up helping people get around the joke that is copyright law (a non-sarcastic note here: immaterial rights are important and should be protected, but things like the 75-years-after-the-death-of-the-original-maker terms, transferable rights, and all that corporation-bribed shit make the whole law a joke): "SHUT IT DOWN RIGHT NOW!!!!"


Just remember that fair use is a defense, not a green light. So regardless of whether you agree or disagree about the copyright situation, these guys are going to have to take it to court for resolution. Also, it is quite possible that copyright owners will have to take Kickstarter to court before Kickstarter can claim damages. I can see this discussion getting messier than a 3-way between cousins with two pregnancies.



1 hour ago, mr moose said:

Just remember that fair use is a defense, not a green light. So regardless of whether you agree or disagree about the copyright situation, these guys are going to have to take it to court for resolution. Also, it is quite possible that copyright owners will have to take Kickstarter to court before Kickstarter can claim damages. I can see this discussion getting messier than a 3-way between cousins with two pregnancies.

Consider that there are no "funds" here; everything was refunded.

 

This is also why Indiegogo is rife with scams. You can make an Indiegogo project for anything, fail to fund all of it, and still get the money, even if that money just goes into your pocket. Kickstarter projects already have to have "something" before they're even started. You can't make a Kickstarter on promises alone.

 

As for whether it was a smart idea for Kickstarter to do what they did? No, it probably wasn't, because there's nothing different from one software project to another that justifies this. It's what the owner of the project put IN the project that got it banned. At any rate, I'm perfectly fine seeing "machine learning" projects not be funded as Kickstarters, because these are grifts. If you want to make money from an ML project, either write a new one from scratch, or fork an existing one that permits forking it and the models used. Which clearly isn't what was done here. What was done here was just taking the existing Stable Diffusion software and model and doing nothing different.

 

In fact, if you actually look at this image:

[image: screenshot from the Kickstarter campaign page]

All of them talk about the model. So this funding was in fact for cloud computing time, not for developing the software.

 

This was likely one of the big red flags:

Quote

With $25,000 in funding, we can afford to train the new model with 75 million high quality images consisting of ~25 million anime and cosplay images, ~25 million artistic images from Artstation/DeviantArt/Behance, and ~25 million photographic pictures.

So there is no way they got consent from the cosplayers. Likewise for ArtStation and DeviantArt, though those sites' terms of service likely absolve the sites themselves when someone rips them like this.

 


It's always interesting looking at art discussions, because people have such extreme opinions with the least amount of actual logic.

 


