
Twitter Profited from Child Porn, Refused to Take it Down

TheReal1980
21 minutes ago, wkdpaul said:

I suggest you read the article; the BBC asked for an official answer as to why some 80% of the illegal material wasn't removed after multiple reports. FB asked for links, and obviously it's a requirement to have this reported to the authorities, so why did FB ask to see it? The whole situation reads like a 'The Onion' article.

 

I'm not arguing that this shouldn't be reported to authorities, that's not the point at all. The point is, FB didn't remove illegal content after multiple reports; when pressed for comments, they willfully requested links to said illegal content, then acted surprised when they got it. If you can't see the ridiculousness of it all, then I don't know what to say.

It's not about seeing the "ridiculousness", it's about the obvious next step being reporting it. Like obviously that's exactly what happens. There isn't a world in which anything else happens.

 

Name one company (including BBC) willing to take preemptive blame for screwups and not try to cover their asses afterwards.

 

I did read the article before commenting. Just not surprised companies act like companies.

 

Apple still employs Foxconn after all. 

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC // Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistix Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


5 minutes ago, Kisai said:

There is no bias. Machine learning

In law, people are held accountable, not machines. Bias, automated or not, is still human bias. Someone wrote the code.


1 minute ago, Kisai said:

Easy. They're not allowed to look at it. At the (auction site) there is a department called "prohibited items" and only a handful of people are permitted to look at the horrible content people post there, and it's traumatizing. You don't want your entire staff to be exposed to that.

Fair, but at the very least you'd expect content that gets flagged multiple times to get pulled, and in this case the girl spoke to a person and provided proof she was underage.

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600MHz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144Hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


3 minutes ago, StDragon said:

In law, people are held accountable, not machines. Bias, automated or not, is still human bias. Someone wrote the code.

The code isn't the problem (in the sense that the bias doesn't come from the code). The problem is machine learning learning that race correlates with violence, or, more problematically, learning that structural racism exists and then perpetuating it via hiring, credit, and similar recommendations.

 

It can only see and amplify the biases already existing in the userbase.



1 minute ago, StDragon said:

In law, people are held accountable, not machines. Bias, automated or not, is still human bias. Someone wrote the code.

No, and this is why AI is such a grey area in law (the whole "if an automated car crashes, who's to blame?" conundrum): you don't write code for an AI, you write an algorithm, give it some variables, and set it free to make up its own mind.



3 minutes ago, StDragon said:

In law, people are held accountable, not machines. Bias, automated or not, is still human bias. Someone wrote the code.

That's why code needs to be audited.

 

Machine learning is a bit of a wild card: it's not trained to have bias, it learns that bias from the input data. That's why Twitter has a problem identifying Black people in its auto photo cropping; if you only train the face detection on white faces, of course that will happen.

 

It's the same with Google's nudity detection: it will frequently identify cartoons as nudity because of the solid colors. Google also has some kind of voice-based detection for flagging people under the age of 13 in videos, because at least one person I know had their account suspended for sounding like a child despite being over 20. I actually know more than one person with a naturally high-pitched voice, and they would rather not stream on YouTube because they don't want their streams suddenly blocked because it thinks they're a child.

 

You can't "undo" machine learning; you can only continue to train it with better input. So if you get a few false positives, that's still far more efficient than hiring 10,000 people to review content before it goes up. For all we know the algorithm works 99% of the time, and the complaints are all coming from the 1% it negatively impacts.
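The "it learns bias from the input" point can be illustrated with a toy sketch. Everything below is made up for illustration (a nearest-centroid "face detector" on a single brightness feature, with invented numbers); it is in no way Twitter's or Google's actual model, but it shows how a classifier trained only on one kind of sample confidently rejects what it never saw:

```python
# Toy illustration of training-data bias (hypothetical numbers, not a real system):
# a trivial nearest-centroid "face detector" trained only on bright samples.

def train_centroid(samples):
    """Return the mean feature value of the training samples."""
    return sum(samples) / len(samples)

def is_face(brightness, centroid, threshold=0.3):
    """Classify as 'face' only if it resembles what training saw."""
    return abs(brightness - centroid) <= threshold

# The training set only contains high-brightness samples (0.0 = dark, 1.0 = bright).
biased_training = [0.85, 0.9, 0.8, 0.95]
centroid = train_centroid(biased_training)  # 0.875

print(is_face(0.88, centroid))  # True  - resembles the skewed training data
print(is_face(0.35, centroid))  # False - a real face the model was never shown
```

The classifier's code is identical in both calls; only the skewed training set makes the second face "invisible", which is why the fix is retraining on representative data rather than patching the code.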

 


19 minutes ago, Kisai said:

*snip*

People have a hard time understanding basic tech functions, so imagine the average user on the platform (who probably doesn't know that WiFi ≠ internet) ... pretty sure they can't differentiate between actual moderation and the algorithm that effectively creates echo chambers.

 

16 minutes ago, Kisai said:

Easy. They're not allowed to look at it. At the (auction site) there is a department called "prohibited items" and only a handful of people are permitted to look at the horrible content people post there, and it's traumatizing. You don't want your entire staff to be exposed to that.

And I agree, but that's not the point I'm trying to make: FB doesn't even allow nudity, so multiple reports should have gotten this material flagged for review, even if it's sent to a very specific and limited group of mods.

 

 

14 minutes ago, Curufinwe_wins said:

It's not about seeing the "ridiculousness", it's about the obvious next step being reporting it. Like obviously that's exactly what happens. There isn't a world in which anything else happens.

 

Name one company (including BBC) willing to take preemptive blame for screwups and not try to cover their asses afterwards.

Not the point I was making. FB was aware of the nature of the content; they could've EASILY asked for the content without having to report the journalist (asking for the account that posted it, or something similar). The whole thing wasn't handled properly, and that's the issue here.

 

 

9 minutes ago, Master Disaster said:

No, this is why AI is such a grey area in law (the whole "if an automated car crashes who's to blame?" conundrum), you don't write code for an AI, you write an algorithm, give it some variables and set it free to make up its own mind.

Best example would be the MS AI bot, Tay, that turned racist ... they can throw algorithms at the problem all they want, but the human factor on the other end is difficult to predict, and that makes the algorithm act in ways that aren't predictable.

If you need help with your forum account, please use the Forum Support form !


An issue with social media is that it reflects the region. If your country is divided, it's going to divide even more; if it doesn't have those problems, the effect is less noticeable. Take the US as an example, right or left, anti-cop or pro-cop: there are a lot of real issues, but the news gets so distorted that you might just see the same thing over and over, and only by viewing both sides might you find the "truth" of the story. It can be really annoying at times, more so when there's a lot of drama around it.

 

Or if you fall into the hole of one type of emotion, that might be all you see in your feed. I wish that could be disabled, whether the focus is too much on race, gender, politics, hate, love, whatever it might be.


2 minutes ago, wkdpaul said:

People have a hard time understanding basic tech functions, so imagine the average user on the platform (who probably doesn't know that WiFi ≠ internet) ... pretty sure they can't differentiate between actual moderation and the algorithm that effectively creates echo chambers.

 

And I agree, but that's not the point I'm trying to make: FB doesn't even allow nudity, so multiple reports should have gotten this material flagged for review, even if it's sent to a very specific and limited group of mods.

 

 

Not the point I was making. FB was aware of the nature of the content; they could've EASILY asked for the content without having to report the journalist (asking for the account that posted it, or something similar). The whole thing wasn't handled properly, and that's the issue here.

 

 

Best example would be the MS AI bot Tay that turned racist ... they can throw algorithms at the problem all they want, but the human factor on the other end is difficult to predict, and that makes the algorithm act in ways that aren't predictable.

LMAO

 

I'd totally forgotten about that. Didn't someone get it to say "Hitler was right" or something similar?



10 minutes ago, Master Disaster said:

No, and this is why AI is such a grey area in law (the whole "if an automated car crashes, who's to blame?" conundrum): you don't write code for an AI, you write an algorithm, give it some variables, and set it free to make up its own mind.

The driver or automotive manufacturer is to blame. In either case, blame is assigned to the individual and/or organization; it's not dismissed as an act of nature.


1 minute ago, Master Disaster said:

LMAO

 

I'd totally forgotten about that. Didn't someone get it to say "Hitler was right" or something similar?

Can't remember, I just remember how fast the internet broke it. It was hilarious.

 

 

1 minute ago, StDragon said:

The driver or automotive manufacturer is to blame. In either case, blame is assigned to the individual and/or organization; it's not dismissed as an act of nature.

Right, because the Tay bot turned racist because the devs made it that way? AI is very complex; users do have influence on the end result, that's the whole point of it. If we wanted a specific desired result, then AI and algorithms aren't the way to go: you'd make specific word and content filters and not care about context. But I'm pretty sure that's not what anyone wants.



1 minute ago, StDragon said:

The driver or automotive manufacturer is to blame. In either case, blame is assigned to the individual and/or organization; it's not dismissed as an act of nature.

You're assuming blame can be apportioned. What happens if the car was operating normally, the driver was driving safely, and the AI missed something through equipment malfunction? You can't blame Tesla because a sensor died, and you can't blame the driver because they were following instructions.

 

It's an incredibly complex moral and philosophical question that we haven't even begun to tackle yet. At some point in the future it's very likely we will see a situation where an automated car crashes and blame cannot be assigned to anybody.

 

You cannot prosecute a car so what happens?



1 minute ago, wkdpaul said:

Right, because the Tay bot turned racist because the devs made it that way? AI is very complex; users do have influence on the end result, that's the whole point of it. If we wanted a specific desired result, then AI and algorithms aren't the way to go: you'd make specific word and content filters and not care about context. But I'm pretty sure that's not what anyone wants.

 

1 minute ago, Master Disaster said:

You're assuming blame can be apportioned.

Absolutely! Blame can be, and is, apportioned in the event of software that causes harm, either in the creator's or the operator's negligence.

 

When someone's cow crosses the road and you crash into it, your insurance will go after the farmer for negligence in securing that animal. But if a wild deer crosses the road and you hit it, it's deemed an "Act of God".

 

AI has yet to be deemed an "Act of God/Nature" in law. If you can cite legal precedent I'm not aware of that says otherwise, please do so.


I was about to delete my Twitter account weeks ago; now I'm definitely deleting it. Sorry to say, but I rarely use my Twitter account unless it's for LTT and Razer stuff, and I'll survive without Twitter. In fact, I don't even use Facebook anymore unless it's for family contact.

"Whatever happens, happens." - Spike Spiegel


9 hours ago, dizmo said:

To bitch at companies like Translink.

Probably not the same Translink I'm thinking of but if it is, then that's understandable.

Our Grace. The Feathered One. He shows us the way. His bob is majestic and shows us the path. Follow unto his guidance and His example. He knows the one true path. Our Saviour. Our Grace. Our Father Birb has taught us with His humble heart and gentle wing the way of the bob. Let us show Him our reverence and follow in His example. The True Path of the Feathered One. ~ Dimboble-dubabob III


47 minutes ago, Kisai said:

Facebook, Twitter, Youtube all have algorithms that push engagement, but they make no distinction between cesspools, astroturfing or echo chambering.

 

The problem I find is that people have been asking Twitter to enforce its own rules against the ex-president since before 2016, and Twitter just didn't. Meanwhile, both left-wing and right-wing people have been complaining about being censored on Twitter and YouTube because, again, algorithms bury their content.

 

There is no secret censorship cabal. It's algorithms, and algorithms learn from the content they were trained on. Right-wing content tends to call for violence; that's why violent statements get blocked. However, the algorithm then uses that information (e.g. the person posting it and the people following the account) to determine who else to bury. That's what happens on Twitter, Facebook, and YouTube. That's why these sites turn into massive echo chambers: you have two or more groups of people all shouting at each other within their own circles, but rarely do those circles overlap.

 

As a personal example, on one Twitter account, nearly every left-wing artist I follow has been complaining to some degree about the ineptitude of the former president. On the other, where I follow YouTubers, the ex-president hasn't even been mentioned in months. You can quite literally have a completely different experience on Twitter by not following any celebrity.

 

So most claims of "waa, I'm being censored by (Twitter) for my political views" are nonsense. They were being censored because they were harassing someone publicly, like an idiot.

Yes, machine learning algorithms aren't inherently biased, but they can become biased based on the data they're trained with. To say that there has been some degree of bias in Twitter's algorithms recently wouldn't be entirely inaccurate, IMO. Without trying to get political, an example would be "election fraud" claims from both sides (one in 2016, the other in 2020): despite both being equally unfounded and largely BS, only one was actively censored and "fact checked" in every tweet. Or the summer riots and the recent ones; Twitter seemed to crack down a bit harder on "calls to violence" for one of them.

 

I have no problem with them choosing to not want conspiracy theories and calls to violence on their platform, but those rules should be enforced without bias.

CPU: Intel Core i7-5820K | Motherboard: AsRock X99 Extreme4 | Graphics Card: Gigabyte GTX 1080 G1 Gaming | RAM: 16GB G.Skill Ripjaws4 2133MHz | Storage: 1 x Samsung 860 EVO 1TB | 1 x WD Green 2TB | 1 x WD Blue 500GB | PSU: Corsair RM750x | Case: Phanteks Enthoo Pro (White) | Cooling: Arctic Freezer i32

 

Mice: Logitech G Pro X Superlight (main), Logitech G Pro Wireless, Razer Viper Ultimate, Zowie S1 Divina Blue, Zowie FK1-B Divina Blue, Logitech G Pro (3366 sensor), Glorious Model O, Razer Viper Mini, Logitech G305, Logitech G502, Logitech G402


11 minutes ago, StDragon said:

 

Absolutely! Blame can be, and is, apportioned in the event of software that causes harm, either in the creator's or the operator's negligence.

 

When someone's cow crosses the road and you crash into it, your insurance will go after the farmer for negligence in securing that animal. But if a wild deer crosses the road and you hit it, it's deemed an "Act of God".

 

AI has yet to be deemed an "Act of God/Nature" in law. If you can cite legal precedent I'm not aware of that says otherwise, please do so.

You're totally misunderstanding the point...

 

The very fact there is no legal precedent is the problem in the first place. It's never happened yet, but the overwhelming odds are that it is going to happen, and when it does, a lot of judges will have many hours of discussion ahead of them to work out the details.

 

If the car genuinely misses something then who gets the blame?



8 minutes ago, Master Disaster said:

You're totally misunderstanding the point...

 

The very fact there is no legal precedent is the problem in the first place. It's never happened yet, but the overwhelming odds are that it is going to happen, and when it does, a lot of judges will have many hours of discussion ahead of them to work out the details.

 

If the car genuinely misses something then who gets the blame?

 

This. The issue is that existing laws on liability assume everything is deterministic: if X then Y. Blame can be neatly assigned because something bad happened because an IF statement was wrong or missing; there's a clear human fault to assign blame to. With a trained algorithm that isn't true. It's not deterministic, and that's going to create hell when liability comes calling.

 

I suspect that, long term, any AI algorithm will have to go through an independent government testing procedure, in much the same way humans have to take a driving test to get a driving licence.


14 minutes ago, Master Disaster said:

 

If the car genuinely misses something then who gets the blame?

Depends on where the fault lies: with the driver failing to control the vehicle, or with the manufacturer for code that didn't control the vehicle or allow the operator (driver) to control it.

 

If say a tornado came in and blew it off the road with little warning; that would qualify as "Act of God" regardless of who or what was in control.


Hosting people's porn against their consent is not cool. It's the same as someone posting nude photos without someone's consent and the website host refusing to take them down.


6 minutes ago, StDragon said:

or the manufacturer for code that didn't control or allow the operator (driver) to control it.

 

The point here is that an AI algorithm doesn't have that programmed in from the start; it writes it itself from the training data set. If you feed the same algorithm the same data in a different order, you will get a different AI out of it; in some cases, even the same data in the same order can produce subtly different AIs. Take 30 data sets, arrange each of them 10 different ways, and feed each of those into the same algorithm 5 times, and you could conceivably get 1,500 different AIs out of the same human-written code. Many of them would be indistinguishable to an observer in the majority of situations, but they'd still show differences in specific, unique situations.
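The order-dependence can be seen even in a toy online-learning rule (illustrative only; the update rule and numbers are made up, and real training pipelines are far more complex): the same code fed the same samples in a different order converges to a different final weight.

```python
# Toy illustration (made-up update rule and data): the same learning code
# plus the same data in a different order produces a different "model".

def train(data, lr=0.5):
    """One pass of a simple online update: the weight moves toward each sample."""
    w = 0.0
    for x in data:
        w += lr * (x - w)  # later samples pull harder than earlier ones
    return w

data = [1.0, 2.0, 3.0]
print(train(data))        # 2.125
print(train(data[::-1]))  # 1.375 - same data, reversed order, different model
```

Because each update depends on the weight left by the previous one, sample order is baked into the result; that's the basic reason identical human-written code can yield many distinct models.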


1 hour ago, BuckGup said:

The issue isn't demanding perfection, as I agree that is not really possible; it's the fact Facebook has a track record of extremely inhumane practices like experimenting on suicidal teens, tracking you on other sites even if you don't have a Facebook account, or hosting fake political ads because all they care about is $$$$. In more recent news, Facebook has been crying because Apple is actually making strides to put a damper on their business model of selling your entire existence to anyone who's willing to pay. Having proper moderation that doesn't harm the experience is hard but not impossible. We just need lawmakers to actually hold tech companies to standards and ethics. As of now it's the wild west and they can simply lobby their way into anything.

I think those are all fine things to worry about and want changed.



11 minutes ago, CarlBar said:

The point here is that an AI algorithm doesn't have that programmed in from the start, it writes that itself from the training data set. 

How you rationalize it and what's codified into law are two different things. You're effectively shouting at clouds.

 

Failure of AI is no different than software that causes harm. A *HUMAN will be at fault when it's not deemed nature.

 

Again, AI has yet (as far as I'm aware) to set legal precedent as an autonomous entity akin to a natural force (or wildlife).

 

Maybe when AI becomes self aware and demands independence, then we can have that argument. But that's a ways off.

 

* Edit: a human or an organization with personhood status like a corporation.

Edited by StDragon
Clarification

How likely is it that they never actually spoke to a real person at Twitter and only talked with a bot/AI?

 

Quote

“Twitter has zero tolerance for any material that features or promotes child sexual exploitation. We aggressively fight online child sexual abuse and have heavily invested in technology and tools to enforce our policy,” a Twitter spokesperson wrote.

“Our dedicated teams work to stay ahead of bad-faith actors and to ensure we’re doing everything we can to remove content, facilitate investigations, and protect minors from harm — both on and offline.”

 

You know what's the funniest thing in all this? They treat DMCA takedowns with a lot more aggression than this. Like how easy it was for some random nobody to copyright-claim images/GIFs of Dragon Ball on Twitter in the past month or so, yet so hard to take down child porn...

 

Should've just done that instead of trying to reach out, since with a DMCA notice they pretty much have to act ASAP. It's their likeness (literally) and they don't want it online, especially not on a random Twitter profile... They likely could've done that. Would it have fixed the issue? Probably not. But it would've gotten rid of the offending video a lot sooner instead of going through the Twitter bots.

 

All in all, the parents failed their kid. It's important to teach kids not to send photos and videos of themselves to anyone on the internet, not even a boyfriend/girlfriend, since that could be used as "revenge porn" later on when/if they break up. The real repercussions of that can be disastrous.

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


  

9 hours ago, Kylan275 said:

It's hard to comment on this since it's just an allegation right now, but if true, it's pretty damning. How could they possibly find no reason to remove a video of minors engaged in sex acts? It'll be interesting to hear their response to this. 

 

I'd personally be fine if Twitter disappeared entirely forever. But that's just me. 

While I agree that they're just allegations, having just skimmed through the court document... it's pretty bad for Twitter.

 

I've actually reported something illegal on Twitter before, and can attest that their current procedures are horrible. I had to report it under a different category, as there wasn't a "this content is illegal" option. Seriously, go to Twitter right now and try clicking through to report an image as one... here are the options:

"It displays a sensitive photo or video"

-What does the media contain:

--Adult

--Violent

--Hateful

--An unauthorized photo or video

"It's abusive or harmful"

--It's disrespectful or offensive

--Includes private information

--Includes targeted harassment

--It directs hate against a protected category

--Threatening violence or physical harm

--It encourages self-harm or suicide

 

The fact that they don't have a category for reporting a picture as containing illegal content is unacceptable. I get that there is a form that can be filled out, but why make it more cumbersome to report a crime like this? While I don't necessarily agree with a "block on first report", there does need to be a quick process to attempt a block (like an instant block if it's reported and the account has "red flag" text in it); otherwise it should be set as first priority for the screeners, so the content can be removed as quickly as possible.

 

I think the point I am trying to make is that it shouldn't be easier to remove copyrighted material from Twitter than this... so at this point I do hope someone gets the book thrown at them (and potentially criminal charges against the Twitter employee who reviewed it and found nothing wrong; while I admit no screener is perfect, if they receive identification from a person and the account/tweet clearly shows what look like adolescents, that is a serious oversight).

 

Here's hoping that Twitter gets the book thrown at them (but at the same time I'm a bit worried it might cause a knee-jerk reaction where it's easy to get content removed just by reporting it, with no recourse... I could see it becoming a way to silence people by reporting legit tweets as illegal ones).

 

-DIFFERENT POINT-

Why aren't the police getting a bad rap about this? The mother contacted the police, and it's a serious case. I mean, the local police should at least have been able to contact Twitter.

 

3 hours ago, cj09beira said:

sooo where is the "suspension" from AWS? This is much more of a violation than what Parler supposedly did

Moderation is the key, though. While Twitter is failing horribly, there is at least a guise of an attempt (i.e., Parler advertised as being unwilling to do moderation).

3735928559 - Beware of the dead beef


This topic is now closed to further replies.

