Twitter Profited from Child Porn, Refused to Take it Down

TheReal1980
1 minute ago, StDragon said:

How you rationalize it and what's coded into law are two different things. You're effectively shouting at the clouds.

 

 

Currently nothing on this is coded into law (specific countries' laws may differ depending on wording). Again, the law assumes that any software failure will be the result of direct action (or inaction) on the part of a human. It literally doesn't cover a situation where that isn't the case. If you can't go "had this person done/not done X, this would not have happened", it can't assign blame right now.

 

And all of this is ignoring the question of whether any assigned fault leads to actual legal liability. That isn't cut and dried either. Air safety has examples where ground or aircrew made serious errors, but they were ultimately held not liable because they made them due to specific circumstances or realities that made such mistakes reasonable errors.

 


5 minutes ago, CarlBar said:

 

Currently nothing on this is coded into law (specific countries' laws may differ depending on wording). Again, the law assumes that any software failure will be the result of direct action (or inaction) on the part of a human. It literally doesn't cover a situation where that isn't the case.

 

lol. Please cite a case where someone avoided redress by blaming it on software and the case ended faultless.


24 minutes ago, wanderingfool2 said:

 

 

-DIFFERENT POINT-

Why aren't the police getting a bad rap about this? The mother contacted the police, and it's a serious case.  I mean, the local police should have been able to contact Twitter at least.

 

Moderation is the key though.  While Twitter is failing horribly, there is at least a guise of an attempt.  (i.e. Parler advertised itself as not willing to do moderation)

Have you ever talked to law enforcement?

 

They still use paper for everything. Most of the people you speak to in law enforcement are not interested in high-hanging fruit that requires work unless it involves a team of people to take down something nefarious. Low hanging fruit like speeders and drug/weapon offences are easy to solve because the physical evidence is usually on the subject.

 

I doubt any city police force has an "internet crimes" division. I know the RCMP/CSIS and FBI/DHS do monitor social media, but it's usually only for reports of crimes in progress (e.g. idiots bragging about crimes, and people posting evidence of crimes). But so do the news media, and you can't sabotage your own investigation by going "thank you for your report of suspected criminal activity", because then you will be held accountable for any action.

 

Private businesses do not have these requirements. They can throw content off their site for any reason they want, but it usually has to be pretty clear when it's for not-illegal content, otherwise a contract breach is made. If it's illegal, it will be vague, and often involve law enforcement in some shape, and the only reason the end user knows they're in trouble is because the content was removed for a vague "TOS violation" that may mention the part of the TOS violated.

 

When I worked for the (auction company), we were not permitted to edit the outgoing boilerplate emails for infringement, because they had been approved by legal to be exactly that. That's why you get useless emails when your content is removed from platforms, that is what Legal wants sent.

 


11 minutes ago, Kisai said:

Have you ever talked to law enforcement?

 

They still use paper for everything. Most of the people you speak to in law enforcement are not interested in high-hanging fruit that requires work unless it involves a team of people to take down something nefarious. Low hanging fruit like speeders and drug/weapon offences are easy to solve because the physical evidence is usually on the subject.

To answer the first question, I've done more than talked.  I've even been on notice of a subpoena, due to evidence I've provided to law enforcement having to be used in court.  While I did find that officers were often slow on certain things, when it came to matters relating to serious crimes they went all out (until it was no longer a critical issue, at which point it was put on the back burner).  With that said, that's why I know that if a request comes from an officer of the law it gets handled a lot quicker and more eyes fall on it.

 

The other fact being, it was reported to the police and if the police didn't do anything they should be punished harder than Twitter is (as that is literally part of their job).

3735928559 - Beware of the dead beef


2 hours ago, StDragon said:

You just made the case for why Twitter should be broken up. Math says they can't do their job. Then law says they should be shut down.

 

That's Twitter's problem, not society's. Do or do not, there is no try.

Lol, that thinking would result in the immediate shuttering of literally every discussion or user-generated-content site in the world, and every open-access forum that physically exists.

 

It literally argues for shutting down the LTT forums. 

 

It is almost 100% society's problem. Society generates the content AND the abusers.

 

I say almost because enabling is still a thing, but society has SOOO MUCH more responsibility in this situation than any one forum (presuming they are actually following the law).

 

Plus, this is assuming every country in the world even agrees on what the age of consent is, or what constitutes indecent behavior. (News flash: they don't, and it's not even close.)

 

Maybe (no, not maybe, it still wouldn't work, but whatever) if literally the entire world got together and agreed on what was acceptable censorship of content... you'd have a chance, as a host for that content, to follow the regulations without either missing significant quantities of stuff or causing huge collateral damage.

 

Again I'm not against demanding Twitter/FB/everyone does better. But your expectation is simply impossible.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


52 minutes ago, StDragon said:

lol. Please cite a case where someone avoided redress by blaming it on software and the case ended faultless.

 

Fault and criminal liability are not the same.

 

I don't know about software, but I can point to two aircraft accidents where the root cause of the accident was determined to be some form of human error, but it did not lead to any criminal liability because the errors were judged reasonable under the circumstances that occurred.


8 minutes ago, Curufinwe_wins said:

Lol, that thinking would result in the immediate shuttering of literally every discussion or user-generated-content site in the world, and every open-access forum that physically exists.

 

<snip>

 

Again I'm not against demanding Twitter/FB/everyone does better. But your expectation is simply impossible.

Well, that's the entire argument over Section 230. "Big Tech" wants to have its cake and eat it too. There are tomes of papers on that argument, and it's still not over.


Just now, StDragon said:

Well, that's the entire argument over Section 230. "Big Tech" wants to have its cake and eat it too. There are tomes of papers on that argument, and it's still not over.

It's not a hard thought experiment. Ban FB, Twitter, and Reddit, and do people stop talking online? No. They just find another platform. That platform becomes (if it isn't already) severely undermoderated as a result... back to square one.

 

Reforming, and in particular codifying, reasonable (or even achievable) expectations within Section 230 is something everyone in "Big Tech" wants, because this "damned if you do, damned if you don't" shit is getting really frustrating to everyone involved.

 

But either way, your no-tolerance perspective on possible errors or inadequacies in controlling the flow of information you deem impermissible has only one logical end. That's just what ultimatums do.



4 minutes ago, CarlBar said:

 

Fault and criminal liability are not the same.

 

I don't know about software, but I can point to two aircraft accidents where the root cause of the accident was determined to be some form of human error, but it did not lead to any criminal liability because the errors were judged reasonable under the circumstances that occurred.

To be criminal, there has to be intent. So yeah, you're correct. But fault can still be assessed even if it's just negligence.

 

When you're developing AI, you're effectively using it to replace direct human control. IMHO, it's just pure negligence to assume it can substitute for human judgement. Any reasonable person would tell you it can't.

 

AI is just a tool.

 

BTW, there are all sorts of regulations with regards to fault tolerance and safety when it comes to aircraft and medical-grade equipment. It's the litigation that makes it so expensive. Don't be surprised if similar criteria are required for social media. All it would take is additional legislation.


11 hours ago, Master Disaster said:

Err, at the age of 13 you don't have the legal authority to authorise a video of that nature in any capacity.

 

There shouldn't have been even the slightest bit of doubt from Twitter, as soon as the phrases "13 years old" and "sexual content" were combined Twitter should have pulled it instantly.

 

Disgusting.

Technically that depends on region. The age of consent is 13 in Japan, though it's looked down upon. But Twitter being a US company, you'd think they'd go for the whole 18+ thing.

 

11 hours ago, Kylan275 said:

I'd personally be fine if Twitter disappeared entirely forever. But that's just me. 

It would probably make the world a much better place. Social media in general needs to die.

 

10 hours ago, PCGuy_5960 said:

unlike Twitter who moderate their website but only when it's convenient.

Or disagrees with their political views.

9 hours ago, Spotty said:

Before the video was removed it had 167,000 views and over 2200 retweets (seriously, what the fuck?).

Didn't all the "MAPs" (minor-attracted persons; cute pen names make them seem less pedophilic) go to Twitter after being kicked off Instagram or something?

#Muricaparrotgang


2 hours ago, PCGuy_5960 said:

Yes, machine learning algorithms aren't inherently biased, but they can become biased based on the data they are trained with. To say that there has been some degree of bias in Twitter's algorithms recently wouldn't be entirely inaccurate, IMO. Without trying to get political, an example would be "election fraud" claims from both sides (one in 2016, the other in 2020). Despite both being equally unfounded and largely BS, only one was actively censored and "fact checked" in every tweet. Or the summer riots and the recent ones: Twitter seemed to crack down a bit harder on "calls to violence" for one of them.

 

I have no problem with them choosing to not want conspiracy theories and calls to violence on their platform, but those rules should be enforced without bias.

The irony here is that part of what you just said is proof that 'agendas' driven by MSM and Big Tech DO have an impact on what people believe. Spread such info around and say it enough to someone, and they eventually start to believe it, even if it's factually incorrect.

 

But your overall point is correct: whatever their 'rules', they need to be enforced equally and fairly, otherwise they're worthless.

But that's near impossible to do; people will always be biased, and 'algorithms' simply are not intelligent enough to deal with all the variety of comments and content that's present. Don't get me wrong, they can certainly do better than they do now, a lot better, but not with the people currently in charge and the people they currently employ.

CPU: Intel i7 3930k w/OC & EK Supremacy EVO Block | Motherboard: Asus P9x79 Pro  | RAM: G.Skill 4x4 1866 CL9 | PSU: Seasonic Platinum 1000w Corsair RM 750w Gold (2021)|

VDU: Panasonic 42" Plasma | GPU: Gigabyte 1080ti Gaming OC & Barrow Block (RIP)...GTX 980ti | Sound: Asus Xonar D2X - Z5500 -FiiO X3K DAP/DAC - ATH-M50S | Case: Phantek Enthoo Primo White |

Storage: Samsung 850 Pro 1TB SSD + WD Blue 1TB SSD | Cooling: XSPC D5 Photon 270 Res & Pump | 2x XSPC AX240 White Rads | NexXxos Monsta 80x240 Rad P/P | NF-A12x25 fans |


3 hours ago, Zodiark1593 said:

Not sure what punishment could be imposed that would get Twitter’s attention. Many of the big tech companies have shown themselves able to shrug off billion-dollar fines. In some cases, share price has actually increased. Raising funds isn’t a problem.
 

Suppose if you were really intent on making an example, asset forfeiture could be imposed. This is often done in the case of property used in commission of a crime. If this were to extend to intellectual and intangible property as well (as server hardware is easily replaced), you pretty much have Twitter dead to rights. 

The family could absolutely file a civil suit against Twitter. Twitter will likely settle for a large sum of money if the family has even a half-decent attorney. This is not the kind of case Twitter wants to put in front of a judge.

GPU: XFX RX 7900 XTX

CPU: Ryzen 7 7800X3D


9 minutes ago, SolarNova said:

Spread such info around and say it enough to someone, and they eventually start to believe it, even if it's factually incorrect.

What you described is basically propaganda and yeah, it absolutely works.

11 minutes ago, SolarNova said:

But that's near impossible to do; people will always be biased, and 'algorithms' simply are not intelligent enough to deal with all the variety of comments and content that's present. Don't get me wrong, they can certainly do better than they do now, a lot better, but not with the people currently in charge and the people they currently employ.

Well, a good way to eliminate bias is to have algorithms that are biased in equal and opposite ways, so in the end the bias kind of cancels out. Not to mention that the algorithms can be tweaked by their creators if they don't work as intended, so with some tweaking, making an algorithm that isn't very biased shouldn't be that hard for a company as big as Twitter.
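To make that "equal and opposite bias" idea concrete, here's a toy sketch. Everything here is invented for illustration (the `moderate` helper, the scores, the threshold); it has nothing to do with Twitter's actual systems, it just shows how averaging two models whose errors lean in opposite directions can pull the combined decision toward neutral:

```python
# Toy sketch: combine two classifiers whose errors lean in opposite
# directions by averaging their scores, so the net bias partly cancels.
def moderate(score_a: float, score_b: float, threshold: float = 0.5) -> bool:
    """score_a/score_b are flagging probabilities from two independently
    trained models; flag content only when their average crosses threshold."""
    return (score_a + score_b) / 2 >= threshold

# One model over-flags (0.9) while the other under-flags (0.2);
# the averaged score (0.55) lands much closer to a neutral judgement.
print(moderate(0.9, 0.2))   # flagged
print(moderate(0.3, 0.2))   # not flagged
```

Of course this only helps if the two models' biases really are roughly opposite and comparable in size, which is exactly the "tweaked by their creators" part of the argument.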

CPU: Intel Core i7-5820K | Motherboard: AsRock X99 Extreme4 | Graphics Card: Gigabyte GTX 1080 G1 Gaming | RAM: 16GB G.Skill Ripjaws4 2133MHz | Storage: 1 x Samsung 860 EVO 1TB | 1 x WD Green 2TB | 1 x WD Blue 500GB | PSU: Corsair RM750x | Case: Phanteks Enthoo Pro (White) | Cooling: Arctic Freezer i32

 

Mice: Logitech G Pro X Superlight (main), Logitech G Pro Wireless, Razer Viper Ultimate, Zowie S1 Divina Blue, Zowie FK1-B Divina Blue, Logitech G Pro (3366 sensor), Glorious Model O, Razer Viper Mini, Logitech G305, Logitech G502, Logitech G402


4 minutes ago, valdyrgramr said:

But there are also a lot of problems with the articles.   I do have to wonder if this even fits the Tech News section considering there's no provided evidence, and one source is a far-right outlet in near-InfoWars territory.  The other one isn't that credible either, heavily opinionated and never neutral.   So this is coming off as literal fake news by far-righters attacking left-heavy platforms.  Point is, this just feels like a bias/agenda thing.

This was discussed earlier in this thread. While the sources may not be the most credible, in one of the articles there is a link to the court filings and the case is basically what the articles are saying, so this isn't fake news.



9 hours ago, Master Disaster said:

It's really not that hard: the second a tweet is reported, the moderator should instantly remove it, full stop.

wow, that can't possibly be abused /s


I left Twitter - it's too toxic.

9900K  / Asus Maximus Formula XI / 32Gb G.Skill RGB 4266mHz / 2TB Samsung 970 Evo Plus & 1TB Samsung 970 Evo / EVGA 3090 FTW3.

2 loops : XSPC EX240 + 2x RX360 (CPU + VRMs) / EK Supremacy Evo & RX480 + RX360 (GPU) / Optimus W/B. 2 x D5 pumps / EK Res

8x NF-A2x25s, 14 NF-F12s and a Corsair IQ 140 case fan / CM HAF Stacker 945 / Corsair AX 860i

LG 38GL950G & Asus ROG Swift PG278Q / Duckyshine 6 YOTR / Logitech G502 / Thrustmaster Warthog & TPR / Blue Yeti / Sennheiser HD599SE / Astro A40s

Valve Index, Knuckles & 2x Lighthouse V2

 

 


55 minutes ago, valdyrgramr said:

Ya, fun fact: in the US, you as a minor can be listed as an offender just for uploading this type of content of yourself.

Even possessing underage selfies of yourself appears to be legally problematic, let alone sharing. 
 

While the crime of child exploitation is abhorrent, the legal means put into place seem to take the form of a sledgehammer that is liable to strike even those it was meant to protect. Were the laws intended this way to give prosecutors additional tools against those who may fall outside norms, or was this knee-jerk lawmaking? 😕

 

My eyes see the past…

My camera lens sees the present…


6 minutes ago, Zodiark1593 said:

Even possessing underage selfies of yourself appears to be legally problematic, let alone sharing. 
 

While the crime of child exploitation is abhorrent, the legal means put into place seem to take the form of a sledgehammer that is liable to strike even those it was meant to protect. This area of law is rife with knee-jerk lawmaking, with little in the way of leeway or actual targeting of those with malicious intent. 😕

I really don't get that; a guy got charged for having a picture of himself from when he was younger on his phone, in the US I believe.

That is just so weird... then again, one can be charged with a lot of BS over there.


I've read a good chunk of the discussion here and my personal takeaways are:

 

1- It's definitely not an excuse that Twitter can't moderate itself properly. To quote my favorite green midget, "Do or do not; there is no try", and if Twitter is gonna do it they better go the distance.

 

2- AI is not the catch-all solution to humanity's problems and, looking at what has happened in the recent years, you can't solve Human problems with AI. You need human solutions to human problems, to misquote some random pop culture thing again.

 

3- I've already distanced myself to the fullest extent from social media; however, it seems like it's here to stay, and it seems like it's still in its infancy. I still feel like it's "fake" social interaction, but I don't think it's going away, so I see in social media companies a responsibility to "humanize" social media, and I think AI is not the way.

 

I think human moderators are the only way (cfr. 2-). Megacorporations are not going to want to do that though, because that means generating jobs, and those hurt your bottom line. I'm a huge fan of automating what can be automated, but moral judgment cannot be automated because it's not a repetitive action. It requires a decision every single time.

 

That's my input on this. Please continue to be civil.

We have a NEW and GLORIOUSER-ER-ER PSU Tier List Now. (dammit @LukeSavenije stop coming up with new ones)

You can check out the old one that gave joy to so many across the land here

 

Computer having a hard time powering on? Troubleshoot it with this guide. (Currently looking for suggestions to update it into the context of <current year> and make it its own thread)

Computer Specs:

Spoiler

Mathresolvermajig: Intel Xeon E3 1240 (Sandy Bridge i7 equivalent)

Chillinmachine: Noctua NH-C14S
Framepainting-inator: EVGA GTX 1080 Ti SC2 Hybrid

Attachcorethingy: Gigabyte H61M-S2V-B3

Infoholdstick: Corsair 2x4GB DDR3 1333

Computerarmor: Silverstone RL06 "Lookalike"

Rememberdoogle: 1TB HDD + 120GB TR150 + 240 SSD Plus + 1TB MX500

AdditionalPylons: Phanteks AMP! 550W (based on Seasonic GX-550)

Letterpad: Rosewill Apollo 9100 (Cherry MX Red)

Buttonrodent: Razer Viper Mini + Huion H430P drawing Tablet

Auralnterface: Sennheiser HD 6xx

Liquidrectangles: LG 27UK850-W 4K HDR

 


36 minutes ago, valdyrgramr said:

Well, to be fair, it technically is.  They're, for one, violating that federal age-restriction law here.   It's a nude image, and they are a child.   Then there's the factor that they live in Florida, which, like Ohio, punishes the victims more than the actual criminals.  Typically, it's an "it depends" factor.   And the parents are more likely to be held liable than Twitter.  At the same time, it's kind of a good thing that Twitter doesn't view it, as that could have other impacts.   For example, does the child and parent really want some randos at a corporation viewing and moderating something like that?  Or would it be better if we improved the software while having it taken down by someone they're more willing to trust?   I think, as Kisai said, that is a factor.  Plus, the psychological toll of seeing something like that over and over could have even more negative impacts on the person viewing it.   Personally, I think Twitter and other social media platforms should raise the age restriction, as it can help (though it's not foolproof) restrict said material from being passed around.

Even if the big companies boot those under 18, verifying age isn’t possible without going the legal ID route, and even then, kids have other, potentially unmoderated platforms they can go to. 
 

The internet in general could probably benefit from an age restriction, though as internet-capable devices are so common (the Raspberry Pi is one of the cheaper examples), this is a practical impossibility. 
 

Given the myriad of bad options, leaving at least the moderated platforms available to kids seems the lesser evil. 😕

 



14 minutes ago, Energycore said:

I think human moderators are the only way (cfr. 2-). Megacorporations are not going to want to do that though, because that means generating jobs, and those hurt your bottom line. I'm a huge fan of automating what can be automated, but moral judgment cannot be automated because it's not a repetitive action. It requires a decision every single time.

I think one of the problems with this is that it's likely harder to actually find employees to do the job.  They are exposed to the most vile content, and from what I've heard many end up with things like PTSD.  On top of that, you would want to make sure they are vetted correctly (serious background checks) and maybe have them isolated from others.  It really becomes a nightmare to moderate.  Not saying that they shouldn't, just I do understand why they try letting AI do things until there is no other option.  (Which of course in this case, it should have reached a human very quickly).

 

56 minutes ago, valdyrgramr said:

Well, I am doing research on the matter.  One, a court filing doesn't really mean anything in terms of credibility.  Two, from what I've found so far, the only ones reporting it are Fox News (heavy right), Mediaite (random rambles that are mostly opinionated), the NY Post (near InfoWars), and the Christian Post (biased for several reasons, but I won't get into all of that due to the rules).  I mean, I'll keep reading them, but it's listed on an organization's site, not a court's site, from what I've seen so far.  That can be faked, but I might have to go back, as this doesn't seem legit; and even if they were suing, even those biased sites are just claiming these are allegations.

...a 79-page document that contains screenshots and other evidence.  Yes, it could be fake, but realistically it likely isn't (given the amount of work that would be required to fake a 79-page court document).  On top of which, it brings me back to my other post...the points made in the document are actually very, very valid.

 

Seriously, look at the options they give when trying to report an image...they seriously make it easier to report copyright infringement.  It wouldn't kill them to have a "this post contains illegal content" option.  Like seriously, I've had first-hand experience of reporting illegal images (not of this kind of thing, but one that was clearly illegal), and it took them nearly a month to do anything.



7 minutes ago, wanderingfool2 said:

It wouldn't kill them to have a "this post contains illegal content" option.  Like seriously, I've had first-hand experience of reporting illegal images (not of this kind of thing, but one that was clearly illegal), and it took them nearly a month to do anything.

I feel this was likely a very intentional measure to ensure plausible deniability, unless an actual human were reached. 



28 minutes ago, wanderingfool2 said:

I think one of the problems with this is that it's likely harder to actually find employees to do the job.  They are exposed to the most vile content, and from what I've heard many end up with things like PTSD.  On top of that, you would want to make sure they are vetted correctly (serious background checks) and maybe have them isolated from others.  It really becomes a nightmare to moderate.  Not saying that they shouldn't, just I do understand why they try letting AI do things until there is no other option.  (Which of course in this case, it should have reached a human very quickly).

I did think about this when I wrote the moderation bit.

 

I feel like humans need to be the ones making the decision, BUT you do need to have a small number of highly trained and carefully selected individuals to do the process, which you mentioned.

 

On the other hand, AI -could- participate in a portion of the process. For instance, I'm sure it's realistic to train an AI to identify nudity in videos and correlate that with words that imply the presence of underage persons in the video. As long as the task is repetitive (a task like "look at this video and determine whether there are naked people in it" is a lot more repetitive than a task like "determine whether the intent of this video is malicious or not"), AI can solve that problem, flag reports as issues or non-issues, and send the few posts that are an issue to the aforementioned team.

 

That's the proposal I wanted to make, and I sort of wish I'd included that in my first comment (I thought it would have made it too long).
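The two-stage proposal above could be sketched roughly like this. Everything is hypothetical and invented for illustration (`Report`, `TriageQueue`, the score fields, the 0.8 threshold); it's just the shape of "automated screening first, small human team for the hard calls":

```python
# Hypothetical sketch of two-stage triage: an automated classifier does
# the repetitive screening, and only the reports it flags are queued
# for a small, trained human review team.
from dataclasses import dataclass, field

@dataclass
class Report:
    report_id: int
    nudity_score: float   # assumed output of an image/video classifier
    minor_keywords: bool  # assumed text signal implying underage subjects

@dataclass
class TriageQueue:
    human_review: list = field(default_factory=list)
    auto_dismissed: list = field(default_factory=list)

    def triage(self, report: Report) -> None:
        # The repetitive pattern-matching stays automated...
        if report.nudity_score > 0.8 and report.minor_keywords:
            # ...but the final judgement call goes to a human.
            self.human_review.append(report.report_id)
        else:
            self.auto_dismissed.append(report.report_id)

queue = TriageQueue()
queue.triage(Report(1, 0.95, True))
queue.triage(Report(2, 0.10, False))
print(queue.human_review)    # [1]
print(queue.auto_dismissed)  # [2]
```

The point of the split is exactly the one made above: the classifier shrinks the volume of material the human team has to see, without ever making the final moral judgement itself.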



15 minutes ago, valdyrgramr said:

So, why is it on an organization's page and not a courthouse's?   Why is it only being covered by sources that have an agenda against Twitter due to political and other biases?   There's honestly nothing solid in there.   On top of that, a court filing still doesn't mean anything.   It needs to be accepted by a clerk, otherwise the filing means nothing, and uploading it to an organization's site means literally nothing.   Again, there's no solid proof at the moment.

It was submitted 2 days ago...you can't really expect that news outlets would all jump on it yet.  It does mean something, as the evidence does carry more credence (I could be wrong, but submitting falsified evidence would be perjury).  https://www.pacermonitor.com/public/case/37985226/Doe_v_Twitter,_Inc

While it's not like a .gov site, the site itself does show the cases and status of the court that it was filed in, so it would be reasonable to assume that it is a real court filing.

 

There could be plenty of reasons why it hasn't been covered yet...since it's just starting to come out.  The fact is, it's filed in court, and if you read the document the allegations are quite serious (and given that the evidence being presented was responses from Twitter, it would be fairly easy for Twitter to disprove...which again would constitute perjury, and also put the plaintiff as liable if they were lying).  As it stands now, I am leaning towards believing the current claims until Twitter comes out with their response...because again, some of the claims being made actually line up with the experience I had reporting on Twitter.



This topic is now closed to further replies.

