
Apple Clarifies New iPhone Child-Protection Features

deblimp
16 minutes ago, RejZoR said:

Who is this protecting then if people can just disable it and be on their way? What's the point of its existence then (other than some other hidden motive they aren't telling us about)? This is why I'm questioning it. 

Stopping that crap before it gets onto Apple's servers? Apple isn't the world police, and it's outside the scope of their tool to single-handedly solve child pornography.

 

And pedos are not always as bright as you think: Facebook reported 20M CSAM incidents last year alone. You’d think they wouldn’t upload those pics to Facebook if they had half a brain. 


The amount of “conspiracy-like” reasoning this drama is uncovering troubles me. 

 

Some of us think we're better than our anti-vaxxer uncle, but then we fall for the same kind of mental traps.


52 minutes ago, RejZoR said:

Especially given how easy it is to disable it by not using iCloud which makes you question its actual purpose. Who is this protecting then if people can just disable it and be on their way? What's the point of its existence then (other than some other hidden motive they aren't telling us about)? This is why I'm questioning it. It's just too easy and it's what I'd do if I'd want to throw a bone to naysayers and make them let it get rolled out. And it reeks exactly of that.

It seems pretty obvious to me that the reason for it is to protect Apple from liability for hosting CSAM images on their iCloud servers. Thinking it's all some grand conspiracy (((they))) are trying to cover up with the CSAM angle feels a little far-fetched.


Clarified what exactly? No matter how you try to dress a pig up, it's still a pig lol.

Even if I trusted Apple to meaningfully stand up to governments (lmao), what happens when there's another Pegasus-tier hack? It's going to be pretty bad when hackers put CSAM on someone's iPhone, enable iCloud, and get Apple to rat on them. It'll be like the Rube Goldberg machine of police raids, and you'll have to somehow prove yourself innocent in court. Imagine trying to explain to some 70-year-old judge or a boomer jury that hackers hacked your iPhone; they'll think "this pedo is a liar, Apple said iPhones have the best security 🥴"

And never forget that with the way this system works, the absolute best-case scenario is that they'll only catch low-level consumers dumb enough to still put that filth on their phones. The system only scans for known CSAM, so the people the police should be focusing on (producers) will get off scot-free (assuming they're even using iPhones to film that evil garbage, which I highly doubt).

This is like 95% risk for 5% benefit. I don't have much of a problem with the parental control features.


2 minutes ago, SeriousDad69 said:

Clarified what exactly? No matter how you try to dress a pig up, it's still a pig lol.

Even if I trusted Apple to meaningfully stand up to governments (lmao), what happens when there's another Pegasus-tier hack? It's going to be pretty bad when hackers put CSAM on someone's iPhone, enable iCloud, and get Apple to rat on them. It'll be like the Rube Goldberg machine of police raids, and you'll have to somehow prove yourself innocent in court. Imagine trying to explain to some 70-year-old judge or a boomer jury that hackers hacked your iPhone; they'll think "this pedo is a liar, Apple said iPhones have the best security 🥴"

And never forget that with the way this system works, the absolute best-case scenario is that they'll only catch low-level consumers dumb enough to still put that filth on their phones. The system only scans for known CSAM, so the people the police should be focusing on (producers) will get off scot-free (assuming they're even using iPhones to film that evil garbage, which I highly doubt).

This is like 95% risk for 5% benefit. I don't have much of a problem with the parental control features.

This hypothetical hack you’re talking about makes absolutely no sense and is a pretty ridiculous reason to oppose this. 


25 minutes ago, SeriousDad69 said:

Clarified what exactly?

Well, one completely new piece of information we got from Federighi is that the number of yellow flags before you get a red flag is around 30 matches. Before, we only knew there was an unspecified threshold.

 

There's no risk of the system flagging you if you're tricked into downloading one or two of these offending pics. (In my 25 years on teh internetz I've luckily never come across this kind of picture, so even getting one "by accident" sounds like a lot, let alone 30.)
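
For a rough sense of why that threshold matters, here's a minimal back-of-the-envelope sketch in Python. Apple's published claim is about the per-account false-flag odds (roughly one in a trillion per year), not a per-image rate, so the per-image false-match rate and the library size below are made-up assumptions purely for illustration.

```python
import math

def prob_at_least_k_matches(n_photos: int, p_per_image: float, k: int) -> float:
    """P(X >= k) for X ~ Binomial(n_photos, p_per_image): the chance that an
    innocent library of n_photos photos produces at least k false matches,
    treating each photo as an independent trial."""
    cdf = sum(math.comb(n_photos, i) * p_per_image**i * (1 - p_per_image)**(n_photos - i)
              for i in range(k))
    return max(0.0, 1.0 - cdf)  # clamp away tiny negatives from floating-point rounding

# Assumed numbers: a 50,000-photo library and a 1-in-a-million per-image false-match rate.
print(prob_at_least_k_matches(50_000, 1e-6, 1))    # ~0.049: one accidental match is plausible
print(prob_at_least_k_matches(50_000, 1e-6, 30))   # effectively zero: 30 accidental matches is not
```

The point of the arithmetic is just that a handful of stray matches scales very differently from 30 of them, which is presumably why a threshold exists at all.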


32 minutes ago, deblimp said:

It seems pretty obvious to me that the reason for it is to protect Apple from liability for hosting CSAM images on their iCloud servers. Thinking it's all some grand conspiracy (((they))) are trying to cover up with the CSAM angle feels a little far-fetched.

No one said it's a conspiracy, but if you allow them to do all sorts of fancy scanning/monitoring, at some point some idiot will get the bright idea to use it against political dissidents, publications, reporters or private citizens. Which is why it has to be stopped, or questioned at the very least. I mean, just look at PRISM. It went so far without the public knowing about it that it's now so big even multibillion-dollar corporations HAVE TO participate in it and can't do anything about it. If there had been mass protests and pushback from companies, it would have been killed off. Nowadays just talking about it makes you look like a conspiracy theorist, because it's been the norm for so long that people don't even question it, or it's so normalized it's not talked about except by privacy-"obsessed" people.


1 hour ago, RejZoR said:

How do you know for sure? The thing isn't open source. There can easily be a mechanism that does something else and detects image gov wants flagged and pings Apple servers among bunch of pings it already does.

The same was true before they released this feature. Adding this feature has no effect on whether there is secret code on the phones doing other scanning for governments.

 

 

1 hour ago, RejZoR said:

Especially given how easy it is to disable it by not using iCloud which makes you question its actual purpose.

The purpose is not child protection; its purpose is to remove the liability of having this content on iCloud. If you disable iCloud, then that's nice and easy for Apple: you're not putting this content on iCloud. And no, they do not scan the images in the library on your phone; they scan as they upload (at the same time as producing small thumbnails etc.).
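
To make the "it only scans as part of the upload" point concrete, here's a toy sketch of that gating in Python. The function names and the trivial helper bodies are invented for illustration; this is not Apple's code or API, and SHA-256 is just standing in for the real perceptual hash.

```python
import hashlib

def make_thumbnail(photo: bytes) -> bytes:
    return photo[:16]                         # stand-in for the real downscaling step

def compute_safety_voucher(photo: bytes) -> bytes:
    return hashlib.sha256(photo).digest()     # stand-in for NeuralHash plus the voucher crypto

def prepare_for_icloud_upload(photo: bytes, icloud_photos_enabled: bool):
    if not icloud_photos_enabled:
        return None                           # nothing is queued for upload, so nothing is hashed
    # the voucher is produced alongside the other upload-time processing (thumbnails etc.)
    return {"thumbnail": make_thumbnail(photo), "voucher": compute_safety_voucher(photo)}

print(prepare_for_icloud_upload(b"some pixels", icloud_photos_enabled=False))  # None: no scan at all
print(prepare_for_icloud_upload(b"some pixels", icloud_photos_enabled=True))   # thumbnail + voucher
```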

 


22 minutes ago, deblimp said:

This hypothetical hack you’re talking about makes absolutely no sense and is a pretty ridiculous reason to oppose this. 

While I think it really isn't practical the way he said it, it does stand to reason that someone close to you could frame you like that (for revenge). It would also be a lot harder to prove your innocence if that were to happen.

 

12 hours ago, deblimp said:

Apple is scanning the images on the phone rather than their servers, but only on images about to be uploaded

For now at least... I would rather the processing happen on the server side than have the hash created on client-side devices. Actually, to me, this also screams of wanting to cut down on server costs (of scanning the uploaded images themselves). If it's done on client-side devices, I could still see this as a slippery slope: once it has "proven" to be effective, they could start expanding it to other aspects (a lot more easily than before). Slowly introduce a few features over time that eventually lead to the entire device being scanned ("to maintain security").

 

12 hours ago, deblimp said:

Explicitly, the list of neural hashes. This allows anyone to verify that Apple is doing what they are claiming to be doing

Actually, it doesn't let you know what Apple is doing. They don't have to have their servers do the grunt work anymore (why not offload it to the customers?). From there, they could match the hashes any way they like. They could also still do what Google does (they can still access the data, as the encryption keys are stored on Apple's servers). And you can't actually tell what they are searching for: from the video, it seems they store the hash library they compare against on their servers, so you can't see what they are looking for.

 

4 hours ago, Curufinwe_wins said:

The 1 in 10e9 per account false report is much less impressive when you consider that for 30 matches

So here is the tricky part. Will the police execute search warrants to get Apple to reveal accounts that have even one match? While Apple may argue that their system won't "flag" anyone until 30 matches, you have to wonder whether the police will use the law to get Apple to hand over the data of everyone who had a single match... which goes back to the slippery-slope argument again (what would prevent, say, the US gov't from adding in a few images that are legal but are then used to get a person in trouble?).

 

I am aware that a similar thing can happen with Google.



1 hour ago, deblimp said:

This hypothetical hack you’re talking about makes absolutely no sense and is a pretty ridiculous reason to oppose this. 

It's not ridiculous; hacking someone and dumping CSAM on their device isn't as uncommon as you think. In the Snowden leaks, one of the tools had a function to do just that.


51 minutes ago, huilun02 said:

The whole point is that Apple doesnt need to do this

If someone uploads something bad, Apple can already easily prove it was the user who uploaded it. No one is going after Apple for what a phone user did.

 

They’re pre-labeling (the label being “CSAM” or “not CSAM”) photos that are for all intents and purposes on their way to Apple servers.

 

They are not looking at pics you have not already surrendered to iCloud Photos. What happens on your iPhone stays on your iPhone.

 

The fact they’re doing half of the scanning process on the client side rather than on the server side is just a technicality that allows them to mess with the photos less once they’re already on their servers in an encrypted state. 


4 hours ago, SlidewaysZ said:

LIES LIES LIES. If the software is on the phone it will be abused as a back door by governments, cough China cough. I'm sick of Apple sitting on their high horse of privacy and security while time and time again there are exploited Macs, unpatched iPhone flaws known to their developers, and government agencies hacking their devices, meanwhile they bend over backwards for China. I wouldn't have as much of a problem with it, since obviously other tech companies have security issues and major privacy issues too, but they continually talk about their focus on security and privacy yet violate it over and over. Look, I understand this is a wide-scale practice, which honestly isn't amazing. I'm 100% in favor of protecting kids, but privacy can't go out the window as well.

 

"Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety"

 

However, I am sick and tired of Apple getting away with their lies about privacy and security, and their feet need to be held to the fire. I trust them no more than any other company, period.

What are you going on about? They clearly explained how it works, and honestly, at this point it's actually a pretty decent and smart implementation. All other providers already scan photo by photo on their servers. Apple's is on-device, which makes it harder for any government to force Apple to do anything.

1 hour ago, SeriousDad69 said:

Clarified what exactly? No matter how you try to dress a pig up, it's still a pig lol.

Even if I trusted Apple to meaningfully stand up to governments (lmao), what happens when there's another Pegasus-tier hack? It's going to be pretty bad when hackers put CSAM on someone's iPhone, enable iCloud, and get Apple to rat on them. It'll be like the Rube Goldberg machine of police raids, and you'll have to somehow prove yourself innocent in court. Imagine trying to explain to some 70-year-old judge or a boomer jury that hackers hacked your iPhone; they'll think "this pedo is a liar, Apple said iPhones have the best security 🥴"

And never forget that with the way this system works, the absolute best-case scenario is that they'll only catch low-level consumers dumb enough to still put that filth on their phones. The system only scans for known CSAM, so the people the police should be focusing on (producers) will get off scot-free (assuming they're even using iPhones to film that evil garbage, which I highly doubt).

This is like 95% risk for 5% benefit. I don't have much of a problem with the parental control features.

Your hypothetical scenario is stupidly dumb. One could just as easily store some CSAM photos on your phone and then tip off the local police. That honestly sounds like a much easier way than getting 30 different CSAM photos, making sure they are in the database of the organization that provides the data, waiting for Apple's servers to actually flag them, getting someone at Apple to double-check it, and then getting it to law enforcement.

 

Or one could just get one CSAM photo and upload it to the other person's facebook, google, twitter account, etc to get them in trouble.

 

The lengths you go to bash Apple are quite insane.


10 hours ago, Curufinwe_wins said:

It doesn't just apply to iCloud. They also (admittedly with a separate system) scan iMessages (they would have to in order to blur the relevant images if they 'think' it is of a sexual nature). And once this system exists there is nothing to stop countries from demanding the hash database is changed (using the exact same mechanism) to force reporting of dissidence etc. Or from the hash database/reporting servers itself from being hacked.

 

The current argument from Apple has always been they (claim to) lack the ability to enforce various requests from government agencies, and that is the ONLY reason they are not legally compelled to comply, since they do now have the ability to track photos against any arbitrary database of images, deliver the checks to government agencies... that legal argument disappears. 

 

There is nothing from the more recent "clarifications" (there are no clarifications, the whitepaper was rather clear and it wasn't misunderstood) that changes these perspectives. https://www.eff.org/deeplinks/2021/08/if-you-build-it-they-will-come-apple-has-opened-backdoor-increased-surveillance 

 

Reminder that Apple already has completely handed control of data generated in China to the chinese authorities... https://www.nytimes.com/2021/05/17/technology/apple-china-censorship-data.html

And you didn't even bother to watch the video.

There are two separate, independent features: CSAM scanning on photos being *voluntarily* prepared for upload to iCloud, and a completely independent, fully on-device warning for photos with nudity for iPhone users under the age of 12, which appropriately gives parents a heads-up in case the child ends up ignoring two pages of warnings not to open it.

 

They actually don't have the capability to entertain any such request from a government for images outside CSAM. The database has to come from two separate agencies under different governments, and the on-device analysis needs at least 30 image matches before a flag is raised. Then Apple will manually verify whether this is indeed CSAM, and only then will the appropriate agencies know. Also, since it happens partially on-device and partially in the cloud, it is much more difficult for a government to pressure Apple into scanning images than if it were only ever server-side, like literally every other service.
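
A toy illustration of that two-agency requirement: only hashes present in the lists of both child-safety organizations (operating under different governments) would make it into the on-device database, so a single government can't unilaterally slip its own targets in. The agency names and hash values below are invented for the example.

```python
# Hypothetical hash lists from two child-safety organizations in different jurisdictions.
agency_a_hashes = {"hash_001", "hash_002", "hash_003", "hash_004"}
agency_b_hashes = {"hash_002", "hash_003", "hash_004", "hash_999_unilateral_addition"}

# Only the intersection ships to devices; a hash supplied by just one agency never does.
on_device_database = agency_a_hashes & agency_b_hashes
print(sorted(on_device_database))   # ['hash_002', 'hash_003', 'hash_004']
```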


1 minute ago, huilun02 said:

As I said earlier, Apple should not be doing the job of the police. There is no onus on Apple to report anything to the police. Uploading of a bad file is done by the user, not by Apple. Apple doesn't need to absolve itself of any wrongdoing.

 

That’s not how this works actually. 

 

Companies are not the police but in some instances they’re required to report illegal stuff.

 

Up to 2020, Apple was the company that did the least to address this particular issue. With 850M active iCloud users, they would report fewer CSAM incidents than Adobe, and far (we're talking thousand-fold) fewer incidents than Google, MS and Facebook.

 

Maybe the balance they struck up to 2020 (between user privacy and not being a harbor for CSAM) was not necessarily the best, and they're rebalancing. In a way that's frankly better for privacy (until slippery slopes are invoked, but those could be invoked for server-side searches as well) than server-side searches that equally mess with the pics of both innocent and not-so-innocent accounts. The pre-labeling is a filter that lets innocent users go unbothered once their data is on the servers, and it could lead to stronger on-server encryption.


3 hours ago, RedRound2 said:

What are you going on about? They clearly explained how it works, and honestly, at this point it's actually a pretty decent and smart implementation. All other providers already scan photo by photo on their servers. Apple's is on-device, which makes it harder for any government to force Apple to do anything.

Your hypothetical scenario is stupidly dumb. One could just as easily store some CSAM photos on your phone and then tip off the local police. That honestly sounds like a much easier way than getting 30 different CSAM photos, making sure they are in the database of the organization that provides the data, waiting for Apple's servers to actually flag them, getting someone at Apple to double-check it, and then getting it to law enforcement.

 

Or one could just get one CSAM photo and upload it to the other person's facebook, google, twitter account, etc to get them in trouble.

 

The lengths you go to bash Apple are quite insane.

I pointed out one potential exploit lol. The most likely one is China/Russia/etc. saying "start scanning for these images in iCloud or you can't sell here" and giving them a bunch of popular pro-democracy/pro-LGBTQ memes. Would Apple agree? Flip a coin and guess what the almighty shareholders would want. Wendell from Level1Techs brought up how the MPAA and RIAA could try to get Apple scanning for pirated movies and music now that Apple is about to set a precedent for on-device scanning.

I don't expect anything to happen immediately, but ~5-10 years down the line there is definitely going to be something. Maybe iOS 19 removes the option to disable iCloud, maybe Apple loses a court case and has to start scanning for other illegal content like pirated movies. Maybe there's another Pegasus-tier hack and hackers abuse the system in some way.

There are endless risks to this system, and the only potential reward is catching a few low-level pedos dumb enough to still put that filth on their iPhones after this massive commotion. Sorry, not worth it.

(And I like Apple products, I've only owned iPhones and I prefer MacBooks over PC Laptops, but that might have to change.)


5 hours ago, wanderingfool2 said:

For now at least... I would rather the processing happen on the server side than have the hash created on client-side devices. Actually, to me, this also screams of wanting to cut down on server costs (of scanning the uploaded images themselves). If it's done on client-side devices, I could still see this as a slippery slope: once it has "proven" to be effective, they could start expanding it to other aspects (a lot more easily than before). Slowly introduce a few features over time that eventually lead to the entire device being scanned ("to maintain security").

 

That's the thing: if they did it that way, they would not be able to keep the data truly encrypted server-side. What happens now is the data is scanned, then encrypted, then stored on their servers.

 

People get nervous that the encryption key is also stored on their servers, but that is just meant to allow you to access your backup if you lose all of your trusted devices.


5 hours ago, wanderingfool2 said:

While I think it really isn't practical the way he said it, it does stand to reason that someone close to you could frame you like that (for revenge). It would also be a lot harder to prove your innocence if that were to happen.

They can also just shoot you.

 

5 hours ago, wanderingfool2 said:

So here is the tricky part. Will the police execute search warrants to get Apple to reveal accounts that have even one match?

 

You know that a judge has to issue a search warrant first... which means probable cause... which means legal requirements, which means the government passing laws.

 

The tools exist. If you're worried about how they might be used, stop looking at the tool makers and look at the people making laws about them.

 

5 hours ago, huilun02 said:

The whole point is that Apple doesnt need to do this

If someone uploads something bad, Apple can already easily prove it was the user who uploaded it. No one is going after Apple for what a phone user did.

It seems more and more service providers are being taken down for copyright violations (i.e. piracy) uploaded by users... yet you think child porn wouldn't be an issue?

 



2 hours ago, SeriousDad69 said:

I pointed out one potential exploit lol. The most likely one is China/Russia/etc. saying "start scanning for these images in iCloud or you can't sell here" and giving them a bunch of popular pro-democracy/pro-LGBTQ memes. Would Apple agree? Flip a coin and guess what the almighty shareholders would want. Wendell from Level1Techs brought up how the MPAA and RIAA could try to get Apple scanning for pirated movies and music now that Apple is about to set a precedent for on-device scanning.

I don't expect anything to happen immediately, but ~5-10 years down the line there is definitely going to be something. Maybe iOS 19 removes the option to disable iCloud, maybe Apple loses a court case and has to start scanning for other illegal content like pirated movies. Maybe there's another Pegasus-tier hack and hackers abuse the system in some way.

There are endless risks to this system, and the only potential reward is catching a few low-level pedos dumb enough to still put that filth on their iPhones after this massive commotion. Sorry, not worth it.

(And I like Apple products, I've only owned iPhones and I prefer MacBooks over PC Laptops, but that might have to change.)

How is this any different from what we have today then?

 

It's not really a secret that Google goes through your Google Photos and uses them to train their neural networks (I presume that's why they suddenly got really good at photography with the Pixel line). At any point, all the use cases you are talking about could also be implemented in one fell swoop by every service today. There's nothing new in what Apple is doing. Rather, they're doing the same thing as others like Google, Microsoft, Facebook, or any service where you can store user data, except in a better way that maintains user content privacy. That's the point of having one half of it happen on-device.

 

Second, these governments can force Apple to do the same things today as well: do it or get banned. Having this feature or not is not going to prevent any scenario you are talking about in the future. And it's next to impossible for any of those governments to sneakily (if that's your concern) use this functionality, as the data needs to match 30+ data points and be verified by Apple themselves before it ever reaches the body that wants it.


2 hours ago, RedRound2 said:

How is this any different from what we have today then?

 

It's not really a secret that Google goes through your Google Photos and uses them to train their neural networks (I presume that's why they suddenly got really good at photography with the Pixel line). At any point, all the use cases you are talking about could also be implemented in one fell swoop by every service today. There's nothing new in what Apple is doing. Rather, they're doing the same thing as others like Google, Microsoft, Facebook, or any service where you can store user data, except in a better way that maintains user content privacy. That's the point of having one half of it happen on-device.

 

Second, these governments can force Apple to do the same things today as well: do it or get banned. Having this feature or not is not going to prevent any scenario you are talking about in the future. And it's next to impossible for any of those governments to sneakily (if that's your concern) use this functionality, as the data needs to match 30+ data points and be verified by Apple themselves before it ever reaches the body that wants it.

It's a lot easier for governments to force you to misuse a system that already exists than it is for them to force you to create a new system from scratch. And I don't have a problem with them scanning iCloud; it's their server hardware. I have a problem with them doing the scanning on my device instead of on their server, because it sets a dangerous precedent. And this is not a "feature" lmao, it's literally something designed to put iPhone users in jail. Right now it may only be targeting pedos for jail, but it would be exceedingly trivial to expand it to other groups, which is why everyone but diehard Apple fanboys has a problem with it. Also, the detection threshold doesn't matter; Apple can set it as low as they want, and there's literally nothing stopping them from setting it to 1 other than "lol just trust me bro".

And finally, two wrongs don't make a right, Google's shady behavior doesn't excuse Apple's shady behavior.


20 hours ago, deblimp said:

And that is still end-to-end encrypted, unlike other options for backing up data…

iCloud doesn't use E2E encryption. It's encrypted in transit and possibly encrypted at rest with encryption keys Apple holds, but otherwise, no, it's not E2E encrypted.

 

Law enforcement routinely gains access to iCloud through subpoenas and search warrants, which would yield next to nothing if E2E encryption were used.



Also, side note: whenever a company "clarifies" a policy or something they said, it's usually rewriting what they originally said entirely because of backlash they didn't expect to receive.

 

In this case, one of the problems with the whole Apple CSAM thing that I don't see being addressed by Apple or those who agree with Apple on this is: what is the limiting principle?

 

In other words, what's stopping Apple from repurposing this technology for other things, like scanning local files to find out whether you dislike Apple, whether you're planning to switch to Android, whether you are for or against privacy, what type of porn you might be into, etc.?

 

One of the problems I and others who disagree with Apple on the CSAM issue see is that there is no limiting principle. In theory there is nothing stopping Apple from expanding spying on your phone to a greater and greater level using this tech for other purposes despite claiming to support privacy.

 

Another major issue people have with this whole Apple CSAM situation is that people own their phones, or so we thought. If Apple continues on this path and implements this, it will become crystal clear that you don't own your phone. That is an idea that is deeply disturbing to many people.



1 hour ago, SeriousDad69 said:

It's a lot easier for governments to force you to misuse a system that already exists than it is for them to force you to create a new system from scratch. And I don't have a problem with them scanning iCloud; it's their server hardware. I have a problem with them doing the scanning on my device instead of on their server, because it sets a dangerous precedent. And this is not a "feature" lmao, it's literally something designed to put iPhone users in jail. Right now it may only be targeting pedos for jail, but it would be exceedingly trivial to expand it to other groups, which is why everyone but diehard Apple fanboys has a problem with it. Also, the detection threshold doesn't matter; Apple can set it as low as they want, and there's literally nothing stopping them from setting it to 1 other than "lol just trust me bro".

And finally, two wrongs don't make a right, Google's shady behavior doesn't excuse Apple's shady behavior.

How is it 'easier' for governments to force anything? Do you have any examples? You're just making this up as you go for argument's sake.

On-device scanning is done as part of the pipeline for uploading to iCloud. It does not happen if iCloud backup is disabled. The matching result just gets attached to the image while it gets uploaded to iCloud. That's how the tech works.

 

Also, having the scanning happen only on the server side means Apple will always have access to the iCloud data. I have a strong feeling that this implementation exists so that they can encrypt iCloud while still not protecting people who possess CSAM. But that's just a theory, so it holds no water for now.

 

On-device scanning also means that Apple would have to issue specific updates and instructions to billions of active Apple devices, which actually makes it much harder for any government to convince or force Apple to deploy a change, as opposed to a server-side change that nobody would know anything about.

 

Apple changing the threshold to one image, or adding more than CSAM, can't really be argued at this point, because it's just like saying that Google could release an Android update that always keeps the microphone and camera turned on, or that Microsoft is going to log every single thing you do on Windows, etc. None of these companies are stupid enough to obliterate the reputation they've built up over the years, so this argument is just grasping at straws.

 

If it were the case that Apple had started on-device scanning and automatic flagging of photos stored only locally on your iPhone, then yes, that would be worrying. Processors and NPUs on phones have been really good for a while now, and everyone knows that on-device scanning is not an impossible task. So all this time, if any governments had wanted to, they could've forced companies to do their dirty work. This feature does not change anything in that regard.


13 hours ago, RejZoR said:

How do you know for sure? The thing isn't open source. There can easily be a mechanism that does something else and detects image gov wants flagged and pings Apple servers among bunch of pings it already does. Who will ever know? And before anyone says "but muh Android is open source". Android is, Google's crap bolted on top of every single one of them out of factory isn't. And it's the same shit. It's closed source and no one but Google can know what they are scanning, tagging and pinging back. Unless you're one of those masochists who use Android that they inspected and compiled themselves and don't use Gapps. In which case one might just use a brick because it'll be about as convenient as such phone. Source: been there, done that. Which is why this needs to be fought proactively. Kids protection can and is done in other different ways and this just isn't one of them because it sets a dangerous precedent for mass surveillance.

 

Especially given how easy it is to disable it by not using iCloud which makes you question its actual purpose. Who is this protecting then if people can just disable it and be on their way? What's the point of its existence then (other than some other hidden motive they aren't telling us about)? This is why I'm questioning it. It's just too easy and it's what I'd do if I'd want to throw a bone to naysayers and make them let it get rolled out. And it reeks exactly of that.

Didn't someone find that every time they opened an image file, a few packets would go to Microsoft with the image's name? Pretty sure that happened.


interesting

 

This would mean Apple has never messed with (i.e. decrypted and scanned for CSAM) users' photos in iCloud Photos. (Other companies did exactly that.)

The few pics they scanned were from that very specific scenario (emailed out of iCloud webapp).

Hence the ridiculously low number of CSAM incidents reported (265 in 2020).

They never spontaneously mass scanned iCloud Photos for CSAM.

They never decrypted iCloud Photos to look for CSAM.

And now, thanks to the on-device pre-labeling, they will continue to respect law-abiding accounts' privacy by not decrypting/scanning their pics once they're on Apple's servers.

Makes sense.

 


  • Unlike Microsoft, Google, and Facebook, which perform CSAM detection solely in the cloud, Apple performs on-device detection, where a hash match is computed for every photo against a local CSAM database stored on the phone.
  • The CSAM hashes come from NCMEC.
  • A safety voucher holding the match information is uploaded to iCloud with the photo.
  • If an account accumulates 30 matching safety vouchers, the Apple ID is flagged and reviewed by a human moderator.
  • If the uploaded safety vouchers contain child porn, the owner of the Apple ID is reported to the authorities (see the rough sketch of this flow below).
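
A rough, self-contained sketch of that flow in Python, as I read the bullet points above. SHA-256 stands in for NeuralHash, a plain set stands in for the blinded hash database, and the voucher encryption is omitted entirely, so treat it as a cartoon of the mechanism rather than Apple's actual design.

```python
import hashlib
from dataclasses import dataclass

THRESHOLD = 30  # the roughly-30-match figure that has been cited

@dataclass
class SafetyVoucher:
    photo_id: str
    is_match: bool  # in the real system the device can't read this; only the server can, past the threshold

def toy_neural_hash(image_bytes: bytes) -> bytes:
    return hashlib.sha256(image_bytes).digest()   # stand-in for the perceptual NeuralHash

def on_device_step(photo_id: str, image_bytes: bytes, csam_db: set[bytes]) -> SafetyVoucher:
    """Runs only for photos already queued for iCloud Photos upload."""
    return SafetyVoucher(photo_id, toy_neural_hash(image_bytes) in csam_db)

def server_side_check(vouchers: list[SafetyVoucher]) -> str:
    matches = sum(v.is_match for v in vouchers)
    if matches >= THRESHOLD:
        return "account flagged -> human review -> report to the authorities if confirmed"
    return "below threshold: nothing is flagged or reported"

# An account uploading 1,000 unrelated photos never crosses the threshold.
csam_db = {toy_neural_hash(b"known-bad-image-%d" % i) for i in range(100)}
vouchers = [on_device_step(f"photo{i}", b"holiday pic %d" % i, csam_db) for i in range(1000)]
print(server_side_check(vouchers))   # below threshold: nothing is flagged or reported
```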

If you ask me, the child porn protection feature in iMessage is actually a good parental control feature if the detection is performed after the message is received. If Apple had just launched that and not the iCloud CSAM detection, the blowback would have been less. Nonetheless, I still stand by the view that the iMessage feature is good protection against disgusting pedophiles and incels.


