
Apple Clarifies New iPhone Child-Protection Features

deblimp

Summary

Shockingly, it wasn't as bad as people were fear-mongering about last week.

 

Quotes

Quote

This is in no way a backdoor. 

 

The way this is being described in the media, including on last week's WAN Show, totally misrepresents how the feature is being implemented. The scanning for CSAM only happens to images stored in iCloud. Google, Facebook, and Microsoft already scan for the same images on their servers. Apple is scanning the images on the phone rather than on their servers, but only images about to be uploaded. The reason Apple is doing it this way is that it requires the database of images being searched for (explicitly, the list of neural hashes) to be on the device. This allows anyone to verify that Apple is doing what they claim to be doing, in contrast to the approaches of Google et al., where the scanning is done server-side and it is impossible to verify what they are searching for.
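
To make the mechanism a bit more concrete, here is a deliberately simplified Python sketch of the client-side idea described above: only images entering the iCloud upload path are hashed and checked against a database shipped on the device. All names here are hypothetical, the hash function is a stand-in (the real NeuralHash is a perceptual hash that survives re-encoding), and the real system blinds the match result so the phone itself cannot tell whether an image matched.

import hashlib

def neural_hash(image_bytes: bytes) -> str:
    # Stand-in only: the real hash is perceptual and survives resizing or
    # re-encoding, unlike this cryptographic hash.
    return hashlib.sha256(image_bytes).hexdigest()[:16]

# Hypothetical on-device list of known-CSAM hashes, identical on every device,
# which is what makes the claim auditable.
KNOWN_CSAM_HASHES = {neural_hash(b"known-image-1"), neural_hash(b"known-image-2")}

def prepare_icloud_upload(image_bytes: bytes) -> dict:
    # The check runs only here, in the upload path; a photo that is never
    # uploaded never reaches this code.
    h = neural_hash(image_bytes)
    matched = h in KNOWN_CSAM_HASHES
    # In the actual design the result is wrapped in an encrypted "safety
    # voucher", so neither the phone nor a single voucher reveals a match.
    return {"image": image_bytes, "safety_voucher": {"hash": h, "matched": matched}}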

 

Importantly, none of this applies if you choose not to use iCloud. 

 

 

Sources

https://www.wsj.com/video/series/joanna-stern-personal-technology/apples-software-chief-explains-misunderstood-iphone-child-protection-features-exclusive/573D76B3-5ACF-4C87-ACE1-E99CECEFA82C?mod=hp_lead_pos7

 

 

 


Saying it is not a backdoor doesn't change the fact that it 100% is a backdoor, and the whitepaper on how it works demonstrates exactly that.

 

 


 

 

 


31 minutes ago, Forbidden Wafer said:

Says who? Apple? 
 



 

And anyone who understands how the technology is going to work.


Not that I'm particularly concerned about them scanning files that are marked for upload anyway - but it's great that this only applies to people who use iCloud to store images.

 

For myself, I use OneDrive to backup all my photos from my iPhone because I pay for O365 so I get 1TB of Cloud Storage (vs the 5GB you get for free with iCloud). I do use iCloud to backup the phone itself - just not the photos/videos.

 

39 minutes ago, Curufinwe_wins said:

Saying it is not a backdoor doesn't change the fact that it 100% is a backdoor, and the whitepaper on how it works demonstrates exactly that.

This is a bold claim. Can you elaborate more about this?


 


10 minutes ago, dalekphalm said:

Not that I'm particularly concerned [...] I do use iCloud to backup the phone itself - just not the photos/videos.

 

And that is still end-to-end encrypted, unlike other options for backing up data…


The main issue is client-side scanning, which is now used for "think of the children" and can later be selectively enabled to tag content and ping back when it's found (which couldn't be done server-side unless users actually uploaded the images). The fact that Apple has to participate in PRISM makes it all even more concerning. Maybe not so much for users outside the US, but still, it's creepy as f**k. And for what? So they might maybe, possibly catch a single pedo who wasn't careful enough? Meanwhile, pedo rings run circles around the agencies that actually try to keep kids safe, and we're supposed to believe Apple is going to save the day now? Come on.


1 hour ago, jaslion said:

It's not a backdoor.

 

It's a back sliding door. Much fancier, and the Apple way.

 

Nobody has ever had a sliding door before Apple. Apple invented sliding doors. 



4 minutes ago, RejZoR said:

The main issue is client-side scanning, which is now used for "think of the children" and can later be selectively enabled to tag content and ping back when it's found (which couldn't be done server-side unless users actually uploaded the images). The fact that Apple has to participate in PRISM makes it all even more concerning. Maybe not so much for users outside the US, but still, it's creepy as f**k. And for what? So they might maybe, possibly catch a single pedo who wasn't careful enough? Meanwhile, pedo rings run circles around the agencies that actually try to keep kids safe, and we're supposed to believe Apple is going to save the day now? Come on.

Apple likely doesn't believe it will save the day... rather, this is to both cover its ass and improve its reporting figures.

 

Also, if you've read the WSJ piece, you know that Apple chose methods that would make abuses both unlikely and easier to catch. I'm sure people will remain skeptical, but the notion that Apple will suddenly start flagging, say, political dissent isn't supported by the evidence.


7 hours ago, dalekphalm said:

Not that I'm particularly concerned about them scanning files that are marked for upload anyway - but it's great that this only applies to people who use iCloud to store images.

 

For myself, I use OneDrive to backup all my photos from my iPhone because I pay for O365 so I get 1TB of Cloud Storage (vs the 5GB you get for free with iCloud). I do use iCloud to backup the phone itself - just not the photos/videos.

 

This is a bold claim. Can you elaborate more about this?

It doesn't just apply to iCloud. They also (admittedly with a separate system) scan iMessages (they would have to in order to blur the relevant images if they 'think' one is of a sexual nature). And once this system exists, there is nothing to stop countries from demanding that the hash database be changed (using the exact same mechanism) to force reporting of dissidents etc. Nor is there anything to stop the hash database or reporting servers themselves from being hacked.

 

Apple's argument has always been that they (claim to) lack the ability to fulfil various requests from government agencies, and that is the ONLY reason they are not legally compelled to comply. Now that they do have the ability to check photos against any arbitrary database of images and deliver the results to government agencies, that legal argument disappears.

 

There is nothing in the more recent "clarifications" (there are no clarifications; the whitepaper was rather clear and it wasn't misunderstood) that changes these perspectives. https://www.eff.org/deeplinks/2021/08/if-you-build-it-they-will-come-apple-has-opened-backdoor-increased-surveillance

 

Reminder that Apple has already completely handed control of data generated in China over to the Chinese authorities... https://www.nytimes.com/2021/05/17/technology/apple-china-censorship-data.html


 

 

 


19 minutes ago, Curufinwe_wins said:

It doesn't just apply to iCloud. They also scan iMessages (they would have to in order to blur the relevant images if they 'think' one is of a sexual nature). And once this system exists, there is nothing to stop countries from demanding that the hash database be changed (using the exact same mechanism) to force reporting of dissidents etc. Nor is there anything to stop the hash database or reporting servers themselves from being hacked.

But here's the thing: you've just made the very mistake Apple was trying to head off with this piece. You conflated the iCloud scanning with the iMessage feature. Two different things.

 

The iMessage action is an opt-in approach that uses AI to notify parents if their kids are sending images that might be sexual in nature. It only happens on-device, and Apple doesn't see anything.

 

And for the hash database... well, it's the product of multiple child safety groups, so unless those governments can demand that multiple groups change the system without Apple balking, it's going to stick to CSAM.



 

 


4 hours ago, RejZoR said:

The main issue is client-side scanning, which is now used for "think of the children" and can later be selectively enabled to tag content and ping back when it's found (which couldn't be done server-side unless users actually uploaded the images). The fact that Apple has to participate in PRISM makes it all even more concerning. Maybe not so much for users outside the US, but still, it's creepy as f**k. And for what? So they might maybe, possibly catch a single pedo who wasn't careful enough? Meanwhile, pedo rings run circles around the agencies that actually try to keep kids safe, and we're supposed to believe Apple is going to save the day now? Come on.

It would take a large update to what happens client-side and a change in how the phone communicates with the server; this is not something you can sneak through without security researchers detecting it.

Currently, when uploading an image, it scans that image (in the upload code) and attaches the flag to the image as it uploads it. There is no way it can tag an image that is not uploaded, since the tag is attached directly to the image being uploaded (it is not hitting a different endpoint with a flag).
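
As a rough illustration of that point (hypothetical names and URL, not Apple's actual client code): the voucher travels inside the one upload request, so there is no separate "report" endpoint that could fire for a photo that never gets uploaded, and traffic auditors would only ever see the upload call itself.

import json
import urllib.request

ICLOUD_UPLOAD_URL = "https://icloud.example/photos/upload"  # placeholder endpoint

def upload_to_icloud(image_bytes: bytes, voucher: dict) -> None:
    # The safety voucher is just another field of the upload body; in this
    # sketch the client has no other code path that reports matches.
    body = json.dumps({"image": image_bytes.hex(),
                       "safety_voucher": voucher}).encode()
    request = urllib.request.Request(ICLOUD_UPLOAD_URL, data=body,
                                     headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)  # the only network egress in this sketch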

Any change that would make it possible to flag other content on the phone would require a code change (that is, a software update) and a completely new way for the phone to inform Apple of a flag. That would be detected within minutes by the third-party security teams that audit what data the phone sends to Apple every time Apple ships an update. It is impossible for Apple to hide such an update from them.

Again, since it would require Apple to write new code, Apple can use the same defence that stood up in court before to refuse to do this.
 

As to why Apple is doing this? Simple: they want to get ahead of a law that would require them to scan for this content. Such a law would likely be worded very badly and thus could give the government the power you are worried about. By moving in advance, Apple can show how to do it without giving that power to governments, and thus stop the law from being passed in the form it inevitably would take. Getting ahead of legislation like this is common practice in tech, and it tends to produce much better outcomes than waiting for legislators to be stupid.


I've watched the whole WSJ video.

So now we know for sure there are two separate "features".

One for images. One for messaging.

 

And since OP only talked about CSAM, here's a recap of the video.


For the image one (CSAM)


It only applies if you use iCloud and have it set up to back up your images; they will scan your images before they are uploaded and after. It's a two-step process.

If you reach a threshold of something like 30 "bad images", only then does it notify Apple, and only then does Apple have access to the images that were flagged. They do not have access (according to Apple) to your other images, only the flagged ones.

For an image to be flagged, it needs to match a "known" CSAM image in the NCMEC database.

The way on-device scanning works is that images are compared against a neural hash database that is supposedly ON your device, not online. The phone doesn't know whether an image is flagged from the hash alone; it just gets a "safety voucher" tacked onto it and uploaded to iCloud. The second half of the process is done in the cloud, and only after a certain number of "vouchers" (30, from what they claim) does Apple manually review the images with vouchers on them. If they determine the images are indeed bad, your account is reported to the authorities.
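
For the cloud-side half, here is a toy sketch of the threshold idea, with made-up names (in Apple's actual design the vouchers use threshold cryptography, so the server genuinely cannot read anything below the threshold rather than just choosing not to):

from collections import defaultdict

MATCH_THRESHOLD = 30  # the figure claimed in the interview

# account id -> safety vouchers received alongside uploads
vouchers_by_account = defaultdict(list)

def receive_upload(account_id: str, voucher: dict) -> None:
    vouchers_by_account[account_id].append(voucher)

def accounts_needing_human_review() -> list:
    # Only accounts whose matched-voucher count crosses the threshold are
    # surfaced for manual review; for everyone else, nothing is reviewable.
    flagged = []
    for account, vouchers in vouchers_by_account.items():
        matches = sum(1 for v in vouchers if v.get("matched"))
        if matches >= MATCH_THRESHOLD:
            flagged.append(account)
    return flagged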

If you don't use iCloud at all, this feature doesn't affect you in any way, shape, or form (as of now; who knows if they change that later).

In theory, it shouldn't flag your pictures of your kid in the bathtub, or even new pictures from actual predators who take child abuse pictures but don't share them online with their "buddies".
Kind of a pointless feature, if you ask me. It will only catch the dumbasses who share videos and pictures of their victims on whatever underground forum for other deranged individuals.

 

 

Now for the messaging feature.


Basically, it's a feature with on-device machine learning to filter out content that may feature nudity.

This results in the image being blurred out. You can ignore the warning and still view the image if you wish. It is all "on device" and nothing is uploaded anywhere.

It's a child safety feature aimed at kids. If the kid is under 12 and they decide to still view the image, the parents will be notified (if said parents actually configured the damn thing properly).

It helps protect against grooming and is 100% different from the other CSAM feature.
They did not mention what would happen if the child tried to send a sensitive picture, whether it would prevent them or not. They did not say.
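
A rough sketch of the flow described above, with made-up names (the real classifier is an on-device ML model and none of this leaves the phone):

from dataclasses import dataclass

@dataclass
class ChildAccount:
    age: int
    parent_alerts_configured: bool  # set up by the parents via Family Sharing

def looks_sexually_explicit(image_bytes: bytes) -> bool:
    # Placeholder for the on-device classifier; always returning False here
    # just keeps the sketch runnable.
    return False

def handle_received_image(image_bytes: bytes, child: ChildAccount,
                          child_taps_through_warning: bool) -> dict:
    if not looks_sexually_explicit(image_bytes):
        return {"blurred": False, "parent_notified": False}
    # The image is shown blurred with a warning the child can choose to bypass.
    parent_notified = (child_taps_through_warning
                       and child.age < 12  # "under 12", per the recap above
                       and child.parent_alerts_configured)
    return {"blurred": True, "parent_notified": parent_notified}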


 

This feature, I am fine with.

And I hope people can enable it without needing parental settings.

Women everywhere would love this to get rid of unwanted dickpics.

 

 

Overall, not as bad as I initially thought. The iCloud one is frankly pointless and will only catch the most moronic pedos out there. As for the iMessage one, I really hope they also prevent kids from sending nudes, not just from receiving them.



19 minutes ago, TetraSky said:

The iCloud one is frankly pointless and will only catch the most moronic pedos out there.

It's not about catching pedos (Apple does not require you to provide ID to create an iCloud account, after all). It's about Apple not wanting these images stored on their servers (liability), and wanting to get ahead of future laws that will mean that, even if the data is end-to-end encrypted, you are liable unless you can show you have made a reasonable effort.

The key thing to remember is that the scanning does not run against your photo library; it happens as part of the upload-to-iCloud code path. Other things also take place during the upload process (like producing smaller thumbnail images, etc.). It is never scanning your photo library as a whole.


2 hours ago, hishnash said:

It would take a large update to what happens client-side and a change in how the phone communicates with the server; this is not something you can sneak through without security researchers detecting it.

Currently, when uploading an image, it scans that image (in the upload code) and attaches the flag to the image as it uploads it. There is no way it can tag an image that is not uploaded, since the tag is attached directly to the image being uploaded (it is not hitting a different endpoint with a flag).

Any change that would make it possible to flag other content on the phone would require a code change (that is, a software update) and a completely new way for the phone to inform Apple of a flag. That would be detected within minutes by the third-party security teams that audit what data the phone sends to Apple every time Apple ships an update. It is impossible for Apple to hide such an update from them.

Again, since it would require Apple to write new code, Apple can use the same defence that stood up in court before to refuse to do this.

As to why Apple is doing this? Simple: they want to get ahead of a law that would require them to scan for this content. Such a law would likely be worded very badly and thus could give the government the power you are worried about. By moving in advance, Apple can show how to do it without giving that power to governments, and thus stop the law from being passed in the form it inevitably would take. Getting ahead of legislation like this is common practice in tech, and it tends to produce much better outcomes than waiting for legislators to be stupid.

It would be a minuscule change client-side and an even smaller change to the server communication. Adding a second flag and a second database with the same hashing mechanism is quite a trivial change, if there is not already more than one flag in place for CSAM (which you would really expect there to be). Plus, it is self-evident that the databases themselves will need to be maintained, given that, as the current system is described, changing even the color balance or adding a sharpening filter, for example, would change the hash and thus allow evasion of the check. Given the database needs updating regardless, I have severe doubts the same defense would hold water in court. That is the serious issue: in order to take Apple at their word, you have to believe that Apple wouldn't do anything wrong, that Apple will fight governments that try to force the system to be used for other purposes, and that Apple will win those fights. None of these conditions is reasonably believable.

 

The statement that you need two sovereign jurisdictions to add to the list is not even helpful. Five Eyes is not one eye, and you can bet China would be happy to exchange database entries with Russia (and vice versa) if that let both countries track their 'undesirables'. China already has the encryption keys to all Apple data stored in the country, so that ship has sailed (oh yes, but Apple wouldn't possibly cave to government pressure...).

 

Finally, 'auditing' does not prevent abuse. It doesn't even limit abuse. It merely makes the abuse *potentially* public, and even then that is highly unlikely. Most auditing disclosures do not expose all the details of the audits themselves to the public (for obvious reasons).


 

 

 


8 minutes ago, Curufinwe_wins said:

It would be a minuscule change client-side and an even smaller change to the server communication.

Any change is a code change. And making changes to how it communicates with the server would be trivial for security researchers to detect.

 

 

9 minutes ago, Curufinwe_wins said:

changing even the color balance or adding a sharpening filter, for example, would change the hash and thus allow evasion of the check.

No, these types of hashes are designed to match content even after that type of modification. They are not hashes of the raw bytes but are based on the image content; it's not like a password hash, it's more like a similarity index. They are explicitly designed to match content even with quite large alterations.
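
Here is a toy example of what "similarity index" means, using a crude average-hash over fake pixel data (nothing like the real NeuralHash, just the principle that small edits leave the hash nearly unchanged, whereas a cryptographic hash would change completely):

def average_hash(pixels: list) -> int:
    # One bit per pixel: set when the pixel is brighter than the image's mean.
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

original = [10, 200, 30, 180, 25, 190, 15, 170]   # pretend 8-pixel image
tweaked  = [12, 205, 28, 178, 27, 188, 17, 172]   # slight colour-balance shift

# Prints 0: the perceptual hash is unchanged by the small edit.
print(hamming_distance(average_hash(original), average_hash(tweaked)))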

 

 

11 minutes ago, Curufinwe_wins said:

Finally, 'auditing' does not prevent abuse. It doesn't even limit abuse. It merely makes the abuse *potentially* public, and even then that is highly unlikely. Most auditing disclosures do not expose all the details of the audits themselves to the public (for obvious reasons).

I'm not talking about Apple auditing things; I'm talking about third-party researchers using phones in the wild to audit what the phone does. That is not under any contract with Apple, and if it were exposed that Apple was not doing what they said, Apple would become legally liable (massive court cases that would cost Apple billions of dollars). Apple could not alter what they do on users' devices without informing users; doing that would result in massive legal issues for them. And Apple can't hide changes they make to algorithms that run on a user's device (they could hide such changes if they did this server-side, which is why client-side scanning is much less likely to be abused in that way).


7 hours ago, deblimp said:

And that is still end-to-end encrypted, unlike other options for backing up data…

This is false; iCloud backups aren't E2EE'd.
https://www.howtogeek.com/710509/apples-imessage-is-secure...-unless-you-have-icloud-enabled/

The only things that are E2EEd are certain sensitive pieces of data from your backup.
https://support.apple.com/en-us/HT202303
 


1 minute ago, hishnash said:

Any change is a code change. And making changes to how it communicates with the server would be trivial for security researchers to detect.

Not if it goes to the same place, under the same processing exchanges? Maybe I'm wrong there, but I have a very hard time seeing security researchers having access to the level of packet inspection and underlying code needed to check what exactly is being done here versus an expanded version of the process. We don't know every line of Apple's current code, after all, and detection of surveillance at the cutting edge of research is nowhere near sophisticated enough to tell the difference. We can tell what type of data is sent (to a certain extent), and we certainly can detect where it goes...

 

6 minutes ago, hishnash said:

No, these types of hashes are designed to match content even after that type of modification. They are not hashes of the raw bytes but are based on the image content; it's not like a password hash, it's more like a similarity index. They are explicitly designed to match content even with quite large alterations.

Okay, but the same idea is still present: these databases will be modified and expanded regardless. Whatever the matching threshold may be, the process of evading it will be a constant tug of war. If Google hasn't gotten automated copyright matching down after all these years, I highly doubt Apple has found some immutable solution themselves. The 1-in-10^9 per-account false-report rate is much less impressive when you consider that, at 30 matches, it means they expect roughly a 50/50 shot of any individual match being accurate (obviously that isn't a 50% chance that a random photo is false-positived, but still, expecting only a 50% chance that each individual match is correct is not a statement of high faith: 0.5^30 ≈ 9e-10).
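
Spelling out that arithmetic (under the same simplifying assumption made above, that matches are independent): if the per-account false-report rate is 1 in 10^9 at a 30-match threshold, the implied per-match false-positive probability p solves p^30 = 10^-9.

p = 10 ** (-9 / 30)
print(p)          # ~0.501, i.e. roughly a coin flip per individual match
print(0.5 ** 30)  # ~9.3e-10, matching the 0.5^30 ≈ 9e-10 figure above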

 

12 minutes ago, hishnash said:

I'm not talking about Apple auditing things; I'm talking about third-party researchers using phones in the wild to audit what the phone does. That is not under any contract with Apple, and if it were exposed that Apple was not doing what they said, Apple would become legally liable (massive court cases that would cost Apple billions of dollars). Apple could not alter what they do on users' devices without informing users; doing that would result in massive legal issues for them. And Apple can't hide changes they make to algorithms that run on a user's device (they could hide such changes if they did this server-side, which is why client-side scanning is much less likely to be abused in that way).

I don't think any of those rights are enshrined in law whatsoever, even in the US, let alone most of the world, where the actual legal liability would be failure to comply with said government-enforced changes. Again, note the saga of Apple caving to PRC control over all Chinese data: first claiming to hold the keys, and then handing those over as well. https://www.nytimes.com/2021/05/17/technology/apple-china-censorship-data.html


 

 

 


LIES, LIES, LIES. If the software is on the phone, it will be abused as a back door by governments (cough, China, cough). I'm sick of Apple sitting on their high horse of privacy and security while time and time again we see exploited Macs, unpatched iPhone flaws known to their developers, and government agencies hacking their devices, all while they bend over backwards for China. I wouldn't have as much of a problem with it, since obviously other tech companies have security issues and major privacy issues too, but they continually talk about their focus on security and privacy yet violate it over and over. Look, I understand this is a wide-scale industry practice, which honestly isn't great. I'm 100% in favor of protecting kids, but privacy can't go out the window either.

 

"Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety"

 

However, I am sick and tired of Apple getting away with their lies about privacy and security; their feet need to be held to the fire. I trust them no more than any other company, period.


Windows Search's local file indexing is a backdoor, because once the system is in place it could be abused to send info about my data out to some government. Windows is not an open-source OS; we don't know what Microsoft could be pressured into by Xi. We need to ban any form of indexing, metadata, hashing, daemons, and local background processes, 'cause we can't be sure what they're actually doing or what they could be doing in 10, 50 or 100 years. It's better to have everything happen in the cloud, where notoriously no abuse can ever take place.

 

Moreover, on-device AI and neural engines are getting too powerful. I'm not comfortable with that, because it's a hardware backdoor that could allow our data to be sliced, diced, deep ANAL-ized, etc. in a split second using computing power on OUR devices, instead of at least giving these companies the inconvenience of using their own servers to comply with dictators' and government agencies' requests. We totally need dumber devices and to stifle innovation in this field. Who wants sci-fi glasses that can do amazing contextual things for you without necessarily relaying your queries to the cloud? I sure don't! Ban NPUs now before it's too late! Even CPUs and GPUs should be dumbed down for good measure, 'cause their computational power could be used for those same mass-surveillance goals. I say: let's go back to the Pentium II and feature phones!

 

/s


4 hours ago, hishnash said:

It would take a large update to what happens client-side and a change in how the phone communicates with the server; this is not something you can sneak through without security researchers detecting it.

Currently, when uploading an image, it scans that image (in the upload code) and attaches the flag to the image as it uploads it. There is no way it can tag an image that is not uploaded, since the tag is attached directly to the image being uploaded (it is not hitting a different endpoint with a flag).

Any change that would make it possible to flag other content on the phone would require a code change (that is, a software update) and a completely new way for the phone to inform Apple of a flag. That would be detected within minutes by the third-party security teams that audit what data the phone sends to Apple every time Apple ships an update. It is impossible for Apple to hide such an update from them.

Again, since it would require Apple to write new code, Apple can use the same defence that stood up in court before to refuse to do this.

As to why Apple is doing this? Simple: they want to get ahead of a law that would require them to scan for this content. Such a law would likely be worded very badly and thus could give the government the power you are worried about. By moving in advance, Apple can show how to do it without giving that power to governments, and thus stop the law from being passed in the form it inevitably would take. Getting ahead of legislation like this is common practice in tech, and it tends to produce much better outcomes than waiting for legislators to be stupid.

How do you know for sure? The thing isn't open source. There could easily be a mechanism that does something else, detects images a government wants flagged, and pings Apple's servers among the bunch of pings it already does. Who would ever know? And before anyone says "but muh Android is open source": Android is, but Google's crap bolted on top of every phone out of the factory isn't. And it's the same shit. It's closed source, and no one but Google can know what they are scanning, tagging and pinging back. Unless you're one of those masochists who uses an Android build they inspected and compiled themselves without GApps, in which case you might as well use a brick, because it'll be about as convenient. Source: been there, done that. This is why it needs to be fought proactively. Protecting kids can be, and is, done in other ways, and this just isn't one of them, because it sets a dangerous precedent for mass surveillance.

 

Especially given how easy it is to disable by simply not using iCloud, which makes you question its actual purpose. Who is this protecting if people can just disable it and be on their way? What's the point of its existence (other than some hidden motive they aren't telling us about)? This is why I'm questioning it. It's just too easy, and it's exactly what I'd do if I wanted to throw a bone to the naysayers and get them to let it roll out. And it reeks of exactly that.

