Search the Community
Showing results for tags 'privacy'.
-
Why is TikTok CEO Shou Chew claiming they have never provided data to the Chinese government when, just a couple of months ago, Linus screamed about reports that ByteDance had handed over data to the CCP? Or rather: why is Ted Cruz asking him this when it's already public knowledge that they do? I'm not following this super closely, so sorry if I've missed something and am being stupid here.
-
Bitwarden has opened their annual privacy survey: https://forms.bitwarden.com/privacy Results from previous years: 2023 2022 2021 Here's some of my predictions: ProtonMail increases their previous 73% share on the email list Aegis overtakes Authy for #1 authenticator app Joplin overtakes OneNote on the notes app list DuckDuckGo gets within 15 points of Brave on the browser list Opera falls off the browser list Kagi makes the search engine list
-
Hey everyone! I wanted to set up some security cameras outside my house, but I had some security and privacy concerns… - What should I consider when choosing a security camera firm? - Which type of camera should I buy (Wi-Fi or wired)? - Should I connect those cameras to a switch, set up a VLAN, and isolate them from the rest of the network? Any tips or recommendations would be appreciated! (^_^)
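On the VLAN question, here is a rough sketch of what port-based isolation can look like, using hypothetical Cisco-style switch commands (the VLAN ID and port number are made up; your switch's syntax will differ):

```
! Hypothetical sketch: cameras live in their own VLAN 20, so they
! cannot reach (or be reached from) the main LAN without an explicit rule.
vlan 20
 name CAMERAS
!
interface GigabitEthernet0/5
 description IP camera 1
 switchport mode access
 switchport access vlan 20
```

You would then permit only your NVR or viewing PC through the router/firewall, and could block the camera VLAN from the internet entirely so the cameras can't phone home.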
-
- security camera
- privacy
-
(and 2 more)
Tagged with:
-
I currently have a Samsung Galaxy A20, and I absolutely despise it, but due to circumstances beyond my control I've been forced to have a smartphone for the last few years. But next year I'll finally have the freedom to choose for myself, so I have been looking for phones that actually suit the minimal amount of features I want. I have been looking at ones like the Nokia 110, but the 4G version, which is the only one that really has a hope of working in North America beyond a year, still for some reason has a camera, internet, GPS, and plenty of other things which I absolutely do not want. I had thought of just opening it up and ripping out the hardware required for those functions, but I lack the requisite tools and skill. Is there anything I've perhaps missed? And why aren't there truly basic phones available outside of Europe? If only I had the money to buy a bunch of SDRs to set up my own 2G cell network, then I could just use one from 20 years ago.
-
Summary Meta is now facing another privacy hit in the EU as the crackdown on behavioral advertising expands. Quotes My thoughts I think it's a good thing that Meta/Google and others in the same category finally get controlled by rules/laws. Sources https://www.bloomberg.com/news/articles/2023-10-31/meta-faces-european-privacy-crackdown-on-behavioral-advertising#xj4y7vzkg https://nrkbeta-no.translate.goog/2023/10/31/forbud-mot-meta-om-bruk-av-persondata-utvides-til-hele-eos/?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp
-
Summary The USA's Federal Trade Commission (FTC) has started a probe into OpenAI concerning data use, data security, and liability for ChatGPT's outputs. Questions like the copyright status of the data input, and how potentially harmful the outputs are to users, are some of the topics that the probe is investigating [0, 1, 2]. OpenAI have said they will work with the FTC on this, and that GPT-4 was built using "years of safety research" [1]. Quotes My thoughts Slowly but surely, the legislative machinery is beginning to approach LLMs. This could be highly impactful for both OpenAI and other LLM companies, as it will likely set a precedent on these things. In terms of harm, we've seen numerous examples of everything from defamation [3] and making up court cases [4], to death [5]. Although OpenAI seemingly tries their best to keep ChatGPT "safe" (for a suitable definition of that word), communities have sprung up to "jailbreak" it and get around these restrictions [6-8]. It's an arms race, but one which also requires a lot of philosophical thought in terms of what is safe and in what context. (And the question of "truth" can depend on what language you are currently chatting to it in, see e.g. [12].) Another interesting thought is whether this is even a valid question to ask in terms of LLMs like ChatGPT. Another user on the forum, Sauron, pointed out [13] that the systems weren't necessarily designed with safety/alignment in mind: they were made to believably predict and generate the next word given the existing text, and at that they are excellent! [14]. But this design goal doesn't specify anything about safety. Obviously LLM companies have started caring about this; there was a whole section (Section 6) in the GPT-4 technical report dedicated to how they filtered harmful prompts [11], but it seems to be something which has been added to the spec after the fact, rather than intended from the get-go (at least to the extent we're now seeing).
LLMs and copyright is also an interesting question. Copyright and the internet/digital age has always been awkward, and now that companies are building (highly profitable) systems using potentially copyrighted data, the situation is unlikely to get better; there is a lot of money at stake. (A completely naive question, but could ChatGPT output (and maybe even input) be considered "fair use", given that it is extremely transformative?) Finally there's the question of data privacy. ChatGPT ran afoul of Italy some months ago, due to GDPR concerns [9], and various companies have tried their best to keep employees from leaking company secrets by using ChatGPT [10]. However, LLMs need this extra interactive data; it is what improves them and makes them more "natural" to interact with. I don't think what OpenAI (and other LLM developers) are doing is any worse than social media and targeted ads, people post secret information on Discord these days [15], not to mention the myriad of War Thunder leaks, and all of this data is being used to train/improve some sort of algorithm. If more privacy regulations come of this however, I will be very glad to see that. 
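Sauron's point about the design goal can be illustrated with a toy model. A bigram counter is nothing like an LLM in scale or capability, but it shares the same bare objective: given the text so far, emit a statistically likely next word. Notice that nothing in this objective says anything about safety or truth (this is my own sketch, not code from any of the cited sources):

```python
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """Count which word follows which — the entire 'model' is next-word statistics."""
    words = text.split()
    model = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        model[cur][nxt] += 1
    return model

def generate(model: dict, start: str, length: int = 5) -> list:
    """Greedily emit the most likely next word at each step."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(followers.most_common(1)[0][0])
    return out
```

An actual LLM replaces the counting with a neural network over tokens, but the training signal is the same "predict what comes next", which is why safety filtering has to be layered on afterwards.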
Sources [0]: https://www.washingtonpost.com/technology/2023/07/13/ftc-openai-chatgpt-sam-altman-lina-khan/ [1]: https://www.reuters.com/technology/us-ftc-opens-investigation-into-openai-washington-post-2023-07-13/ [2]: https://www.theverge.com/2023/7/13/23793911/ftc-openai-investigation-consumer-ai-false-information [3]: https://www.theverge.com/2023/6/9/23755057/openai-chatgpt-false-information-defamation-lawsuit [4]: https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt [5]: https://www.brusselstimes.com/430098/belgian-man-commits-suicide-following-exchanges-with-chatgpt [6]: https://www.digitaltrends.com/computing/how-to-jailbreak-chatgpt/ [7]: https://github.com/0xk1h0/ChatGPT_DAN [8]: https://www.jailbreakchat.com/ [9]: https://techcrunch.com/2023/03/31/chatgpt-blocked-italy/ [10]: https://www.axios.com/2023/03/10/chatgpt-ai-cybersecurity-secrets [11]: https://arxiv.org/abs/2303.08774 [12]: https://thediplomat.com/2023/03/will-asian-diplomacy-stump-chatgpt/ [13]: https://linustechtips.com/topic/1517888-33-46-of-amazons-mechanical-turk-workers-estimated-to-use-llms-to-automate-their-work/?do=findComment&comment=16026532 [14]: https://fortune.com/longform/chatgpt-openai-sam-altman-microsoft/ [15]: https://www.polygon.com/23683683/discord-classified-documents-leak-thug-shaker-central-jack-teixeira
-
I've been using Pi-hole for a while but do experience some issues with sites like Bungie and others that are not bad. I put my network on AdGuard DNS, set up dynamic DNS, and noticed that it doesn't block good sites that it considers bad the way Pi-hole did. I know how to whitelist on Pi-hole, but doing that for every site I browse every day becomes a pain. I've been on AdGuard for a while with no issues. So, should I keep Pi-hole or stay on AdGuard for privacy and safety reasons on my home network? Has anyone else done this?
-
Summary In a recent blog post, Discord announced that they'll be moving away from the current system of numeric discriminators, i.e. your username followed by a # and 4 digits, to a system of unique usernames. The official argument is that it will make it easier to identify your friends, and that the old system with numbers was running into issues and so needed an update. However, a lot of users are unhappy, since this introduces a sudden race to grab unique usernames, similar to Twitter or Instagram, where there previously was none and your desired handle was always available, just with some numbers on the end. Nitro users will be getting priority access to migrating to the new system, making some call this a "pay to win" migration. Concern has also been raised about potential impersonation (by sniping famous handles), account selling, and phishing and harassment to get people with short or famous handles to hand them over. Quotes Thoughts I much preferred the numeric discriminators, since they avoided the whole "Sorry, this username isn't available" issue common on so many social media platforms. The numbers were never an issue, since you could easily copy them by clicking on your account in Discord and then PM them to a friend on Steam or similar. It also meant that your display name always had a nice default value, with the option to customise it per server if need be. The new system still has display names, but now your default will have to be some mangled username, à la __l33t-sp34k__ , unless you're lucky and grab the first instance of your preferred handle. The change makes Discord's system similar to more mainstream social media like TikTok, Instagram, and Twitter, but I personally quite liked how Discord wasn't one of these. Of course Discord is still different (posts are not public-facing), but given that these other platforms have had problems with phishing and harassment over rare and unique usernames, I absolutely believe that it'll happen on Discord as well.
The argument that this is done due to system limitations also doesn't sit right with me. Surely, hashing usernames would solve the uniqueness problem on the server side, and something like Steam's friend codes could be used to connect with people. The discriminator could also be changed to part of your account hash, similar to Git short hashes (for those unfamiliar, Git allows you to refer to a specific version by the first 7 characters of the hash value, instead of the full 40), which would keep the ability to distinguish accounts by clicking on them. (I would even argue it would make it easier, since you would be more likely to have a differing letter (hashes are hexadecimal) vs just having differing digits.) Unfortunately, both on Reddit and Twitter, there seems to be little discussion or acknowledgement of the concerns raised by the community, which is hardly promising. So I expect that they'll push through with the changes regardless... Sources Discord's blog: https://support.discord.com/hc/en-us/articles/12620128861463-New-Usernames-Display-Names The Verge: https://www.theverge.com/2023/5/6/23711332/discord-username-changes-community-backlash-handles PCMag: https://www.pcmag.com/news/discords-planned-username-change-is-causing-serious-annoyance Reddit discussion about privacy concerns: https://old.reddit.com/r/discordapp/comments/13azn6c/discord_usernames_privacy_issues_a_warning_to/ Discord feedback forum: https://support.discord.com/hc/en-us/community/posts/14337329256983--Do-NOT-add-The-new-discord-usernames-system Downvoted response from Discord on Reddit: https://old.reddit.com/r/discordapp/comments/136urpb/discord_is_removing_tags_and_replacing_them_with/jiqdwyq/?context=3 Darknet Diaries on harassment over unique handles: https://darknetdiaries.com/episode/97/ https://darknetdiaries.com/episode/106/
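The Git-style short-hash discriminator I'm suggesting could look something like this. This is purely my own illustration, not Discord's actual scheme, and the account ID in the usage example is made up:

```python
import hashlib

def short_discriminator(account_id: str, length: int = 7) -> str:
    """Derive a short, stable discriminator from a hash of the account ID,
    the way Git abbreviates a 40-character SHA-1 to its first 7 characters."""
    return hashlib.sha1(account_id.encode("utf-8")).hexdigest()[:length]

# Two accounts sharing the display name "alex" would still show distinct
# tags like alex#1f3a09c vs alex#b82e4d1, and hex characters (0-9, a-f)
# differ more visibly at a glance than four decimal digits.
```

Collisions on 7 hex characters are possible at Discord's scale, but just as Git does, the length could be extended only for the rare accounts that collide.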
- 29 replies
-
- discord
- social-media
-
(and 1 more)
Tagged with:
-
I just noticed a serious privacy issue with Facebook. When just viewing someone's profile on the Android app, it automatically sent friend requests to them. At first I thought it was human error, but then I couldn't locate the "friend request" or the message to cancel it. Other users on Reddit have reported the issue. Now I have all these unsavoury friend suggestions. Check that your Facebook hasn't been affected; hope this information helps. Android Authority: https://www.androidauthority.com/facebook-send-request-bug-3324254/ Reddit post:
-
I was interested in Waves Inc.'s ClarityVx VST audio plugin for its excellent noise-cancellation ability. Waves gave people the option to demo it by downloading their plugin manager and creating an account. However, when I saw their privacy policy (https://www.waves.com/legal/privacy-policy), I immediately got cold feet. They collect all kinds of data on you and share/sell it liberally, including to Meta! When I saw that they have a "checkout as guest" option on their website, I actually felt a little hopeful: a direct exchange of money for software. Instead, I was directed to download the exact same #@$@#!$ installer as before, create an account, and sync the license on my machine. All for a 50KB VST plugin. But all was not lost. Being a resident of California, I have the CCPA (California Consumer Privacy Act) at my disposal. The CCPA is great because it grants me several rights over my personal data, including the right to have it deleted, the right to prevent its sale, and the right to not be discriminated against for using these rights. I hoped I could use the CCPA to undo any damage the software had wrought. According to their privacy policy, they wanted me to send a form letter to privacy@waves.com to invoke my CCPA rights, so I let ChatGPT do the typing: After a few days, I received a response that I never expected: they want me to cancel the license I've already paid for. They did not satisfy *any* of the CCPA rights that I requested. My follow-up letter was brief but to the point. This time, it was personally written. I honestly can't believe they would add CCPA rights to their privacy policy without actually fulfilling them when requested. TL;DR: I told them to delete my personal data; they told me to delete my account (along with the license that I paid for).
-
UPDATE: Thankfully I'm a pack rat, and I found an old Huawei phone from three upgrades ago that had my authenticator app set up. I'm back in, but I'm still not impressed with the whole ordeal, and technically this is still an issue for many others. I apologize for the bait-clicky title, but I need eyes on this from any of you who can help or may have contacts with the right people at Meta/Facebook who can resolve this issue. TLDR: A factory reset on my phone caused me to lose access to my Google Authenticator 2FA. You cannot recover your authenticator codes because they are not tied to your Google account; they are tied to the device (which no longer exists due to the factory reset). While I was able to restore and set up new authenticator codes for all other websites and apps, Facebook's recovery process is bugged and there is no way to recover my account. This has caused me to lose access to my 16-year personal FB account as well as Facebook Ads management on the several businesses I oversee and manage. FB self-help articles, troubleshooting, and even the FB Business Meta Pro team have been unable to resolve my issue. The ASK: I need to contact a real person from Facebook's login team who can verify my identity and remove the 2FA authentication from my account so that I can restore it. Hopefully, this can also serve as a PSA to anyone out there who either loses their phone or doesn't transfer their Google Authenticator data to a new device. At this point I'm reaching out to the community on the off chance that some of you know someone at Facebook on the login/security team who can help. Or maybe the team @LinusTech can shed some light on this glaring issue. I can't be the only person who's ever lost access to their Google 2FA. EDIT: SO F**** ironic: YESTERDAY, April 24th, Google announced that Google Authenticator is finally adding account syncing for 2FA codes. This sadly still doesn't help me resolve my issue today.
Details Personal account of 16 years: https://www.facebook.com/xXHappyJonXx/ This account has Facebook 2FA as well as SMS and email verification set up. This account also manages the following business accounts and pages: https://www.facebook.com/HPAMotorsports - Currently has ads running. Our company's customer service ticketing program is also tied to the FB Messenger chat, which needs to be reauthorized. https://www.facebook.com/cotybuilt https://www.facebook.com/GoVADpro https://www.facebook.com/www.moosh.media - My personal freelance business https://www.facebook.com/LUSAsoccer - The local non-profit community club As you can see, there's a lot of history, data, time, and money tied up in this personal account, and I have essentially been locked out of my personal data. The Cause Of The Issue I brought my phone to a Samsung service centre to get my screen fixed. I anticipated needing to factory reset my phone, so I did a cloud backup at the time to save my data. What I failed to do was transfer my Google Authenticator app to another device. While I was able to restore my other Google Authenticator accounts using other methods of identification (SMS/email, etc.), Facebook's account recovery codes via SMS and email only allow you to reset your password. If you have 2FA enabled, it still asks for the 2FA code, even if you have access to your email and phone number. See below. View Full Size Image: https://drive.google.com/file/d/1q3-UmtYXXsXscjDfz80hJFhD4orMr4Zg/view?usp=share_link On PC Full size image: https://drive.google.com/file/d/1K0k7qww-mFqtgibCztkl2qVjJS8Tiltc/view?usp=share_link I have tried every article to recover my account, but all of those lead to setting up a new password. It does not remove your 2FA access, so logging in with a new password just presents you with the request for a code as you see above. In my efforts to troubleshoot I have also tried "My Account is Hacked" to see if I can re-secure my account.
The closest I got was this page here, where you are asked to provide a different email than the one tied to your Facebook account to secure it. I provided the email, got a security code to input, and was greeted with this fantastic /s page: I've seen other people online have options to upload an ID at this point to provide identity confirmation; however, for me, I have NO OPTIONS to choose from. I have reached out to our Meta Pro team that handles the advertising, where I was actually able to speak to someone on the phone. They said they would "open a case" and get back to me in 48-72 hours. I never heard back. I've tried multiple times again via email, explaining everything above, and I keep getting the same responses.
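For anyone wondering why the codes died with the phone: TOTP apps like Google Authenticator derive every code from a shared secret that (until the recent syncing update) lived only on the device. Here is a minimal sketch of the standard RFC 6238 algorithm these apps implement — not Facebook-specific code — which shows that if you back up the secret, you can regenerate the exact same codes on any device:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 of the current 30-second time counter,
    dynamically truncated to a short decimal code."""
    key = base64.b32decode(secret_b32.upper())
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))           # 8-byte big-endian counter
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the output is fully determined by secret + clock, losing the device without a backup of the secret (the QR code / setup key) makes the codes unrecoverable, which is exactly the trap I fell into.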
-
Summary A group of Tesla employees shared videos and images recorded by customers’ car cameras on an internal messaging system between 2019 and 2022, according to interviews with nine former employees by Reuters. Some recordings featured Tesla customers in compromising or embarrassing situations, while others were of crashes, road rage incidents, dogs, and funny road signs. Several former employees said that the program they used at work could show the location of recordings, potentially revealing where a Tesla owner lived. Tesla did not respond to Reuters’ detailed questions. Quotes My thoughts Holy FUCK. Though I can't say I'm that shocked by this. It's expected that they are collecting data/footage to train the AI model; however, some of the footage is from inside people's garages while the car is 'off'... I'm not sure how much training data could really be gathered from those situations... Also, Tesla claims the footage is 'anonymous', but it's pretty simple for someone to figure out who the footage belongs to if it has the location (assuming it's parked at their own home). This is just another reason I hate putting cameras on everything, especially something that's inside your home and always connected to the internet. Sources https://www.reuters.com/technology/tesla-workers-shared-sensitive-images-recorded-by-customer-cars-2023-04-06/ https://petapixel.com/2023/04/06/tesla-workers-shared-sensitive-images-captured-by-owners-cars-report/
-
Hello everyone, I'll be getting a 2-in-1 soon (probably a Zenbook Flip) and I'll be using it for school and home use. I learn by handwriting and drawing diagrams, but keeping THICC binders of notes and work is weighing me down. Not to mention needing to screw with a printer to scan to my computer. The easy option is OneNote, but I don't like it for a few reasons: I don't want a Microsoft account I want to be offline without issues I don't want to pay a subscription (I don't mind one-time fees) I DEFINITELY don't want to use OneDrive or any other cloud service Where does that leave me? I won't be purchasing hardware until I know which software to use. Thanks for your time
- 1 reply
-
- handwriting
- notes
-
(and 3 more)
Tagged with:
-
Summary Apple redirects Google Safe Browsing traffic through its own proxy servers to prevent disclosing users’ IP addresses to Google in iOS 14.5. Apple’s privacy push is much more widespread than it seems on the surface. A perfect example is the new privacy feature in iOS 14.5 Beta 1 (V2), which redirects Google Safe Browsing traffic through Apple’s own proxy servers to enhance users’ privacy and not let Google see your IP address. Quotes My thoughts Well, if this increases users’ privacy, I’m all for it. Apple already has my IP, among other private information, and preventing third parties from getting that info is good, I reckon. How Google will take this “intervention” will definitely be interesting. Sources https://the8-bit.com/apple-proxies-google-safe-browsing-privacy/
-
Hi, my neighbor has offered me his internet over a LAN cable. If I connect a separate router behind it, is there a security risk?
-
Tim Berners-Lee’s plan to save the internet: give us back control of our data World Wide Web inventor Tim Berners-Lee takes on Google, Facebook, Amazon to fix the internet That sounds like a silly conclusion, to me. Yes, personal data shouldn't be looked at first as monetary value, because it isn't primarily a monetary thing, and it has no monetary value if it isn't for sale per the will of the person who owns it. But no, personal data isn't owned by society; each individual person's personal data is owned by them. Suggesting it is owned by society sounds like a way to concede that it doesn't belong to corporations while rationalizing that industries should have some means of access to it all the same. In general, I think that Tim Berners-Lee's plan is a positive idea, compared to where things are. But I think that most of that data should be prohibited by law from being collected and stored in the first place. And what happens if Inrupt's data stores are hacked? There goes the privacy and personal control of one's data that the idea was meant to protect. I think the issue needs to be addressed at its source rather than with a coping mechanism which I think would be destined to fail. At some point, likely even from the outset, governments would have gained access to that vault of information, and the public likely won't know about it when it happens. The business of harvesting data is a dirty, illegitimate, predatory, and hypocritical one. It is making money through the exploitation and manipulation of people and is a crime - and not just a moral one (though, it is definitely a moral one): What do you think would happen if you were to hook a Bitcoin mining operation up to the electricity supply of some business you don't own, without their permission and without compensating them?
If they found out, they would have you arrested, and if the operation was significant, they'd sue you, and would probably get to seize any profits you'd made while using their electricity. There's not really even a need to frame things in cryptocoin-mining terms. Imagine that you decided to start using various businesses' computers, electricity, employee activities, software, and housing as data farms for your own project, just like they're doing with our PCs. The same thing's going to happen: you'll be arrested and charged, probably sued, and any profits you made will probably be seized and given to the corporation. But tech companies are doing the same thing to us, and they're not being punished for it in any way. In generating and harvesting data from our particular usage and via interaction with our devices, tech companies are using our electricity, our hardware, our storage and management of our hardware, our software, our time, and our personal activity for their own commercial purposes, and all without a commercial license. They're stealing. And it's crazy that it's been allowed to progress this far, and that the public is in a stupor and doesn't understand that this isn't right. Somehow, the public, governments, and regulators have been lured into a stupor and coma regarding the topic just because tech companies started doing these things before there was any understanding of them, and so now people feel like it's just the way things are. But that's like thinking that stealing what isn't yours and slavery are just the way things are. Tech companies whose business is mining and selling data are stealing from us in the same way that a politician who steals millions of dollars out of the treasury is stealing from their constituents. Even though the millions of dollars they stole amount to a few dollars, or even less than a dollar, per person, the smallness of the theft from each individual doesn't make it not stealing.
Some methods to reduce the amount of data being stolen from you and used for commercial and manipulative purposes include:
- Using DuckDuckGo for web searches. They don't share or store any personal data. DuckDuckGo also has a tracker-blocking privacy plugin for Chrome, Firefox, and Safari, as well as mobile browsers.
- Using only an Enterprise or LTSC edition of Windows 10, as they allow lowering the amount of data Microsoft takes from you beyond what Home and Pro allow. And Microsoft is tracking every mouse-click you make in Windows 10.
- Using ProtonMail for your email. It has end-to-end encryption, and your inbox is encrypted with a user encryption key so that ProtonMail can't view it, either.
- Installing the Electronic Frontier Foundation's browser plugin Privacy Badger [2] [3], which blocks a lot of tracking scripts.
- Possibly using an ad-blocker to reduce the number of tracking and advertisement scripts websites can run when you browse them.
- Making use of the Firefox browser's built-in Facebook-tracking-blocking feature.
- Using your iPhone's built-in option to block all tracking by apps.
- Setting your DNS resolver to Cloudflare's free 1.1.1.1 service. This prevents your ISP from recording your activity and searches and selling them. Cloudflare doesn't collect or sell any of your data and doesn't record any IPs. Cloudflare also has a mobile app that sets your mobile internet usage to its 1.1.1.1 service. Cloudflare says of their 1.1.1.1 service:
If you know of additional methods to secure your data and privacy, please share them.
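As a concrete example of the DNS tip above, on a Linux box you can point the resolver at Cloudflare by editing /etc/resolv.conf (note: DHCP clients and NetworkManager often overwrite this file, so for a persistent change set the DNS servers in your network manager or on your router instead):

```
# /etc/resolv.conf — use Cloudflare's public resolvers
nameserver 1.1.1.1
nameserver 1.0.0.1
```

1.0.0.1 is Cloudflare's secondary address, used as a fallback if the primary is unreachable.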
- 19 replies
-
Hi, if you go to your profile, you can see in the "About <your user>" section the current site you are visiting. I wonder: are these little snippets logged? And how can I hide them?
-
I'm trying to decide whether I want to stick with Android for privacy or go to Apple's iPhone. Please tell me what you know they do to support privacy. Thanks in advance!
-
I've been daily driving the Teracube 2e running /e/ OS (degoogled) for a month now and thought I'd share my experience. Background: So I was running a Blackberry Z30 for 7 years, and since they've pulled the plug on the servers, I needed a new phone. I'm a very privacy-focused kind of person, so I didn't want to just hand over all my data to google or apple. @adarw suggested /e/ OS (among others). After a bit of research and reading reviews, I settled on /e/ as it seemed like the best fit. The idea behind a "degoogled" phone is that they take the Android Open Source Project and strip out the code that sends info back to google, so you're running Android but not sending all your data back to google. I bought my Teracube 2e directly from the /e/ Foundation pre-loaded with /e/ OS, but I understand trying to install /e/ OS yourself can be a bit of a pain. My experience: In a nutshell, if you use your phone primarily as a phone (talk, text, some web browsing), then the Teracube 2e running /e/ OS is great. Nearly all of my gripes come from the fact that I've been using BB10 for the better part of a decade, so I'm just not used to using a different UI. However, if you're used to a more fully featured phone and rely extensively on apps, games, and a good camera, then it's probably not for you. As I said, I use my phone primarily as a phone, and nearly all of my data usage comes from using the browser (scrolling the meme sites, watching youtube, logging into email, etc). I know Anthony poo-pooed the phone in his Short Circuit unboxing and I see why; it's certainly not a phone that most people would want, but the limitations of the hardware don't really bother me and I understand the limitations of the software, especially with such a small team behind it. I can live with these, but whether you can is something only you can answer. But, keep in mind that I am a weirdo, largely shunned by the day-walkers.
Pros: -cost (I paid $300 CAD for mine, shipped) -dual sim -unlocked -removable battery -microSD card slot (up to 128 GB) -privacy-focused (just *how* private /e/ OS is is something that I can't really speak to. In theory it sounds great, if the /e/ Foundation is to be believed) -headphone jack -physical buttons -minimal holepunch camera in display -has its own app store that gives each app a privacy score -it's not fast, but it's fast enough (for me at least) Cons: -battery life is mediocre at best (light use results in about 40%-50% battery drain over 24 hours) -double tap to wake can't be disabled (it's a known bug, hopefully it gets fixed) -camera has some serious lag time before it opens the camera app -camera takes a few seconds after pressing the button to take a photo -camera picture quality is meh -speakers are meh -app store has a limited selection of apps, most require some tracking Conclusion: The Teracube 2e running /e/ OS fills the privacy-focused niche pretty well. I wouldn't expect it to see large-scale adoption, but I don't think it was ever intended to. But, if you want to try and reduce your digital footprint and can live with a more bare-bones mobile experience, then it's worth checking out. I, for one, can live with its limitations; it does what I need it to do. **note: I have ordered the Pinephone Pro to try as well. I'll be posting my thoughts on that later this year. It'll be interesting to see how it compares.
-
As the Five Eyes (the governments of Australia, Canada, New Zealand, the United Kingdom, and the US) continue to push for the abolishment of encryption, a group of companies and organisations are pushing back, under the collective called the Global Encryption Coalition. And October 21st is what they are calling Global Encryption Day. Partners of the Global Encryption Coalition include the EFF, American Civil Liberties Union, Cloudflare, Mozilla, Open Media, Digital Rights Watch, various VPN providers, Proton Mail, and many others. Global Encryption Day - Global Encryption Coalition EFF: On Global Encryption Day, Let's Stand Up for Privacy and Security Privacy and security are essential not just for technological reasons, but also for societal and personal ones, including mental health. And if you're not already using privacy-friendly means to do things online, maybe today's a good day to check out what the possibilities are. Open Media petition telling the Five Eyes to stop attacking encryption: https://action.openmedia.org/page/53247/petition/1
- 27 replies
- Tagged with: encryption, privacy (and 2 more)
-
I don't exactly like it when other people can see what websites I visit; it gives away a lot about my interests and personality, and I'm a pretty privacy-focused guy overall. So I made a homemade VPN server using PPTP on a TP-Link router. It works every time, except on my school's public network, which blocks the Point-to-Point Tunneling Protocol. Is there any way I could hide the connection or otherwise fix this? Could a proxy server solve the problem? Here's a screenshot of the configuration I use:
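One common workaround when a network blocks PPTP (which uses GRE and TCP port 1723, both easy to filter) is to tunnel traffic over a port the network must leave open, such as 443. As a minimal sketch of the proxy-server idea, assuming you have SSH access to a machine at home (the hostname `home.example.com`, the user `user`, and the choice of port 443 are all hypothetical placeholders, and the home router would need to forward that port to the SSH machine):

```shell
# Not PPTP: this opens an SSH dynamic (SOCKS5) tunnel instead.
# -D 1080 : listen on local port 1080 as a SOCKS proxy
# -N      : no remote command, tunnel only
# -C      : compress traffic
# -p 443  : connect on 443, which firewalls usually leave open for HTTPS
ssh -D 1080 -N -C -p 443 user@home.example.com

# Then set the browser's SOCKS5 proxy to localhost:1080,
# and web traffic is carried through the encrypted SSH tunnel.
```

This is only a sketch; some networks use deep packet inspection that can still flag SSH on port 443, but it survives simple protocol/port blocking far better than PPTP does.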
-
Verizon rolls out massive opt-out data collection scheme.
OrangeSunshine posted a topic in Tech News
Summary
Verizon has rolled out a massive data collection scheme called "Verizon Custom Experience." Its goal is to create a custom user profile for everyone on their service, and it does so by collecting information stored locally, such as call and text records, the websites you visit, and the apps on your phone and what you use them for. The service is opt-out rather than opt-in. One can, however, opt in to the higher tier of the program ("Custom Experience Plus"), which also constantly monitors your location and collects information about any private networks you may use. Verizon only notified customers in easy-to-miss emails or a single text message.

Quotes

My thoughts
Just when my hatred of Verizon wanes to a dull ember, they do something completely inane, from a customer service perspective at least, that sparks ire. This program that they are marketing as a "service" is nothing but a thinly veiled attempt to track literally everything you do. The only thing you get in return is an offer from Verizon for a service, such as Disney Plus, from companies that work with them. Essentially, this looks like a move to get a slice of that sweet data-selling pie. The fact that they are not simply handing the gleaned information over to a third party implies they are trying to carve out their own niche in the market: one where they are the gatekeepers of data, much like Google or Facebook, that other companies have to work with to sell you ads or promote deals. This makes me far more uneasy than Facebook's or Google's data collection because it is much harder to mitigate. Fortunately, it is still possible to opt out, and I urge everyone who remotely cares about privacy to do so.
Sources
https://www.verizon.com/support/verizon-custom-experience-programs-faqs/
https://www.wired.com/story/verizon-user-privacy-settings/
https://www.inputmag.com/culture/verizon-customers-might-want-to-check-their-privacy-settings-asap
https://www.howtogeek.com/772439/verizons-custom-experience-is-data-collection-in-disguise/
https://www.msn.com/en-us/news/technology/verizon-tries-to-defend-collecting-browsing-data-on-its-network/ar-AARVsJK
-
Summary
Last week, the Australian Parliament passed the Surveillance Legislation Amendment (Identify and Disrupt) Bill 2020, rushing it through in only 24 hours. This revision to the Australian government's data surveillance laws allows law enforcement to modify, add to, copy, or delete the data of anyone who is a suspect in an investigation, as well as quietly take control of that person's online accounts. In addition, it compels Australian businesses, sysadmins, etc. to comply with any such requests; those who fail to do so could face up to 10 years' imprisonment. While these new powers do require a warrant in most cases, the warrants are not issued by judges, and an "emergency authorization" procedure allows these requests to be executed without any warrant at all.

Quotes

My thoughts
Everything about this law screams 'bad idea', especially since Parliament went to the trouble of rushing it through in only 24 hours. The warrants required to exercise the powers of this law can be authorized outside of the judicial system, and can be circumvented altogether by an "emergency authorization". I see almost no way that bad actors won't take advantage of these powers: they would enable the destruction or planting of evidence, as well as stalking at a level impossible in much of the world. The powers afforded by this law are far too broad and have nowhere near enough checks on them.

Sources
https://tutanota.com/blog/posts/australia-surveillance-bill/
https://thenextweb.com/news/new-surveillance-laws-authorities-power-change-social-media-posts-syndication
- 61 replies
- Tagged with: privacy, privacy rights (and 2 more)
-
Summary
Well, that was fast. Security researchers are already blowing barn-door-sized holes in Apple's "secure" CSAM detection. Researchers at Princeton University wrote their own CSAM-scanning tool, designed exactly like Apple's and with the same end goal of preserving privacy. They said the system is extremely easy to manipulate, even going so far as to say it's as easy as swapping a file.

Quotes

My thoughts
First off, I'm 100% for protecting kids. However, this is not the answer. Allowing a backdoor this wide open to hackers and foreign governments is asking for trouble. I would advise people to tell Apple to immediately scrap their plans for this. If they feel this is what needs to be done, then they need to immediately drop the secure and privacy-focused aspects of their marketing and warn users of the security implications of using their phones in places where less-than-savory governments might look to inject malicious data into, or extract it from, users' phones.

Sources
https://www.engadget.com/princeton-university-researcher-apple-csam-oped-162004601.html