
Do We… WANT Google to Win? - May 10, 2023

 

 

NEWS SOURCES:

 

THE EMPIRE STRIKES BACK

Story 1 Google AI stuff

https://www.youtube.com/watch?v=cNfINi5CNbY

Gmail/Google Workspace AI plugins

https://www.theverge.com/2023/5/10/23718301/google-ai-workspace-features-duet-docs-gmail-io

Vertex Business partnership

https://techcrunch.com/2023/05/10/google-brings-new-generative-models-to-vertex-ai-including-imagen/

Emphasis on ethics

https://www.youtube.com/live/cNfINi5CNbY?feature=share&t=5105

Wendy’s FreshAI

https://www.theverge.com/2023/5/9/23716825/wendys-ai-drive-thru-google-llm

 

DEJA-VIEW

Story 2 Pixels

https://www.youtube.com/watch?v=cNfINi5CNbY

Pixel 7a

https://www.theverge.com/23716677/google-pixel-7a-review-screen-camera-battery

Pixel tablet

https://www.xda-developers.com/pixel-tablet-launch-post/

Pixel Fold ad leaks just before I/O

https://www.theverge.com/2023/5/10/23718344/google-pixel-fold-leak-promo-delay-dual-screen-multitasking

claims “Dual screen support coming Fall 2023” in fine print https://twitter.com/_snoopytech_/status/1656303098791378946

https://9to5google.com/2023/05/09/pixel-fold-nba-ad/

 

OUTING THE OPEN

Story 3 - India bans several open source messaging apps for “security reasons”, citing risks of supporting terrorism in Jammu and Kashmir

https://indianexpress.com/article/india/mobile-apps-blocked-jammu-kashmir-terrorists-8585046/

Already restricted the region to 2G-only cellular service, on grounds of preventing separatists from organizing

https://lmg.gg/GD0DZ

Meanwhile, India suffered a natural disaster every day for 66% of 2022

https://timesofindia.indiatimes.com/readersblog/sahil-razvii/india-has-faced-natural-disaster-every-day-in-the-last-nine-months-report-46435/

 

QUICK BITS

 

MISFITS WASN’T THE SAME AFTER SEASON 2

Fossil drops support for Misfit watches – dropping such features as “changing the time”

https://www.reddit.com/r/LinusTechTips/comments/13dobzd/fossil_has_killed_support_for_misfit_smart_and/

Misfit is indeed missing from Fossil’s brand list

https://www.fossilgroup.com/brands/

I confirmed this with customer service.

 

SPOOFY MUSIC

Spotify purges thousands of AI-generated songs

https://gizmodo.com/spotify-ai-music-generator-purges-bots-listening-songs-1850419032

The accusation is that bots were posing as listeners

https://www.ft.com/content/b6802c8f-50e7-4df8-8682-cca794881e30

Songs made through “Boomy” reinstated

https://musictech.com/news/industry/spotify-reinstates-songs-from-ai-music-generator-boomy/

LMAO their tweet got barely any engagement https://twitter.com/boomy/status/1654833390963638273

 

I HEARD YOU LIKE WATCHING ON YOUR ROKU…

Roku announces $99 smart home monitoring system

https://www.engadget.com/roku-unveils-a-99-smart-home-monitoring-system-130002352.html

Two entry sensors, one motion sensor, a wire-free keypad, and a central station with a siren

https://techcrunch.com/2023/05/10/roku-smart-home-monitoring-system-security/

Will allow you to monitor your home from your TV if you don’t wanna look around https://www.cnet.com/home/security/rokus-new-100-home-security-system-can-send-video-to-your-tv/

 

THE ONLY BATTERY THAT MEASURES MOUTH-FIELD

Researchers have made an edible battery

https://arstechnica.com/science/2023/05/researchers-craft-a-fully-edible-battery/

Could be used to power edible electronics https://www.jpost.com/business-and-innovation/all-news/article-739463

Which sounds stupid, but could be a big deal for child and pet toys https://lmg.gg/5mw9o

 

GPUs - S

Men facing jail time for smuggling GPUs/Live Lobsters into China, driving a van with no papers

280 kg of lobsters…oh and also 70 GPUs https://www.tomshardware.com/news/smugglers-hid-70-graphics-cards-among-280kg-of-live-lobsters

Total value estimated by Hong Kong customs to be $76,500, divided between the lobsters and GPUs

https://wccftech.com/70-nvidia-quadro-gpus-attempted-to-be-smuggled-among-live-lobsters-in-china/

Based on $160 retail for the Nvidia Quadro K2200s, that means most of the value is lobster-based https://lmg.gg/rqYAN



 


Thanks for the great video, Riley! 

 

However, a quick note about Google and AI safety.

 

Every time they talk about their concerns about AI safety and ethics, it is important to keep in mind that they have consistently sacked their ethics and safety teams over the last 2.5 years, removing pretty much anyone who would be doing actual work to make AI safer and more ethical. Here are just a couple of major stories about it.

 

 

Basically, Google has an abysmal record on AI safety and ethics research, which is one of the big reasons they have struggled to hire LLM researchers, which in turn is why, despite massive capabilities, they fell behind on LLM tech to the point of starting to lose market share in their core sector: search.

 

Citing Google's PR on ethics and safety without providing context might indeed lead to a feeling of hope, but it is, unfortunately, a misplaced one...

 


11 hours ago, Andrei Chiffa said:

Thanks for the great video, Riley! 

 

However, a quick note about Google and AI safety.

 

Every time they talk about their concerns about AI safety and ethics, it is important to keep in mind that they have consistently sacked their ethics and safety teams over the last 2.5 years, removing pretty much anyone who would be doing actual work to make AI safer and more ethical. Here are just a couple of major stories about it.

 

 

Basically, Google has an abysmal record on AI safety and ethics research, which is one of the big reasons they have struggled to hire LLM researchers, which in turn is why, despite massive capabilities, they fell behind on LLM tech to the point of starting to lose market share in their core sector: search.

 

Citing Google's PR on ethics and safety without providing context might indeed lead to a feeling of hope, but it is, unfortunately, a misplaced one...

 

Thanks Andrei! Your point is a good one - Google is not to be "trusted" any more than OpenAI or the rest. But watching I/O, I saw way more talk about things like tagging AI images with metadata to clearly identify them, reducing bias in models, and mentions of "humans still being important" (lol). And it reminded us that we haven't seen as much talk about that from the other guys, beyond things like "we're going to be really careful, because this could kill us". We could have gone with a more explicit lesser-of-two-evils angle 😕 



Thanks for taking the time to respond, Riley! 

 

While I absolutely agree that no corporation is to be "trusted" on their word (and that's why I am looking forward to more data coming out of LTT Labs), some companies do a better job of aligning their actions with their statements.

 

Some companies do extremely well. Currently, among large corporations, it's Hugging Face putting their money where their mouth is; but before Musk, Twitter's AI/ML safety team, for all the flak it got, was doing a pretty good job, not only letting external researchers look at their data for free but also acting on the reports they received.

 

While OpenAI has its issues, they do have a team that works on the negative impacts of LLMs and that has input into what goes into production. For all the jokes that "we are not releasing GPT-2 due to the risk of misinformation" got in 2019, in retrospect, they were not wrong. This continues to this day: the GPT-4 paper contained 50 pages describing critical failure modes and what they did to mitigate them, and they accept external collaborators who want to break their products and publish their findings after OpenAI patches the systems in production. 

 

Unfortunately, Google has until now shown a callous disregard for AI/ML ethics and safety. Google Search's top blurb has been broken for years in dangerous ways, and nothing was done to mitigate even widely reported errors. The YouTube algorithm was hijacked without them noticing or doing anything about it until it grew into a major scandal with legal ramifications (ElsaGate/DisturbedElsa).

 

If we focus just on recent LLM developments, the LaMDA paper (the architecture behind Bard) has only 1.5 pages on safety, with zero evaluation examples or results provided. Sparrow, the next iteration of the conversational search LLM that The Verge recently confirmed they were working on, does barely better, bumping that to 3.5 pages and showing some experimental results. To the best of my knowledge, no one external is let anywhere near their production models or algorithms, nor do they act on user reports (but I think you noticed that already with spam in YouTube comments).

