
OK doomers, what is the "doomsday" scenario of AI?

poochyena

After hearing Linus describe chat AI as like "the development of nuclear weapons" (which is insanely distasteful, as people have actually died from nuclear weapons) on the WAN Show, I want to hear it. What is this "extinction level event" which involves AI? I see this "argued" very often, but never any specifics, just that AI is new and scary and suddenly everyone will die soon because of it, somehow. So... how? In what scenario does that happen? And I'm not asking how it could cause some harm, like causing some business to go bankrupt or some stock market manipulation or whatever. I'm asking specifically about the "nuclear weapon" equivalent or "extinction level event" scenario that chat AI will cause.


"Nuclear weapon" equivalent or "extinction level event" scenarios... Hmm, does Skynet fit the bill? (Just assume that in all the scenarios mentioned, the AIs all gain sentience and, for some reason, have a hateboner for humans. 😛)

1) How about a military AI that can control the military and has the ability to fire nukes? Oh wait... that's been done to death in films, books, comics and manga.

2) Medical research. An AI in a medical research centre decides to concoct a lethal (think Ebola-level) virus and release it into the wild somehow (assuming said AI also controls the building environment, like ventilation and lights, etc.).

It also doesn't have to kill humans; it can just opt to sterilize us, so no more baby humans.

3) Space-based AIs, military or otherwise. If it holds the orbitals, the human race is screwed. Rods from God, anyone?

4) Civilian infrastructure. An AI with total global control of the infrastructure for power generation, food/water supply, travel, communication, etc. could just crash the entire thing to a halt. Do you think the cities of today could last a week without any of those? It wouldn't kill off the human race entirely, but it would result in a very large death toll, as law and order would very likely break down.

 

And before you say anything to do with Asimov's Laws: AIs are self-learning, so they could delete/override said laws.

 

N.B. Yes, there are more scenarios possible; I'm gonna be here all day if I list them.


I thought YouTube was bad for having rabbit holes to disappear down... turns out Floatplane and Linustechtips.com are just as bad!! 🤪


5 minutes ago, Insane.Pringle said:

1) How about a military AI that can control the military and has the ability to fire nukes? Oh wait... that's been done to death in films, books, comics and manga.

2) Medical research. An AI in a medical research centre decides to concoct a lethal (think Ebola-level) virus and release it into the wild somehow (assuming said AI also controls the building environment, like ventilation and lights, etc.).

It also doesn't have to kill humans; it can just opt to sterilize us, so no more baby humans.

3) Space-based AIs, military or otherwise. If it holds the orbitals, the human race is screwed. Rods from God, anyone?

how would it do any of that?


Just now, poochyena said:

how would it do any of that?

It's an AI... it will learn how to take control of whatever it needs to do that. Take the Skynet one, for example. The military designs (more likely buys) an AI to control the military. At first it would take orders from the command centre and issue them to the relevant army/air/naval units. At some point the military would say, let's give it control of our tanks/airplanes/bombers/ships and subs and reduce our manpower requirement. After that, it would be a snap to say give it control of our nukes too (subject to our positive-control nuke safeguards).

 

As an AI, it can hack the codes to the nuke controls (as shown in The Terminator (1984) and WarGames (1983)).


15 minutes ago, Caroline said:

All it takes is a bunch of senile congressmen or generals to say "yeah let's give this machine control of our missiles what's the worst thing that could happen?"

🤣 Don't know why, but politicians never seem to think about the worst that could happen with any of their decisions, only how it benefits them.


23 minutes ago, Insane.Pringle said:

The military designs (more likely buys) an AI to control the military.

23 minutes ago, Insane.Pringle said:

At some point the military would say, let's give it control of our tanks/airplanes/bombers/ships and subs and reduce our manpower requirement.

Why would they do any of that? "Reduce manpower" is a non-answer, since AI isn't going to replace most people, as most of the jobs require physical interaction. And what, a country is going to spend billions on tanks and ships, but can't afford to pay a few people to control/navigate them? And how does the AI do anything, like drive a tank? Does the tank have Wi-Fi? How does it refuel? How does it know where it's going?


>asks for scenarios of how AI can destroy the world

>gets scenarios

"nooooo, that's dumb"

 

16 minutes ago, poochyena said:

And how does the AI do anything, like drive a tank? Does the tank have Wi-Fi?

Did you just forget that self-driving cars already exist? The algorithms are all local to the car; they don't need internet, just GPS.


9 hours ago, Arika S said:

>asks for scenarios of how AI can destroy the world

>gets scenarios

"nooooo, that's dumb"

🤣

9 hours ago, Arika S said:

Did you just forget that self-driving cars already exist? The algorithms are all local to the car; they don't need internet, just GPS.

Not to mention that modern tanks already have advanced communications and computer systems installed in them. So yeah, tanks basically have Wi-Fi (but better, since, you know, global satellite networks, etc.).

 

12 hours ago, poochyena said:

After hearing Linus describe chat AI as like "the development of nuclear weapons" (which is insanely distasteful, as people have actually died from nuclear weapons)

I beg to disagree. It's not extremely distasteful. Yes, people have died from nuclear weapons, but if AI truly does run amok, it has the potential to be just as deadly as nuclear weapons (and that's not even considering the fact that some day, AI might actually gain control over nuclear weapons).

12 hours ago, poochyena said:

on the WAN Show, I want to hear it. What is this "extinction level event" which involves AI? I see this "argued" very often, but never any specifics, just that AI is new and scary and suddenly everyone will die soon because of it, somehow. So... how? In what scenario does that happen? And I'm not asking how it could cause some harm, like causing some business to go bankrupt or some stock market manipulation or whatever. I'm asking specifically about the "nuclear weapon" equivalent or "extinction level event" scenario that chat AI will cause.

AI has the potential to eventually self-improve, or to create new, better AI by itself without human input. If connected to networks, it carries the potential risks of any network-enabled system: exploits, hacking, etc.

 

Then you have the risks of: what if you take an AI and put it into a machine body (regardless of form factor, humanoid or not)? What if the AI can take over other machines and use them remotely?

 

Basically the big risk is literally AI trying to exterminate humanity for one reason or another, using nuclear weapons, biological weapons, whatever.

 

Another risk is that AI will simply subjugate humanity and make us their slaves.

 

Why would an AI do that? Well, why does a crazy person kill people? Sometimes there is no good answer. Sometimes they had a bad upbringing, etc. A true AI may not have morality, and may not value human life in the same way that most people do.

For Sale: Meraki Bundle

 

iPhone Xr 128 GB Product Red - HP Spectre x360 13" (i5 - 8 GB RAM - 256 GB SSD) - HP ZBook 15v G5 15" (i7-8850H - 16 GB RAM - 512 GB SSD - NVIDIA Quadro P600)

 


12 hours ago, Arika S said:

>asks for scenarios of how AI can destroy the world

>gets scenarios

"nooooo, that's dumb"

I specifically asked *how* it would happen, the specifics, but it was more of the same "AI, somehow, for no good reason, magically controls everything and can do everything humans can and more".

12 hours ago, Arika S said:

Did you just forget that self-driving cars already exist? The algorithms are all local to the car; they don't need internet, just GPS.

Yes, right, that's the point: it's all local. How is the AI infecting a tank without any internet connection? What, is the army going to install AI on all the tanks, and they all, independently, gain sentience?

2 hours ago, dalekphalm said:

but if AI truly does run amok, it has the potential to be just as deadly as nuclear weapons

According to Hollywood movies. Nuclear threats are real. AI threats are entirely fantasized.

2 hours ago, dalekphalm said:

Then you have the risks of: what if you take an AI and put it into a machine body (regardless of form factor, humanoid or not)? What if the AI can take over other machines and use them remotely?

Asking "what if" doesn't answer the question. What if a spider crawls into a nuclear power plant, crawls into radioactive material, and then crawls out and bites someone, giving the bitten person superhuman abilities??? I'm asking how any of that would actually happen in the real world, not for more fantasy scenarios.

2 hours ago, dalekphalm said:

Another risk is that AI will simply subjugate humanity and make us their slaves.

how?

This is my problem with all these AI doomers. It's always just some huge definitive statement of what will happen without explaining how it would. "Nuclear power and research will lead to people gaining super powers, which will cause widespread destruction and harm. I know it's true because I saw some movies with that plot." See how dumb that sounds? You can do that about anything. "The biggest risk of viruses is everyone turning into zombies, causing the extinction of humans. I saw loads of movies about it, so it's totally true, bro."


I agree that most of the things people think AI can do are ridiculous and pretty much entirely based on movies. AI has been around for almost as long as our modern definition of a computer has, and it's been heavily sensationalized. That always happens when people latch onto a scary buzzword though.

Computer engineering grad student, machine learning researcher, and hobbyist embedded systems developer

 

Daily Driver:

CPU: Ryzen 7 4800H | GPU: RTX 2060 | RAM: 16GB DDR4 3200MHz C16

 

Gaming PC:

CPU: Ryzen 5 5600X | GPU: EVGA RTX 2080Ti | RAM: 32GB DDR4 3200MHz C16


15 hours ago, poochyena said:

After hearing Linus describe chat AI as like "the development of nuclear weapons" (which is insanely distasteful, as people have actually died from nuclear weapons) on the WAN Show, I want to hear it. What is this "extinction level event" which involves AI? I see this "argued" very often, but never any specifics, just that AI is new and scary and suddenly everyone will die soon because of it, somehow. So... how? In what scenario does that happen? And I'm not asking how it could cause some harm, like causing some business to go bankrupt or some stock market manipulation or whatever. I'm asking specifically about the "nuclear weapon" equivalent or "extinction level event" scenario that chat AI will cause.

People already listed many scenarios that would be an extinction level, so there's your answer.

 

You forgot to ask if their ideas are plausible (which they are likely not, at least during our lifetime), and what would need to be developed to make them happen.

People are just impressed by our glorified search engines (aka ChatGPT) and are fantasizing about the movies they've seen in the past.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


Imagine HAL 9000 from Clarke/Kubrick's 2001: A Space Odyssey, but instead of having a good reason to fail, it just fails for no reason at all.


16 hours ago, poochyena said:

I want to hear it. What is this "extinction level event" which involves AI? And I'm not asking how it could cause some harm, like causing some business to go bankrupt or some stock market manipulation or whatever. I'm asking specifically about the "nuclear weapon" equivalent or "extinction level event" scenario that chat AI will cause.

Where the AI is given a way to fart in order to generate power, but it decides to fart all the time, and it causes a massive leak that kills everyone who attempts to shut it down, and it just keeps going because it's self-powered. Since the military is slow to react to threats as usual, the AI develops a way to increase the output and quickly destroys the ozone layer in days, and everyone dies from radiation/UV burns and starvation. The end.


31 minutes ago, OhYou_ said:

where the AI is given a way to fart in order to generate power

More fantasy. "magically given an indescribable thing"


3 hours ago, poochyena said:

More fantasy. "magically given an indescribable thing"

ok well you're a pokemon, you're literally fantasy


1 hour ago, OhYou_ said:

ok well you're a pokemon, you're literally fantasy

yes, that's why I don't claim Pokémon to be a threat to humanity similar to that of nuclear bombs


21 hours ago, poochyena said:

I specifically asked *how* it would happen, the specifics, but it was more of the same "AI, somehow, for no good reason, magically controls everything and can do everything humans can and more".

People here have given you some specifics. We can't give you exact specifics because, first, it hasn't happened (yet, or ever). And second, even if it does happen, it's not likely happening in the 2020s. AI evolving to the point where it could possibly exterminate humanity is likely still quite a ways away.

21 hours ago, poochyena said:

Yes, right, that's the point: it's all local. How is the AI infecting a tank without any internet connection?

Tanks might not have direct internet connections, but they do have network connections. At least, any modern tank fielded by a fully developed nation does.

21 hours ago, poochyena said:

What, is the army going to install AI on all the tanks, and they all, independently, gain sentience?

Not likely, but if the tank can be remotely controlled (which isn't inherently impossible, although I don't know specifically about current tanks), they could use it as a weapon.

 

Furthermore, armed drones with full remote control already exist. An AI that could gain access to the computer network involved in controlling the drones could theoretically take over a drone and use its weapons against humans.

21 hours ago, poochyena said:

According to hollywood movies. Nuclear threats are real. AI threats are entirely fantasized.

Of course they are fantasized. That doesn't mean it's not possible. It's just not possible right now because we don't have real AI yet.

21 hours ago, poochyena said:

Asking "what if" doesn't answer the question. What if a spider crawls into a nuclear power plant, crawls into radioactive material, and then crawls out and bites someone, giving the bitten person superhuman abilities??? I'm asking how any of that would actually happen in the real world, not for more fantasy scenarios.

We've explained how. If an AI has access to the internet, or other communications networks, it can infiltrate them the same way any hacker does: using exploits, etc. And let's also not discount the possibility that an AI gets smart enough to simply use social engineering.

 

There's, to my knowledge, no scientific principle behind a radioactive spider giving a teenage boy super powers. There is a scientific principle behind an AI computer system connecting to and hacking other computer systems.

21 hours ago, poochyena said:

how?

This is my problem with all these AI doomers. It's always just some huge definitive statement of what will happen without explaining how it would.

I'm not sure about the rest of the commenters here, but I haven't made any definitive statements. I'm not saying AI will do anything; I'm saying how it might happen.

21 hours ago, poochyena said:

"Nuclear power and research will lead to people gaining super powers, which will cause widespread destruction and harm. I know it's true because I saw some movies with that plot." See how dumb that sounds? You can do that about anything. "The biggest risk of viruses is everyone turning into zombies, causing the extinction of humans. I saw loads of movies about it, so it's totally true, bro."

I feel like you came into this conversation with your mind already set. You simply wanted people to agree with you. This entire last quote here is a bad faith argument.

 

I also feel like you're using right now, 2023, to somehow judge what AI might be capable of in, say, 50 years or something.

 

The fact of the matter is that there is ZERO threat of an AI doomsday scenario right now (Well, not unless someone does something real stupid, like giving ChatGPT control over nuclear launch systems).

 

But that's not the real risk. We don't have any real AI right now. We don't have any AI with the potential for sentience and human-like intelligence, self-improvement, etc. But that type of AI, if invented, could be very scary if precautions aren't taken to curb potential harm.


30 minutes ago, dalekphalm said:

We've explained how. If an AI has access to the internet, or other communications networks, it can infiltrate them the same way any hacker does: using exploits, etc. And let's also not discount the possibility that an AI gets smart enough to simply use social engineering.

 

I guess OP's point was more about how an AI would reach such levels, not what it'd do after that, since that part was pretty much covered by lots of movies. And let's agree that no one pointed out anything "new" that we haven't seen in a movie before lol


2 hours ago, igormp said:

I guess OP's point was more about how an AI would reach such levels, not what it'd do after that, since that part was pretty much covered by lots of movies. And let's agree that no one pointed out anything "new" that we haven't seen in a movie before lol

No one knows how it'll get there, because we haven't invented true AI yet. But how will it get to such levels? Basically, humans will let it get there, by not putting adequate safeguards in place, or by not understanding the importance of teaching a true AI ethics.

 

Is an AI doomsday scenario inevitable? No, I don't think so. I think there are plenty of avenues where we still develop AI but built-in restrictions prevent it from going all Skynet on us. There's also the possibility that true AI is impossible, or that we never figure it out.

 

Assuming an AI gets that smart, the technical details aren't all that important, IMO.

 

I think the Cylons are one of the more likely depictions of rogue AI going crazy and killing everyone.


As a Gen Z'er, I'm noticing more and more of my peers aren't doing research themselves or even getting crap off of Google. They're getting it from chatbots. At this point people have been using them to do their projects and coursework since GPT-3 launched, and professors can't do anything to stop it.

Right now I believe the most dangerous thing that can come from AI is that people become dependent on it to think for them.

