
Stephen Hawking Says A.I. Could Be Our 'Worst Mistake In History'

minimalist

Stephen Hawking is arrogant. He knows people have labeled him a genius and will therefore take whatever he says at face value.

 

He's been cocky lately, chasing headlines by saying absurd things about subjects he clearly knows nothing about. Maybe it's time for him to accept his ignorance, because the highest form of ignorance is rejecting something you know nothing about.

 

Just my two cents, because I don't like arrogant scientists.



@One Who Craves Souls Does my Kanye quote have any relevance to this or are you trying to judge me somehow?

"It seems we living the American dream, but the people highest up got the lowest self esteem. The prettiest people do the ugliest things, for the road to riches and diamond rings."- Kanye West, "All Falls Down"

 


@One Who Craves Souls Does my Kanye quote have any relevance to this or are you trying to judge me somehow?

"Judge me"... interesting choice of words, not interested in contributing any more to this thread.


@One Who Craves Souls "Interesting" like those words you just said to me on your first post. Like, "Nevermind, saw the Kanye quote on your sig... Oh my..."

 

If I had to guess, I'd say you were starting a conversation with me and then backed off because you saw that I like Kanye West? Uh...

 

Whatever, then.

"It seems we living the American dream, but the people highest up got the lowest self esteem. The prettiest people do the ugliest things, for the road to riches and diamond rings."- Kanye West, "All Falls Down"

 


How about we not talk about Stephen Hawking himself and just discuss the topic? He raised a valid discussion point, regardless of his competency in the field.

An opinion is an opinion; mine is not yours, and nobody asked you to let his opinion get to you.



How about we not talk about Stephen Hawking himself and just discuss the topic? He raised a valid discussion point, regardless of his competency in the field.

An opinion is an opinion; mine is not yours, and nobody asked you to let his opinion get to you.

I got seduced by the dark side... I wish to be cleansed of all darkness. :(


Well, at least our generation will be gone by the time this gets anywhere :P


Well, at least our generation will be gone by the time this gets anywhere :P

I doubt it. By the time our generation is gone, it'll have been 50 years. That's a lot of time. Like, a lot.



We don't need Stephen Hawking to tell us that; we already know. But that doesn't stop us from doing it. We do it because we can. We are the only beings on the planet who consume more than they need.


I doubt it. By the time our generation is gone, it'll have been 50 years. That's a lot of time. Like, a lot.

 

I meant it more like the machines will have killed us already by then, but yes.


I meant it more like the machines will have killed us already by then, but yes.

oh :P



-snip-

Yes, we can hit limits. But, as you said, the human brain is incredibly powerful and complex, so who can say that we won't push past those limitations somehow? Possible? Yes, but who knows? Unless you can magically predict the future, you can't. I'll just go along and cache that AI information in my head along with some of the stuff you talked about. Are the genocides and stuff caused by humans more likely to occur than sentient AI? Sure, it's even happened before. But, it seems the real gripe you have is when people only focus on the AI stuff (as an example) and get complacent with other things that may prove more dangerous. That's not right, either. But, you have to remember, people aren't going to care about the other stuff because they don't know about it, and if they ever discover it, they soon forget unless something forces them to remember. But, what is there to be done about it? Become famous, highlight those issues, and hope it catches on. Maybe there are other ways, like making a movie that asks "What if?"

Myself, I believe that theoretical physics is just as important a part of science as inventing is. There might not be immediate tangible benefits like with inventing, but we can use those theories to gain an understanding of the universe. What can we do with that kind of information? I dunno. I'm not sure if it's even possible to know yet. Maybe there aren't even any benefits for us earthly beings at all, but it would sure be amazing if we could do something, something fantastic that would enrich our lives. Or maybe I'm just dreaming and we should give up the theoretical stuff until we are in more control of our planet and aren't at risk of overpopulating it and starving ourselves. Also, what was Tesla saying about errors in Einstein's theories? Could you possibly highlight them?

You really do bring up good points; why should we worry more about stuff in the future that could possibly happen than about bigger stuff now that is more likely to happen?

Mm, experiencing another point of view is rather refreshing. It's funny; One Who Craves Souls is on one side of the fence, supporting the scientists' views, you're on the other side, denying any credibility theoretical scientists have, and I'm somewhere on the fence, trying to determine which side I should waver toward. I'm still learning.

And, Nikola Tesla is awesome. I wish more people would view him positively, because he did some amazing stuff, and he could have done more amazing stuff. Kind of a shame.



After we invent Holodecks, we'll go extinct anyway.


I do believe that if one day we create AI it will possibly kill us all, but I think it will be our fault. People will be "racist" against AI and that will cause it.


After we invent Holodecks, we'll go extinct anyway.

 

I thought the same about twinkies :P Jk


I do believe that if one day we create AI it will possibly kill us all, but I think it will be our fault. People will be "racist" against AI and that will cause it.

 

 

I agree, there is a possibility that 'chavs' or 'robot racists' will bully AIs into going corrupt, but I don't think AI will happen for a good couple of decades.

 

Sure, scientists are saying they are close, but really, all they are close to is having a realistic voice for the 'AI' rather than actually working on the AI itself. Still, there are many possibilities.

 

He practically is AI.

 

I told my mum about what Stephen Hawking said, and she said practically the same thing as you. She found what Hawking said quite ironic.


-

Nikola Tesla was a wonderful man; his contributions to science still surprise today's scientists... however, you're right, I think he deserved more recognition for his work.

 

I'm glad some people can understand that "not inventing" something physically or immediately does not mean it's not contributing to our society (that's Hollywood fiction for ya). For centuries, scientists' work has been based on other scientists' theories; that's how science works. However, I must add that I don't see these scientists as "gods"; they're human just like any of us, and they do make errors. This is where AI comes in, precisely because Hawking is aware of human error.

 

A good scientist admits when one of his theories is brought down by new evidence; after all, that's what the scientific method is there for. Most of these scientists know that for science there's no limit; some theories last for centuries before being proved or disproved.


Why should I care about the future? Well, for a start, we humans are a collective group and live in a society that depends on the choices made by our ancestors: where you live, what food you eat, what you study at school, etc...

 

Caring about our future is caring for our species; caring for our future is contributing to our society. A lot of people are blinded by the "it doesn't affect me now, why should I care?" mentality, and to those I ask:

 

What good has that brought us? Look at our past generations: so many mistakes that could have been avoided by taking the proper precautions... Petroleum is a good example, where scientists warned that generation, decades ago, that it wasn't going to last and recommended taking precautions. What did that generation do? Laugh at those "crazy" scientists and their theories. Today's generation is now looking back at those "crazy" scientists' theories because, guess what? They were right.

 

Now a lot of the people from that generation are gone and we're stuck with the scenario they left us. We should think about the future more often... we should care for our society.


Well, at least our generation will be gone by the time this gets anywhere :P

 

Don't be so sure, mate; we are advancing pretty bloody quickly in every field.



Don't be so sure, mate; we are advancing pretty bloody quickly in every field.

In every field...:P


I've read this kind of nonsense on other forums too:

"AI will surpass human intelligence"

"There will be some Skynet system in the future"

"We will have to fight Terminators"

"AI is dangerous"

and it's like hearing kindergarten kids talking about how dangerous and frightening monsters are.

 

Well, let me try to answer some questions...

 

1. So, is Hawking correct that AI could be very good or very bad for us?

Well, yes, because this statement (without the exaggeration he used) says the obvious about every human invention: that it can be beneficial or harmful (or both).

 

2. So was his statement necessary?

No, because he said something obvious, but the exaggeration could (and did) make some people afraid of AI, because they don't know much about computer science and artificial intelligence except for what they see in sci-fi movies and on YouTube.

 

3. But he is a great scientist and he said "or the worst", so there may be a chance (50 percent or whatever) that AI will be smarter and kill us all! Is that right?

No; for god's sake, guys, stop watching so much sci-fi :lol:. Yeah, Hawking is a great physicist and cosmologist, but if he believes (I doubt it) that AI will turn into Skynet and Terminators or something like that, then he has no idea what he is talking about. He is not a computer scientist, so if you want an answer to the question "how dangerous could AI be", Alan Turing (together with Alonzo Church) has given some answers that can help you. You will learn that all the computers we have made so far, the ones we are making as we speak, and the ones we will make in the future, even the quantum ones, always operate according to the model Turing devised, the "Turing machine", an idealized computer with unbounded time and memory that captures all the power computers could ever have. And if you study this ideal model, you'll see that computers (and every possible computer we could build) are so stupid that the only thing they can do is perform lots of "stupid" operations in very little time, and that's all. They can never become as smart (decision making, recognition, etc.) as a human being, because they cannot do anything "smarter" than what their programmers programmed them to do. Of course, to fully understand this you'll have to study computer science theory and artificial intelligence theory.
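To make the "Turing machine" reference a bit more concrete, here is a minimal sketch of the model in Python. The rule table (a binary-increment machine), the state names, and the helper function are illustrative choices of mine, not anything from Turing's papers or from Hawking's remarks:

```python
# Minimal Turing machine simulator (illustrative sketch).
# A machine is just a transition table: (state, symbol) -> (new_state, write, move).
# In principle, everything any computer does reduces to steps like these.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):     # cap steps; some machines never halt
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: increment a binary number, head starting on the leftmost bit.
# "start" scans right to the end of the number, "carry" adds 1 from the right.
INCREMENT = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),   # 1 + carry -> 0, carry keeps moving left
    ("carry", "0"): ("halt",  "1", "R"),   # 0 + carry -> 1, done
    ("carry", "_"): ("halt",  "1", "R"),   # ran off the left edge: new leading 1
}

if __name__ == "__main__":
    print(run_turing_machine(INCREMENT, "1011"))  # 11 + 1 = 12 -> prints "1100"
```

The point of the sketch is only the shape of the model: a finite rule table plus a tape, which is the framework Turing and Church used when reasoning about what any computer can or cannot do.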

 

4. And who are you or this Turing guy you talk about?

Well, I am just a lowly informatics and telecommunications postgraduate and currently a programmer, and Turing is one of the greatest computer scientists of all time and one of the people we have to thank for having computers today.

 

5. So, is AI totally safe?

Is nuclear technology totally safe? Is stupidity totally safe? Is human intelligence totally safe? I think... no. So if whatever humans think and do is NOT totally safe, and AI computers have to be programmed by humans and do exactly as they're programmed to do, then that means they can do something not totally safe. So if you ask me, I think human intelligence is much more frightening than artificial intelligence could ever be.

So I think the only scary thing about AI is the name "artificial intelligence" and its sci-fi usage.


-snip-

Something tells me you didn't read through this thread. Not many people bought into any of what you're saying. In a way, on the WAN Show, they sensationalized the story more than we did.



Something tells me you didn't read through this thread. Not many people bought into any of what you're saying. In a way, on the WAN Show, they sensationalized the story more than we did.

 

I read the previous pages, and it's true that you don't believe those things, but nobody said exactly why that is. And of course the reason is not that it sounds sci-fi so it can't be possible; the true reason is that we had great scientists who gave answers about computer science and its limits. And I didn't want to offend anybody, because not everybody is supposed to know these scientific facts without ever having studied computer science theory. I just didn't want this to become another sci-fi AI thread like the ones on other forums.
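For anyone wondering what "answers about computer science and its limits" refers to, the classic example is Turing's halting problem. Below is a compressed sketch of the standard diagonal argument in Python; the function names are made up for illustration, and the halts() oracle is deliberately left unimplemented because the whole point of the proof is that it cannot exist:

```python
# Sketch of Turing's halting-problem argument: no program can decide, for every
# program/input pair, whether that program eventually halts.

def halts(program_source: str, argument: str) -> bool:
    """Hypothetical oracle: would return True iff program(argument) halts.
    Turing proved no such total, always-correct function can exist."""
    raise NotImplementedError("impossible in general -- that is the point")

def contrarian(program_source: str) -> None:
    """Does the opposite of whatever the oracle predicts about the program run on itself."""
    if halts(program_source, program_source):
        while True:      # oracle says "halts", so loop forever instead
            pass
    # oracle says "loops forever", so halt immediately instead

# Running contrarian on its own source code contradicts the oracle either way,
# so the assumed halts() cannot exist: a hard, proven limit on what computers can do.
```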


I think this is almost like some kind of man-made natural selection. Humans have become so technologically advanced that we can create a machine that is better than us in almost every way, thus replacing the organic life form with a digital one. It may be a 'mistake' for the human race, as we could ultimately be replaced, but maybe it's just the evolution of intelligent life forms moving on, so on the whole it could be a good thing?


