
It's time to panic! No, not you – the AI itself! New paper says that homeostasis improves machine behavior

williamcll


Two scientists have suggested that teaching artificial intelligence the fragility of its own existence would make it operate with greater urgency, thereby improving results.

Quote

Artificial intelligence is already making great strides forward, but taking it to the next level might require a more drastic approach. According to two researchers, we could try giving AI a sense of peril and the fragility of its own existence.

 

For now, the machines we code don't have a sense of their own being, or the need to fight for life and for survival, as we humans do. If those feelings were developed, that might give robots a better sense of urgency.

The idea is to instil a sense of homeostasis – the need to keep the conditions required for survival in balance, whether that's the temperature of an environment or the supply of food and drink.

That would in turn give AI engines more of a reason to improve their behaviours and better themselves, say neuroscientists Kingson Man and Antonio Damasio from the University of Southern California.

"In a dynamic and unpredictable world, an intelligent agent should hold its own meta-goal of self-preservation, like living organisms whose survival relies on homeostasis: the regulation of body states aimed at maintaining conditions compatible with life," write Man and Damasio in their published paper.

In short, we're talking about giving robots feelings. Making them care might make them better in just about every aspect, and it would also give scientists a platform to investigate the very nature of feelings and consciousness, say Man and Damasio.

 

Given the improvements that are being made in fields like soft robotics, this idea of a more self-aware robot might not be such a fanciful one: if an AI can use inputs like touch and pressure, then it can also identify danger and risk-to-self.

"Rather than up-armouring or adding raw processing power to achieve resilience, we begin the design of these robots by, paradoxically, introducing vulnerability," write the researchers.

If an AI-powered robot is invested in its own survival, it might start making more advanced leaps of intelligence, Man and Damasio argue. It could also make robots better able to deal with challenges they haven't been specifically coded for, and more human-like – because they'd have more human-like feelings.

By combining soft robotics and deep learning neural networks – which are designed to mimic the brain's patterns of thought – machines with a sense of jeopardy might not be too far away, according to the researchers.

Let's just hope their sense of self-preservation doesn't eventually overwhelm their respect for their human creators – we've all seen how the Terminator movies pan out.

This is something Man and Damasio have thought about too, and they argue that as robots get better at feelings in general, they'll also get better at feeling empathy – which should be enough to ward off an AI uprising anytime soon.

The idea of making AI more human-like, whether that's with feelings or the ability to dream, may be just what's needed to make these systems even more useful.

"Ultimately, we aim to produce machines that make decisions and control behaviours under the guidance of feeling equivalents," write Man and Damasio.

"We envision these machines achieving a level of adaptiveness and resilience beyond today's 'autonomous' robots."

Source: https://www.sciencealert.com/giving-ai-a-sense-of-peril-will-make-it-better-at-problem-solving-say-researchers
https://www.nature.com/articles/s42256-019-0103-7
Thoughts: How do you even program emotions in the first place?
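
One rough answer, going by the paper's framing: you don't program emotions directly; you give the agent internal variables it has to keep in a survivable range, and let its reward depend on them. A minimal sketch in Python (the variables, setpoints, and weighting here are invented for illustration, not taken from the paper):

import numpy as np

# Hypothetical internal variables the agent must keep "alive",
# each with a setpoint and a survivable range (all values invented).
SETPOINTS = np.array([0.5, 0.7, 0.4])   # e.g. battery, temperature, joint stress
TOLERANCES = np.array([0.3, 0.2, 0.3])  # how far from each setpoint is survivable

def homeostatic_reward(internal_state: np.ndarray) -> float:
    """Highest when every variable sits at its setpoint; falls off
    quadratically as the agent drifts toward 'unsurvivable' conditions."""
    deviation = (internal_state - SETPOINTS) / TOLERANCES
    return float(-np.sum(deviation ** 2))

# A reinforcement-learning agent would fold this into its task reward,
# so actions that endanger the "body" score poorly even if they help
# the external task:
#   total_reward = task_reward + 0.1 * homeostatic_reward(internal_state)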



Scientists: Movies like the Matrix and Terminator will NEVER happen...

Also Scientists: We're gonna make the robots think they'll die if they don't wise up. 

"Put as much effort into your question as you'd expect someone to give in an answer"- @Princess Luna

Make sure to Quote posts or tag the person with @[username] so they know you responded to them!

 RGB Build Post 2019 --- Rainbow 🦆 2020 --- Velka 5 V2.0 Build 2021

Purple Build Post ---  Blue Build Post --- Blue Build Post 2018 --- Project ITNOS

CPU i7-4790k    Motherboard Gigabyte Z97N-WIFI    RAM G.Skill Sniper DDR3 1866mhz    GPU EVGA GTX1080Ti FTW3    Case Corsair 380T   

Storage Samsung EVO 250GB, Samsung EVO 1TB, WD Black 3TB, WD Black 5TB    PSU Corsair CX750M    Cooling Cryorig H7 with NF-A12x25


Yeah, let's make AI have the utmost survival abilities, let's make it as human as possible... 



6 minutes ago, williamcll said:

In short, we're talking about giving robots feelings.

That is quite distinctly not what we're talking about. It probably makes for better headlines to say "we want to give them feelings", but to be honest, a machine trying to optimize its chances of survival isn't any different from a machine trying to optimize recognition of dogs in a picture. It has a stated purpose; adding more things to that purpose isn't suddenly "feelings".
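
To make that concrete: "caring about survival" would just be one more term in the objective, no different in kind from any other. A toy sketch (the names and weighting are invented):

def total_objective(task_score: float, survival_risk: float,
                    risk_weight: float = 0.1) -> float:
    # The optimizer treats predicted risk-to-self exactly like any
    # other penalty term; nothing here is a "feeling".
    return task_score - risk_weight * survival_risk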

 

Also, I don't see any performance comparisons in that paper – it seems like just a random idea these people had that may or may not make any difference whatsoever. Pop science, pretty much, until we have anything close to an implementation we can test.

11 minutes ago, williamcll said:

This is something Man and Damasio have thought about too, and they argue that as robots get better at feelings in general, they'll also get better at feeling empathy – which should be enough to ward off an AI uprising anytime soon.

Yeah, 'cause humans have never been known to rebel against slave masters. Not that they will develop any "empathy" without being specifically told to, anyway.



So their point is... write a better objective function to achieve better results?

 


Also:

 

"First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws."
 
:P

 


As a programmer, it drives me crazy how everyone is talking about A.I. but no one talks about the difficulty involved. It drives me insane how people talk like it's just around the corner, when we can't even define what consciousness is, let alone how to develop or emulate it. The first part of programming is understanding the problem, and we do not yet understand consciousness. All of this is so far off that I'd be shocked if anything came anywhere close to a general intelligence during my lifetime. I view these articles as wishful thinking at best.

 

 


So... the third law in the Three Laws of Robotics introduced by science fiction author Isaac Asimov in 1942?

 

Also, the idea is already used in current AIs to some extent – particularly in a variant of machine learning where AIs are generated at random, the best is picked while the rest are deleted, the best is then further mutated at random, and the cycle repeats until, a few thousand iterations later, you have a pretty darn efficient AI that works, even though no one understands how. I'm basically summarizing this video; it's by CGP Grey, and it's pretty good.
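
The loop being described is essentially a genetic algorithm. A bare-bones sketch of that generate/select/mutate cycle (the toy fitness function and parameters are made up):

import random

def fitness(genome):
    # Toy objective: a genome is better the larger the sum of its values.
    return sum(genome)

def mutate(genome, rate=0.1):
    # Perturb each value with probability `rate`.
    return [g + random.uniform(-1, 1) if random.random() < rate else g
            for g in genome]

# Generate a random population, keep the best, delete the rest,
# refill the population with mutated copies of the winner, repeat.
population = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(50)]
for generation in range(1000):
    best = max(population, key=fitness)
    population = [best] + [mutate(best) for _ in range(49)]

print(fitness(max(population, key=fitness)))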




2 minutes ago, crystal6tak said:

So... the third law in the Three Laws of Robotics introduced by science fiction author Isaac Asimov in 1942?

 

Also, the idea is already used in current AIs to some extent – particularly in a variant of machine learning where AIs are generated at random, the best is picked while the rest are deleted, the best is then further mutated at random, and the cycle repeats until, a few thousand iterations later, you have a pretty darn efficient AI that works, even though no one understands how. I'm basically summarizing this video; it's by CGP Grey, and it's pretty good.

I appreciate Grey's explanation, but when he speaks about not being able to understand how it works, that's completely false. You can learn how it works; it just takes a lot of time to do so. It's far from magic.


2 hours ago, LeSheen said:

So basically just some speculation (daydreaming) by two scientists. They have not proven/tested anything. Is it still news then?

I think they just want to take credit for saying that reinforcement learning is the future of AI.


1 hour ago, Stroal said:

As a programmer, it drives me crazy how everyone is talking about A.I. but no one talks about the difficulty involved. It drives me insane how people talk like it's just around the corner, when we can't even define what consciousness is, let alone how to develop or emulate it. The first part of programming is understanding the problem, and we do not yet understand consciousness. All of this is so far off that I'd be shocked if anything came anywhere close to a general intelligence during my lifetime. I view these articles as wishful thinking at best.


 

This kind of thinking is on the same level as those who said we'd have flying cars "any time now" in the '80s.

 

Have you seen the Musk thread?



On 11/18/2019 at 10:28 AM, Sauron said:

This kind of thinking is on the same level as those who said we'd have flying cars "any time now" in the '80s.

 

Have you seen the Musk thread?

I have not seen the Musk thread. Where is it? 


3 hours ago, Stroal said:

I have not seen the Musk thread. Where is it? 

 



On 11/18/2019 at 3:46 PM, Stroal said:

As a programmer, it drives me crazy how everyone is talking about A.I. but no one talks about the difficulty involved. It drives me insane how people talk like it's just around the corner, when we can't even define what consciousness is, let alone how to develop or emulate it. The first part of programming is understanding the problem, and we do not yet understand consciousness. All of this is so far off that I'd be shocked if anything came anywhere close to a general intelligence during my lifetime. I view these articles as wishful thinking at best.

 

 

Thank you. Thank you for being the one gem in the mountain of rocks who does not go "in 2 years we will be simulating an entire brain and person and living in the matrix". Even real programmers fall for the snake oil far too often. :(


Please do not use biological terms when you don’t understand them :(((((((((


"Tell me android, do you feel fear?"


