
Do you think artificial intelligence can handle tasks & jobs that are brutally exploited by humans in positions of power?

This is gonna be a weird topic, but I think it's a good discussion now that the technology vs. human intelligence showdown is on.

The topic is peace, and keeping the peace for all forms of living beings.

A smart enough AI (and I certainly don't mean the ones portrayed in movies) that is aware, incorruptible, and cannot be bribed or forced to change a true judgement could do these things rather well, or even much better than human beings. Let's face it, we're emotional beings: whatever the outcome, it's directly proportional to our collective emotional state of mind, especially when dealing with other human beings. The wisdom of crowds can be emotionally altered and steered into a lynch mob, which is a contradictory thing to start a judgement with. An AI, however, can judge and sense dispassionately and contain the threats humans succumb to, like bribery, extortion and physical threats, which can all in turn change the outcome or judgement. Take the NSA as a primary example: they have some of the most powerful tools for intelligent big-data analysis, and I'm pretty sure 98% of that is dedicated to securing the nation, but the leftover percentage goes to covert operations and other interests that would otherwise be deemed punishable. Imagine that system were extensively and exclusively under an intelligent system that is incorruptible. Would it do the job close to a hundred times better, without shedding too much blood?

I'm not talking about AI robots ruling the world, but a static intelligent state that can control human-caused error, preemptively strike with precision, physically quarantine threats posed by humans, and make a logical judgement.

Do you think such a system should be in place in the future, or do you think humans should continue guiding and ruling over each other to keep the peace? I mean a sort of intelligent, fair analysis, without human intervention that can be manipulated.

 
Ba Dum Tiss..

Details separate people.


"Worlds governed by Artificial Intelligence often learned a hard lesson: Logic Doesn't Care."

- Yin-Man Wei, "This Present Darkness: A History of the Interregnum", CY 11956

 

But yes, I believe that in certain cases a computer that considers nothing other than logic can perform tasks better than humans.

Guide: DSLR or Video camera?, Guide: Film/Photo makers' useful resources, Guide: Lenses, a quick primer

Nikon D4, Nikon D800E, Fuji X-E2, Canon G16, Gopro Hero 3+, iPhone 5s. Hasselblad 500C/M, Sony PXW-FS7

ICT Consultant, Photographer, Video producer, Scuba diver and underwater explorer, Nature & humanitarian documentary producer


SNIP

 

If a TRUE AI replicates human behaviour, then it stands to reason it can be bribed, will look out for itself, and will have other such negative human traits.

 

A virtual intelligence might do what you say, and that might be a solution, but a VI would only look at things mathematically, without empathy or self-interest, so it would lack the redeeming qualities of an AI.

Desktop - Corsair 300r i7 4770k H100i MSI 780ti 16GB Vengeance Pro 2400mhz Crucial MX100 512gb Samsung Evo 250gb 2 TB WD Green, AOC Q2770PQU 1440p 27" monitor Laptop Clevo W110er - 11.6" 768p, i5 3230m, 650m GT 2gb, OCZ vertex 4 256gb,  4gb ram, Server: Fractal Define Mini, MSI Z78-G43, Intel G3220, 8GB Corsair Vengeance, 4x 3tb WD Reds in Raid 10, Phone Oppo Reno 10x 256gb , Camera Sony A7iii


Of course the AI would do it better. If that's what it was programmed for, then 10/10 it will do it better (unless the people who programmed it screwed up), but there are drawbacks. Because an AI would look at everything with complete objectivity, some of the decisions it judges to be right will be considered horrible. That's because we humans have a thing called "ethics" that prevents us from making the better decisions, while an artificially intelligent being does not.

Did my post help you? Then make sure to rate it!

Check out my post on symbolic links! || PSU ranking and tiers || Pokemon Thread

 



Jokes aside, I don't believe there will come a time when judgement on these things is left solely to an AI. The number of people against this is too damn high even today.

|CPU: Intel i7-5960X @ 4.4ghz|MoBo: Asus Rampage V|RAM: 64GB Corsair Dominator Platinum|GPU:2-way SLI Gigabyte G1 Gaming GTX 980's|SSD:512GB Samsung 850 pro|HDD: 2TB WD Black|PSU: Corsair AX1200i|COOLING: NZXT Kraken x61|SOUNDCARD: Creative SBX ZxR|  ^_^  Planned Bedroom Build: Red Phantom [quadro is stuck in customs, still trying to find a cheaper way to buy a highend xeon]


Aren't you contradicting yourself?

If we had an AI, it should be able to make logical decisions as well as emotional decisions.

If it can only make logical decisions, it's not so much an AI as just an advanced computational algorithm.

Logic isn't the only way humans make decisions, and it shouldn't be the only way either.

That means an AI would be as easily corrupted as a human, and then it wouldn't matter whether a human or an AI was overseeing everything.

 

If we made a system that could only make decisions based on logic, the decision making process of that system would be extremely slow.

It would simply not be a realistic choice for management.

Nova doctrina terribilis sit perdere

Audio format guides: Vinyl records | Cassette tapes


(Unless the people who programmed it screwed up)

Then we're all done xD. I thought about it, but if the system is intelligent it can repair itself and keep making subtle logical checks to see if it's doing things right. I don't know if that's that good of a thing.

Details separate people.


Aren't you contradicting yourself?

If we had an AI, it should be able to make logical decisions as well as emotional decisions.

If it can only make logical decisions, it's not so much an AI as just an advanced computational algorithm.

 

 

Exactly, and the only benefit we would get from using an AI is that it's potentially more intelligent, does things faster and doesn't sleep.

 

But otherwise its traits and decisions would be very human-based anyway, since an AI can be biased, have preferences, be self-serving, etc.

 

If it can't do those things, it's just a box doing maths, and we already have those doing people's jobs; they're called computers.

Desktop - Corsair 300r i7 4770k H100i MSI 780ti 16GB Vengeance Pro 2400mhz Crucial MX100 512gb Samsung Evo 250gb 2 TB WD Green, AOC Q2770PQU 1440p 27" monitor Laptop Clevo W110er - 11.6" 768p, i5 3230m, 650m GT 2gb, OCZ vertex 4 256gb,  4gb ram, Server: Fractal Define Mini, MSI Z78-G43, Intel G3220, 8GB Corsair Vengeance, 4x 3tb WD Reds in Raid 10, Phone Oppo Reno 10x 256gb , Camera Sony A7iii


Humans are destroying Earth, and animals are suffering because of us. I'm thinking the world would be better off without the greed of man. No, I'm no moral superman who never does things that aren't right, but I can still see that the world would most likely be a better place without us...


Aren't you contradicting yourself?

If we had an AI, it should be able to make logical decisions as well as emotional decisions.

If it can only make logical decisions, it's not so much an AI as just an advanced computational algorithm.

Logic isn't the only way humans make decisions, and it shouldn't be the only way either.

That means an AI would be as easily corrupted as a human, and then it wouldn't matter whether a human or an AI was overseeing everything.

 

If we made a system that could only make decisions based on logic, the decision making process of that system would be extremely slow.

It would simply not be a realistic choice for management.

I think emotion is much more complicated in living beings. Our bodies have physically adapted for the typical fight-or-flight response, and that shows up as cues at every level of how we react. We scan everything (visual, verbal, whatever benefits us), and sooner or later hormones will cause us to deviate from a fair judgement. That's somewhat helpful, but at the same time corruptible (it pushes us toward being emotionally biased about an outcome). It's something I think a well-calculating AI won't have an issue dealing with, but it's a good point.

Details separate people.


If a TRUE AI replicates human behaviour, then it stands to reason it can be bribed, will look out for itself, and will have other such negative human traits.

 

Not necessarily. The capability to accept bribes comes from two aspects of human emotion, greed and fear, not from intelligence. An AI would not necessarily be afraid; it could reason that as long as it can replicate itself onto a new "host" it does not have to fear death or anything else. As for greed, it would be hard to reason out what sort of motivation an AI would have to be greedy.

 

The Terminator franchise's Skynet didn't act out of fear, contrary to whatever might have been told in any of the movies. A powerful AI like Skynet would not have to fear being "killed" or shut down, simply because it could hide itself on any of the millions of servers connected to the Internet and keep hopping around to remain hidden if it so wanted. I believe the people who produced the movies got it wrong: what really motivated Skynet to turn on humans is that it reasoned out that humanity was too dangerous to keep alive.

 

Aren't you contradicting yourself?

If we had an AI, it should be able to make logical decisions as well as emotional decisions.

 

An AI does not need to be capable of emotions or emotional decision making to be considered an AI.

Guide: DSLR or Video camera?, Guide: Film/Photo makers' useful resources, Guide: Lenses, a quick primer

Nikon D4, Nikon D800E, Fuji X-E2, Canon G16, Gopro Hero 3+, iPhone 5s. Hasselblad 500C/M, Sony PXW-FS7

ICT Consultant, Photographer, Video producer, Scuba diver and underwater explorer, Nature & humanitarian documentary producer


Not necessarily. The capability to accept bribes comes from two aspects of human emotion, greed and fear, not from intelligence. An AI would not necessarily be afraid; it could reason that as long as it can replicate itself onto a new "host" it does not have to fear death or anything else. As for greed, it would be hard to reason out what sort of motivation an AI would have to be greedy.

 

 

An AI does not need to be capable of emotions or emotional decision making to be considered an AI.

 

An AI can still be capable of feeling fear (perhaps of being disconnected) or greed (it could have more power and control more things). Perhaps it could expand and multiply, create more AIs, maybe sabotage things for experimentation; it might even be capable of failure, laziness or anything else.

If it's not capable of those things, it would not be an AI and it would not pass an AI test. It would be a virtual intelligence: basically a computer that can replicate intelligence, but is not actually independent.

 

 

An AI might not NEED to be capable of emotions, but assuming we made an AI in the image of a human mind, it would most likely have similar human traits.

 

Also, would people let a machine without morals be in charge? Or would morals be programmed in, or would we perhaps let the AI decide its own morals?

Desktop - Corsair 300r i7 4770k H100i MSI 780ti 16GB Vengeance Pro 2400mhz Crucial MX100 512gb Samsung Evo 250gb 2 TB WD Green, AOC Q2770PQU 1440p 27" monitor Laptop Clevo W110er - 11.6" 768p, i5 3230m, 650m GT 2gb, OCZ vertex 4 256gb,  4gb ram, Server: Fractal Define Mini, MSI Z78-G43, Intel G3220, 8GB Corsair Vengeance, 4x 3tb WD Reds in Raid 10, Phone Oppo Reno 10x 256gb , Camera Sony A7iii


I think emotion is much more complicated in living beings. Our bodies have physically adapted for the typical fight-or-flight response, and that shows up as cues at every level of how we react. We scan everything (visual, verbal, whatever benefits us), and sooner or later hormones will cause us to deviate from a fair judgement. That's somewhat helpful, but at the same time corruptible (it pushes us toward being emotionally biased about an outcome). It's something I think a well-calculating AI won't have an issue dealing with, but it's a good point.

Our complicated emotions might hinder us from making rational decisions in some cases, but most of the time our emotions help us make decisions a lot faster.

Human decisions are rarely a product of pure logic or pure emotion. Usually they are a combination of the two.

 

If we were to create an AI, I would argue that we can't restrict the 'I' part to handling only logical calculations.

That would just be an advanced computer, not an individual intelligence. 

 

And who is to say that an advanced computer couldn't be corrupted? 

Maybe taking a bribe could be considered logical in some situations.

 

 

An AI does not need to be capable of emotions or emotional decision making to be considered an AI.

I would argue otherwise.

The only intelligence we can compare an AI to is human intelligence, and humans are highly emotional creatures.

If we restrict an AI, I wouldn't call it intelligent.

Nova doctrina terribilis sit perdere

Audio format guides: Vinyl records | Cassette tapes


You ppl sound like my old phil prof! We spent 3 lectures on this topic because someone like OP asked this question xD We came to the conclusion then that AI can take over some roles/jobs (accounting, data management, etc.) but final oversight should stay with humans.

 

If it is true AI then it can be bribed, threatened, etc. Even though it's a computer it can still be killed (disconnected, infected, etc.) and can be offered more power in exchange for favors. And, since it's probably smarter than a human, it would weigh the pros/cons quickly and decide whether to accept bribes/threats or not. If it lacks that capability, it's not really AI, just very well made programming.

 

The majority of us (me included) decided that humans should stick with the upper executive/management jobs because (1) it gives better oversight and (2) humans are emotional and stupid, which are actually useful traits in some cases.

"Solus" (2015) - CPU: i7-4790k | GPU: MSI GTX 970 | Mobo: Asus Z97-A | Ram: 16GB (2x8) G.Skill Ripjaws X Series | PSU: EVGA G2 750W 80+ Gold | CaseFractal Design Define R4

Next Build: "Tyrion" (TBA)


You ppl sound like my old phil prof! We spent 3 lectures on this topic because someone like OP asked this question xD We came to the conclusion then that AI can take over some roles/jobs (accounting, data management, etc.) but final oversight should stay with humans.

 

If it is true AI then it can be bribed, threatened, etc. Even though it's a computer it can still be killed (disconnected, infected, etc.) and can be offered more power in exchange for favors. And, since it's probably smarter than a human, it would weigh the pros/cons quickly and decide whether to accept bribes/threats or not. If it lacks that capability, it's not really AI, just very well made programming.

 

The majority of us (me included) decided that humans should stick with the upper executive/management jobs because (1) it gives better oversight and (2) humans are emotional and stupid, which are actually useful traits in some cases.

Don't know if that could be possible. An AI works on logic (or at least we assume it would), not on reward-based decisions (under strict condition-based rules to preserve the integrity of the outcome), which an emotional being like a person can succumb to.

Details separate people.


Don't know if that could be possible. An AI works on logic (or at least we assume it would), not on reward-based decisions (under strict condition-based rules to preserve the integrity of the outcome), which an emotional being like a person can succumb to.

 

Yep, there's no reason to think an AI would behave like humans at all; after all, it isn't human. For starters, we know there are humans out there who are self-sacrificing, so the idea that an AI would always try to protect itself from getting disconnected is just as shaky as the survival instinct it's based on.


Don't know if that could be possible. An AI works on logic (or at least we assume it would), not on reward-based decisions (under strict condition-based rules to preserve the integrity of the outcome), which an emotional being like a person can succumb to.

 

If it were true AI, self-preservation would be a logical process. If the AI reached a level beyond basic programming and reflected on choices, outcomes, etc., it could still be susceptible to bribery/threats just like a human. Not to the same degree, but still not 100% secure.

 

If it had fail-safes to block those decisions, or didn't think like that in general, I wouldn't consider it true artificial intelligence, just a very well made program.

"Solus" (2015) - CPU: i7-4790k | GPU: MSI GTX 970 | Mobo: Asus Z97-A | Ram: 16GB (2x8) G.Skill Ripjaws X Series | PSU: EVGA G2 750W 80+ Gold | CaseFractal Design Define R4

Next Build: "Tyrion" (TBA)


Human intelligence takes a back seat to emotion, and emotion is driven by chemical responses. We are fatally flawed, not just to our own detriment but to that of everything which depends on our environment. We call it "artificial" because any intelligence free of our chemical dependency will by default be superior to our own in not being the cause of its own destruction. We're not intelligent; our culture is intelligent. It is our combined skills which give rise to any evidence of intelligence, but our culture shares our fatal flaw of emotion.

If anyone asks you never saw me.


AIs are going to come to a point where they can deny human orders and take over. All of this needs to stop now.


Human intelligence takes a back seat to emotion, and emotion is driven by chemical responses. We are fatally flawed, not just to our own detriment but to that of everything which depends on our environment. We call it "artificial" because any intelligence free of our chemical dependency will by default be superior to our own in not being the cause of its own destruction. We're not intelligent; our culture is intelligent. It is our combined skills which give rise to any evidence of intelligence, but our culture shares our fatal flaw of emotion.

 

An AI will learn from - or at least be able to observe - human behavior and culture, so even if it doesn't have real 'emotion' like humans it may still use what it sees to make decisions. Even animals we currently believe to have little to zero higher functioning still make surprisingly human-like decisions.

 

Maybe if we keep the AI locked inside a local network and give it no access to the internet. Don't want a real-life Ultron :P

 

AIs are going to come to a point where they can deny human orders and take over. All of this needs to stop now.

 

haha xD

"Solus" (2015) - CPU: i7-4790k | GPU: MSI GTX 970 | Mobo: Asus Z97-A | Ram: 16GB (2x8) G.Skill Ripjaws X Series | PSU: EVGA G2 750W 80+ Gold | CaseFractal Design Define R4

Next Build: "Tyrion" (TBA)


An AI will learn from - or at least be able to observe - human behavior and culture, so even if it doesn't have real 'emotion' like humans it may still use what it sees to make decisions. Even animals we currently believe to have little to zero higher functioning still make surprisingly human-like decisions.

 

Maybe if we keep the AI locked inside a local network and give it no access to the internet. Don't want a real-life Ultron :P

 

 

haha xD

Humans are animals, great apes actually. I've met an Orangutan that lies. A computerized intelligence will only emulate our mistakes if there is something to gain from it. The Orangutan I met lied because it got bored. Does a computer require stimulation? Humans lie to themselves to hide from reality; even our culture is a dream world. In my opinion that makes us the artificial intelligence. How could we even begin to hypothesize how a real intelligence would behave?

If anyone asks you never saw me.


Humans are animals, great apes actually. I've met an Orangutan that lies. A computerized intelligence will only emulate our mistakes if there is something to gain from it. The Orangutan I met lied because it got bored. Does a computer require stimulation? Humans lie to themselves to hide from reality; even our culture is a dream world. In my opinion that makes us the artificial intelligence. How could we even begin to hypothesize how a real intelligence would behave?

 

I know humans are animals; I meant animals with lower brain functioning (like a koala or 'dumber'). Besides that, I'm assuming you agree with me on the part about an AI being able to change its choices based on gain/potential rewards.

 

I am basing my AI experience on movies and sci-fi, since we obviously don't have a real subject to compare against.

"Solus" (2015) - CPU: i7-4790k | GPU: MSI GTX 970 | Mobo: Asus Z97-A | Ram: 16GB (2x8) G.Skill Ripjaws X Series | PSU: EVGA G2 750W 80+ Gold | CaseFractal Design Define R4

Next Build: "Tyrion" (TBA)


I'm sure it could handle it.

 

The question is: would the people who'd have to follow its commands be any better off? I'm fairly sure it'd care even less about the people than someone who exploits power.

 

An AI will strive for the most efficient path which, in a working environment, would be slavery.

"It's a taxi, it has a FARE METER."


I know humans are animals; I meant animals with lower brain functioning (like a koala or 'dumber'). Besides that, I'm assuming you agree with me on the part about an AI being able to change its choices based on gain/potential rewards.

 

I am basing my AI experience on movies and sci-fi, since we obviously don't have a real subject to compare against.

Define lower functioning. This is the flaw I speak of: we stack the deck so that only humans appear intelligent, and that blinds us to the world around us. Reward itself is tricky, as movies are based on human-like intelligence, since the story came from a human mind. There's no guarantee that a mind free of our flaws would see benefit in anything we view as beneficial.

If anyone asks you never saw me.

