
Philosophers are building ethical algorithms to help control self-driving cars

2 minutes ago, DildorTheDecent said:

*whoosh* well there it goes...the meme, right over your head...

 

Someday you'll understand.

I get the meme, but he was being serious, and this is a serious topic for discussion, so I will respond on the assumption that people are being serious first.



I've always found it interesting how people are like 'Well, I wouldn't want a MACHINE to make life and death decisions, you need a human to do that!' when humans will, many times per year, wreck their cars trying to avoid squirrels.  The mushy human brain is surprisingly good at making decisions that narrowly focus on limited facts and ignore other critical factors.

 

http://www.ajc.com/news/crime--law/decatur-driver-swerves-avoid-squirrel-hurts-kids/ag19aUmkd4rF1Qzow3LgxL/

 

This case is great: she had four passengers, including children, and flipped her SUV upside down into a ravine trying to avoid a squirrel.  Mushy human brain only thought 'OH SNAP, SQUIRREL!  MUST NOT KILL SQUIRREL, SWERVE!' and ignored everything else.  We like to think that humans are observant, critical creatures, but so frequently we aren't.  At least your robot car won't smash a road construction crew because it wanted to look at its Instagram notifications.


6 minutes ago, AshleyAshes said:

I've always found it interesting how people are like 'Well, I wouldn't want a MACHINE to make life and death decisions, you need a human to do that!' when humans will, many times per year, wreck their cars trying to avoid squirrels.  The mushy human brain is surprisingly good at making decisions that narrowly focus on limited facts and ignore other critical factors.

 

http://www.ajc.com/news/crime--law/decatur-driver-swerves-avoid-squirrel-hurts-kids/ag19aUmkd4rF1Qzow3LgxL/

 

This case is great: she had four passengers, including children, and flipped her SUV upside down into a ravine trying to avoid a squirrel.  Mushy human brain only thought 'OH SNAP, SQUIRREL!  MUST NOT KILL SQUIRREL, SWERVE!' and ignored everything else.  We like to think that humans are observant, critical creatures, but so frequently we aren't.  At least your robot car won't smash a road construction crew because it wanted to look at its Instagram notifications.

It's like when people told my friend he was dumb for writing off his car instead of hitting the pedestrian on the sidewalk so he'd still have a car to drive another day.



What's to decide? I paid for the car, so it should save me and my dog.


Guess I'll walk.



7 hours ago, GabD said:

Having people agree on things isn't the immediate point of this. People's differing opinions are also partly caused by many things other than rational justification. Here the point is finding the most rationally justified answer(s). The part about convincing people to change their mind is a whole different thing.

- We have no reason for believing that finding a "most justified answer" would somehow be impossible. The principles of rationality aren't exactly subjective.
- We only have reason to doubt that people would initially unanimously accept it, if there were to be such an answer. No reason to dismiss the very possibility of it being widely accepted.

Also, this is not at all "asking again". It's not even "asking people".

It is indeed a question.  Multifaceted, but a question nonetheless.  It has been asked many times for thousands of years.  The evidence is pretty clear: no one will agree on the morally correct outcome in such situations.  And the bit in bold is exactly what I am addressing: you will not find the most rationally justified answers, because the question centres on moral standing, not rational logic.



3 hours ago, DildorTheDecent said:

Too soon.

 

Mods pls ban.

It is a legitimate example of how people see decisions and outcomes that affect life, more poignant than ever because it wasn't even a human life that was taken.  People couldn't even agree on what was more important (the child or the APE), so how is this AI going to be any better accepted when it is a choice between humans?



9 hours ago, GabD said:

I mean, there are reasons why individual consumers aren't allowed to buy artillery ordnance for home defense, and still would not be allowed to, even if that's what all consumers wanted.

If the car can't be trusted to value the life of its operator, then there is no reason to continue the conversation: it won't exist as a product, so the discussion is moot (this is the opposite of your example; without demand there is no reason to supply). Regardless, the car should not be considering any of this; it should simply attempt to avoid a collision. If it can't do that, there is no way it will have the maneuverability or the time to even factor in the trolley problem.

 

The issue with philosophy is that it never lives in the realm of practicality, which is why it is not science. Realistically, anyone who even considers the trolley problem when designing an AI car is doing more harm than good, as it is a pointless scenario to consider programming for and would be detrimental to the reliability and ultimately the safety of the vehicle.


 


  • 2 weeks later...
On 20/02/2018 at 4:36 PM, AresKrieger said:

If the car can't be trusted to value the life of its operator, then there is no reason to continue the conversation: it won't exist as a product, so the discussion is moot (this is the opposite of your example; without demand there is no reason to supply). Regardless, the car should not be considering any of this; it should simply attempt to avoid a collision. If it can't do that, there is no way it will have the maneuverability or the time to even factor in the trolley problem.

 

The issue with philosophy is that it never lives in the realm of practicality, which is why it is not science. Realistically, anyone who even considers the trolley problem when designing an AI car is doing more harm than good, as it is a pointless scenario to consider programming for and would be detrimental to the reliability and ultimately the safety of the vehicle.

The realm of practicality is technics, not science. Technical fields aren't properly sciences either. It's the difference between biology and medicine, or between physics and engineering, or between sociology and social work.
Philosophy is a whole other thing; it's meta-scientific. Epistemology is meta-scientific because it's what directs science. Logic is meta-scientific because it's what directs reasoning. Ethics is meta-scientific because it is a discourse on how things should be improved, not about how they are. They're not scientific because they're not discourses on facts about the world, which is what scientific discourse is. The level of philosophical discourse is that of a discourse on discourses about facts. Merely "not being a science" isn't a problem if the goal isn't to do science's job of obtaining facts about the world.
Philosophy uses scientific knowledge, it doesn't produce it, and does not claim to produce it either.

This whole thing isn't about obtaining facts about what self-driving cars are.

 

On 20/02/2018 at 3:02 PM, mr moose said:

you will not find the most rationally justified answers because the question centres on moral standing, not rational logic

Ethics is precisely a discipline of rational justification. Opinions, knee-jerk reactions, sophistry and implicit biases may be major factors in popular opinion, but they aren't significant factors in ethics research, and should not be. Arguments are. A major point of ethical research, as an academic discipline, is to detect implicit bias, take it into account, and either eliminate it or work around it. The whole academic process is centered on the identification of biases and fallacies. This isn't just some newspaper's editorial section. ;)

Also, you seem to be under the presumption that it is somehow a priori impossible to arrive at a most rationally justified answer (most rationally justified until proven otherwise, of course). These things first have to be constructed if they're ever going to be "found"; theories don't exist as facts in nature, they have to be made, and then presented to people in a convincing argument. My wording may have been misleading, a more correct interpretation would be "to arrive at (a) most rationally justified answer(s)". (Excuse me, English isn't my mother tongue.)
The takeaway is that, if you find one or more most rationally justified positions on a topic, people cannot rationally disagree in favor of sub-optimal alternatives. As far as the sake of research goes, it doesn't matter if people disagree on it for whatever other reason, the alternatives are still going to be rationally worse options. And we obviously should not make decisions based on rationally worse options if we can possibly help it.

The whole point of Ethics is to optimize choices and decision-making.
So far, no convincing proof or argument exists that would rule out the possibility of optimization, so the potential benefits for humanity are at least worth looking into.



Really, I would rather people talk about the actual article and the research mentioned rather than talking out of their arse about the very value and goals of an academic discipline of which they may not have a working knowledge.


On 20/02/2018 at 10:29 AM, Sauron said:

while all that is informative, I still find it a little hard to believe their potential contribution in this particular area doesn't overlap with the engineers' own experience in creating reliable systems and accounting for different scenarios.

Hence why they are also working with engineers. ;)
Multidisciplinary research is an amazing thing.

 

On 20/02/2018 at 11:33 AM, The Benjamins said:

The correct answer to the autonomous car trolley question is to have it hit no one.

I'm pretty sure everyone agrees on this. It's just that the algorithm for "what to do in the mean time" is going to play an important part until we get there. We may be a long time away from having a completely failure-safe autonomous vehicle system, whereas ethical algorithms are already a thing. They could already save many lives right now, perhaps even your own or your loved ones'.
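To make the "algorithm for what to do in the mean time" idea a bit more concrete, here is a deliberately tiny sketch of an unavoidable-collision fallback that just picks whichever candidate manoeuvre has the lowest estimated harm. This is purely a toy illustration of my own; the manoeuvre list, probabilities and injury estimates are invented, and it does not describe the research in the article or any real vehicle.

```python
# Toy sketch of an "unavoidable collision" fallback policy.
# Everything here (manoeuvres, probabilities, injury estimates) is invented
# for illustration; it is not how any real self-driving stack works.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    p_collision: float        # estimated probability this manoeuvre still ends in a collision
    expected_injuries: float  # expected number of people hurt if that collision happens

def expected_harm(m: Manoeuvre) -> float:
    # Expected harm = probability of collision x expected injuries.
    return m.p_collision * m.expected_injuries

def choose_fallback(candidates: list[Manoeuvre]) -> Manoeuvre:
    # Pick the manoeuvre that minimises expected harm. Note that this
    # counts only *how many* people might be hurt, not *who* they are,
    # which is itself one contested ethical choice among several.
    return min(candidates, key=expected_harm)

options = [
    Manoeuvre("brake in lane", p_collision=0.9, expected_injuries=0.5),
    Manoeuvre("swerve left",   p_collision=0.4, expected_injuries=2.0),
    Manoeuvre("swerve right",  p_collision=0.2, expected_injuries=1.0),
]
print(choose_fallback(options).name)  # -> "swerve right"
```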

I mean, if we had perfect things, the fields of Ethics and Engineering would be much simpler, and less concerned with immediate problems.
I'm positive that the majority of ethicists would be relieved and pleased to find themselves out of a job, or at least to have their work cut out for them. But that might never happen.


12 minutes ago, GabD said:

 

 

Ethics is precisely a discipline of rational justification. Opinions and knee-jerk reactions aren't a central factor in ethics research, and should not be. Arguments are.
Also, you're presuming that it is somehow a priori impossible to arrive at a most rationally justified answer (most rationally justified until proven otherwise, of course). These things first have to be constructed if they're ever going to be "found"; theories don't exist as facts in nature, they have to be made, and then presented to people in a convincing argument. My wording may have been misleading, a more correct interpretation would be "to arrive at (a) most rationally justified answer(s)". (Excuse me, English isn't my mother tongue.)
The takeaway is that, if you find one or more most rationally justified positions on a topic, people cannot rationally disagree in favor of sub-optimal alternatives. As far as the sake of research goes, it doesn't matter if people disagree on it for whatever other reason, the alternatives are still going to be rationally worse options. And we obviously should not make decisions based on rationally worse options if we can possibly help it.

The whole point of Ethics is to optimize choices and decision-making.
So far, no convincing proof or argument exists that would rule out the possibility of optimization, so the potential benefits for humanity are at least worth looking into.



Really, I would rather people talk about the actual article and the research mentioned rather than talking out of their arse about the very value and goals of an academic discipline of which they may not have a working knowledge.

You are operating on the premise that majority rule is good enough for ethical conundrums and/or that there is a rational method to resolve ethical questions.   Ethical conundrums cannot be resolved to a single solution because the conditions across humanity vary too much. Such philosophical and ethical questions have been posed for centuries and there is yet to be a rational outcome.   The end result is that people do disagree (and always will) on what is the most ethical choice. Even if it were 80/20 in favor of a particular decision, you still cannot argue that the 20 are wrong, because they have a different ethical standing, nor can you force them to abide by such a ruling if it endangers anyone's life.

 

 



2 minutes ago, GabD said:

Hence why they are also working with engineers. ;)
Multidisciplinary research is an awesome thing.

 

I'm pretty sure everyone agrees on this. It's just that the algorithm for "what to do in the mean time" is going to play an important part until we get there. We may be a long time away from having a completely failure-safe autonomous vehicle system, whereas ethical algorithms are already a thing. They could already save many lives right now, perhaps even your own or your loved ones'.

I mean, if we had perfect things, the field of Ethics would be much simpler, and less concerned with immediate problems. I'm positive that the majority of ethicists would be relieved and pleased to find themselves out of a job, or at least to have their work cut out for them. But that might never happen.

In a crash scenario the AI will have less time to make a decision about a problem that requires orders of magnitude more data to process than the act of driving. It is like creating an escape plan for when you are in the blast radius of a WMD; it is pointless.

 

First, car companies would prefer to save the driver over all else, and building an AI that can minimize external damage will also inadvertently answer a low-level trolley problem: kill the fewest people. Analyzing the ethics of the 100 ms scenario is a fantasy. How is the car even going to get all the data it would need to assess it in the roughly 100 ms it has to react?
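Just to put that 100 ms figure in perspective, here's a quick back-of-envelope calculation; the numbers and sensor rates are my own rough assumptions, purely illustrative.

```python
# How far does a car travel during a 100 ms decision window?
# Rough arithmetic only; the sensor rates below are assumed, not measured.
def metres_travelled(speed_kmh: float, window_ms: float = 100.0) -> float:
    return (speed_kmh / 3.6) * (window_ms / 1000.0)

for speed in (30, 60, 100):
    print(f"{speed} km/h -> {metres_travelled(speed):.1f} m in 100 ms")
# 30 km/h -> 0.8 m, 60 km/h -> 1.7 m, 100 km/h -> 2.8 m

# Assuming a hypothetical 10 Hz lidar and 30 Hz camera pipeline, a 100 ms
# window contains roughly one lidar sweep and three camera frames:
# very little data on which to weigh anyone's life against anyone else's.
```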



52 minutes ago, mr moose said:

Ethical conundrums cannot be resolved to a single solution because the conditions across humanity vary too much.

Hence why context is important. And why I've been putting plurals in parentheses all the time when talking about "most rationally-justified answer(s)". It is entirely possible for different solutions to be most applicable in different contexts. I agree with you on this, even though I have to admit that we don't yet know for certain whether or not ethical conundrums can be resolved to a single solution. The assumption of the negative should be seen as a precaution.

52 minutes ago, mr moose said:

You are operating on the premise that majority rule is good enough for ethical conundrums and/or that there is a rational method to resolve ethical questions.

I don't subscribe to the notion that majority rule is good enough.

I do think that rational thinking can be employed to resolve ethical questions for specific contexts. We don't know that it can't, and it's frankly the only tool we have to make sense of things and obtain knowledge about anything. It's as good a tool as any to solve problems that indeed need solving. It's good enough to know facts about the world, and it's good enough to direct our methods for understanding the world. I don't think it's too far-fetched to work under the presumption that rationality might be good enough to figure out what should be done and what shouldn't be done by a given ethical agent in a given context.
 

52 minutes ago, mr moose said:

Such philosophical and ethical questions have been posed for centuries and there is yet to be a rational outcome.

There are actually indeed rational outcomes, and there have been for at least a couple of millennia. The thing is that most of the ancient ones are not contextually applicable or justifiable anymore, and we rarely have a single most rationally-justified framework, only a bunch of sufficient ones. There have been "most rationally-justified" frameworks in certain historical contexts, but they always worked on premises that turned out to be factually erroneous, and are thus inapplicable now that we know those premises were false. The problem is more often with the factual basis, because the reason and logic part has actually been pretty good for the last 2400 years at least.

Ethical theories don't have to be universal, eternal grand unifying theories (though it'd be great if that were possible); they just have to be most appropriate (or equally most appropriate) for the specific context until said context sufficiently changes.

52 minutes ago, mr moose said:

  The end result is people do disagree (and always will) on what is the most ethical choice

I don't know. I can't predict the future. But apparently you can. ;)

Assumptions are fine as a methodological precaution, but they can't be treated as knowledge.

 

52 minutes ago, mr moose said:

even if it was 80/20 in favor of a particular decision, you still cannot argue that the 20 are wrong because they have a different ethical standing, nor can you force them to abide by such a ruling if it endangers anyone's life.

Well, I can certainly disagree with that. It is entirely possible that people can be wrong (regardless of whether it's the majority or minority). And people can, in fact, make flat-out erroneous or invalid judgements; but usually they aren't completely wrong, and rarely completely invalid, to a varying degree. The point is to strive to be less wrong.

The thing with judgements and reasonings is that their justification can be examined and scrutinized. Reasoning, when subjected to scrutiny and examination, allows us to differentiate arguments on the basis of 1) their validity and 2) their factual truth. It is rarely clearly or explicitly the case, but in principle, some are more wrong than others, and some reasonings are less sound than others. It isn't necessarily impossible to differentiate things, it's just arduous.

Also, yeah, it's entirely possible for opposed reasonings to be both equally valid and equally-based on equally-true premises. But that's extremely rare, and also very arduous to differentiate.

Finally, this has nothing to do with the popularity of an opinion or position, nor does it have anything to do with majority rule. The principles of Logic don't take sides.


1 hour ago, GabD said:

Hence why context is important. And why I've been putting plurals in parentheses all the time when talking about "most rationally-justified answer(s)". It is entirely possible for different solutions to be most applicable in different contexts. I agree with you on this, even though I have to admit that we don't yet know for certain whether or not ethical conundrums can be resolved to a single solution. The assumption of the negative should be seen as a precaution.

I don't subscribe to the notion that majority rule is good enough.

 

Then you would agree that if there are multiple answers in the context of an AI that decides between life and death, you can't implement said AI, because no one answer can trump another?

 

1 hour ago, GabD said:


I do think that rational thinking can be employed to resolve ethical questions for specific contexts. We don't know that it can't, and it's frankly the only tool we have to make sense of things and obtain knowledge about anything. It's as good a tool as any to solve problems that indeed need solving. It's good enough to know facts about the world, and it's good enough to direct our methods for understanding the world. I don't think it's too far-fetched to work under the presumption that rationality might be good enough to figure out what should be done and what shouldn't be done by a given ethical agent in a given context.
 

Except what one person considers to be rationally ethical another may not.  It is self-evident in all debates that centre around ethical quandaries that there is always a difference in opinion on what constitutes an ethical resolution; therefore, how people rationalize what constitutes "ethical" varies.

 

1 hour ago, GabD said:

There are actually indeed rational outcomes, and there have been for at least a couple of millennia. The thing is that most of the ancient ones are not contextually applicable or justifiable anymore, and we rarely have a single most rationally-justified framework, only a bunch of sufficient ones. There have been "most rationally-justified" frameworks in certain historical contexts, but they always worked on premises that turned out to be factually erroneous, and are thus inapplicable now that we know those premises were false. The problem is more often with the factual basis, because the reason and logic part has actually been pretty good for the last 2400 years at least.

 

Can you show me, using reason and logic, the "rational outcome" in this thread where there are opposing views?   Maybe you can show me the rational resolution to the trolley problem?  Because as far as I can tell it has not been resolved; it is still being debated.  I posted a good video earlier in the thread that demonstrates this.

 

1 hour ago, GabD said:



Ethical theories don't have to be universal, eternal grand unifying theories (though it'd be great if that were possible); they just have to be most appropriate (or equally most appropriate) for the specific context until said context sufficiently changes.

I don't know. I can't predict the future. But apparently you can. ;)

They do if you are going to create a device that uses those theories to decide who lives and dies in specific events.   Unless you have unanimous support for such a protocol or device to use said theory, then you must have complete confidence in the outcome being acceptable to all people.   As I said right back at the start, it doesn't matter what the outcome is; there will always be people who don't think it was the right decision.   It's not about predicting the future, it's about looking at the obvious complexities of humanity and accepting the truths about it.  The one simple truth is that everyone has a different approach to resolving ethical dilemmas.  Therefore acceptable outcomes will be different; there is no way to claim moral superiority when such things transcend rational conditions.

 

1 hour ago, GabD said:


Assumptions are fine as a methodological precaution, but they can't be treated as knowledge.

 

Well, I can certainly disagree with that. It is entirely possible that people can be wrong (regardless of whether it's the majority or minority). And people can, in fact, make flat-out erroneous or invalid judgements; but usually they aren't completely wrong, and rarely completely invalid, to a varying degree. The point is to strive to be less wrong.

Being able to be wrong and actually being wrong are two different things.  When it comes to moral/ethical judgments, what method do you propose that can determine right/wrong?

1 hour ago, GabD said:

 


The thing with judgements and reasonings is that their justification can be examined and scrutinized. Reasoning, when subjected to scrutiny and examination, allows us to differentiate arguments on the basis of 1) their validity and 2) their factual truth. It is rarely clearly or explicitly the case, but in principle, some are more wrong than others, and some reasonings are less sound than others. It isn't necessarily impossible to differentiate things, it's just arduous.

Also, yeah, it's entirely possible for opposed reasonings to be both equally valid and equally-based on equally-true premises. But that's extremely rare, and also very arduous to differentiate.

Finally, this has nothing to do with the popularity of an opinion or position, nor does it have anything to do with majority rule. The principles of Logic don't take sides.

 

The thing is, the principles of logic don't even apply.  How another logically determines something to be ethical does not apply to me; in this context ethics cannot be reasoned away.  This is why morals are heavily debated, this is why we have debates regarding laws (as most laws are founded on morals), and this is why people feel oppressed in certain countries while others do not.  The logic each individual uses to define their moral acceptance of such conditions is different.   Some people ascribe to a social moral while others ascribe to a capitalist one.  Some people ascribe value to quantity while others ascribe value to quality.  These foundations of ethical preference cannot be logically dismissed.



On 02/03/2018 at 7:22 PM, mr moose said:

Then you would agree that if there are multiple answers in the context of an AI that decides between life and death, you can't implement said AI, because no one answer can trump another?

All right, let's be clear here that we're talking about something that has to be context-sensitive. We don't know if there really are multiple best answers (and not merely "good" answers), but for the purposes of this, let's assume from the start that this is the case.
Fortunately, multiple answers don't necessarily imply multiple answers in the same identical context. Change the context in any way, and the appropriate answer may change (or it may not, but there's a possibility depending on the difference in context). If you build your system based on taking into account as many contextual parameters as possible (which a machine can do much better than we do, and will do increasingly better with continued improvement), the chances of getting conflicting answers are lowered. Even if we assumed that they couldn't be eliminated, these chances would still be better than with a human behind the wheel.

Also, it is entirely possible to design, create and use practical systems adapted to context, in such a way as to respect the differing values and priorities of different communities. The nice thing with AI-driven cars is that they have an input about where they are, and could in theory switch to a different set of values if/when entering a community that hypothetically has a drastically different set of values than the previous one.
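As a purely hypothetical sketch of what that location-based switching could look like (the region names, value weights and lookup are all invented; this is not a description of any real car's programming):

```python
# Hypothetical sketch: the car selects a community-specific "ethics profile"
# (a set of value weights) based on the region it is currently driving in.
# All names and numbers are invented for illustration only.
ETHICS_PROFILES = {
    "region_a": {"occupant_safety": 1.0, "pedestrian_safety": 1.0, "property_damage": 0.1},
    "region_b": {"occupant_safety": 0.8, "pedestrian_safety": 1.2, "property_damage": 0.1},
}
DEFAULT_PROFILE = {"occupant_safety": 1.0, "pedestrian_safety": 1.0, "property_damage": 0.1}

def profile_for(region: str) -> dict:
    # Fall back to a default profile when a region has no agreed-upon set of values.
    return ETHICS_PROFILES.get(region, DEFAULT_PROFILE)

# e.g. when the (hypothetical) navigation stack reports crossing into "region_b":
active_weights = profile_for("region_b")
print(active_weights)
```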
 

On 02/03/2018 at 7:22 PM, mr moose said:

Except what one person considers to be rationally ethical another may not.

Except the whole point is to get rid of personal opinions and use our heads instead. We're talking about justification and argumentation. Opinions don't matter.
When you can indeed provide a sound justification for a given belief or opinion, the interlocutor ought to at least consider them worthy of reflecting on. It's the justification that matters. Sometimes, people can even recognize the value of an argument, even if they don't agree with it, and perhaps at least tolerate its neighboring existence.

Assuming that the "everyone" we're talking about 1) don't have Antisocial Personality Disorder, and 2) fully understand the principles of logic, then there are generally three options : either A) something is rationally ethical for everyone, or B) it can be tolerated as an ethical alternative by everyone, or C) it's not rationally ethical.

From my experience living in multicultural communities with people from extremely varied backgrounds, most people generally can come to agree on the things that they can tolerate, and thus that they can have in common in their respective ethical frameworks. But they almost never initially agree on these things. There has to be a dialogue, an exchange that serves as a way to examine and scrutinize the other's discourse. People can be convinced, or at least convinced to tolerate.

And what is tolerated by people of a given community can be tolerated by AI designed to function within that community.

On 02/03/2018 at 7:22 PM, mr moose said:

It is self-evident in all debates that centre around ethical quandaries that there is always a difference in opinion on what constitutes an ethical resolution; therefore, how people rationalize what constitutes "ethical" varies.

It isn't self-evident. And there aren't always differences of opinion. They are just very common because different people are in different contexts. But generally there are actually more similarities than differences.
For example, very few people would argue that an ethical kill while hunting involves skinning the prey alive, and the justifications provided by those who would argue this are generally bullshit and/or questionable from a mental health perspective. Now, most real cases of disagreement aren't as clear-cut as this exaggerated example, but the takeaway remains roughly the same.

Another important point is that relativist assumptions serve no practical purpose and bring us nowhere closer to a position of knowledge. If any given mutually-exclusive opinions were necessarily equally valid and true, then the opinion that they aren't is also equally valid and true... Relativism inherently implies the truth of its own falsification, and is therefore untenable.
Besides, we'd also have to throw science as a whole out of the window if we were to defend relativism. There are plenty of reasons for not doing it.

In reality, what one person considers to be factually true another also may not; that doesn't mean that both are equally correct. What one person considers to be logically valid, another also may not; that doesn't mean that both are equally correct. It is a fact that people's opinions can be (and often are) incorrect. We shouldn't go out of our way to adapt our theories so as to not exclude erroneous beliefs.
The point isn't whether or not a conclusion is in line with people's personal opinions, or whether those opinions have intrinsic value. They don't. "Not until they are brought before the tribunal of reason", as an old German would say.

On 02/03/2018 at 7:22 PM, mr moose said:

Can you show me, using reason and logic, the "rational outcome" in this thread where there are opposing views?   Maybe you can show me the rational resolution to the trolley problem?  Because as far as I can tell it has not been resolved; it is still being debated.  I posted a good video earlier in the thread that demonstrates this.

 

You are right in thinking that the Trolley Problem has not been resolved, especially not in this thread. But I think you are wrong in thinking that this would somehow have been the point of this thread, or even a reasonable expectation for this thread.

Either way, arriving at an appropriate solution would mostly be done by convincing people. Which is a lot of work, but possible. By that I mean that it's measured in years. Humans are malleable in the long term, but stubborn in the short term.

As for my opinion (and it really is just an opinion so far), it is that, in the mean time, communities should decide what non-ideal outcomes they prefer within the confines of their own community. I dislike that this does resort to majority rule in the case of communities that prefer to resort to this rule, but hopefully we'll work out (a) better alternative(s). Deciding on a policy for this at the community level seems more productive and relevant than making it an individual choice. There are individuals who aren't in a condition to be able to distinguish "better" from "worse", and whose choices would be made with too little concern for others.

IMO, clarifying the sets of values that different communities give themselves would also be a good way to start making more sense of this. Not in a way that would encourage people to move to a different community if they have different values, but rather in a way that would encourage dialogue and exchange to prevent the tyranny of the majority.

On 02/03/2018 at 7:22 PM, mr moose said:

They do if you are going to create a device that uses those theories to decide who lives and dies in specific events.   Unless you have unanimous support for such a protocol or device to use said theory, then you must have complete confidence in the outcome being acceptable to all people.

For this part, I completely agree with you. Hence, community-level acceptance is the first goal to aim for.
Universal acceptance, we'll get there if possible/necessary. I don't think it's necessary. You don't think it's possible.

I don't even think it's an immediate problem, unlike community-level acceptance. Because cars can already have location-restricted programming.
 

On 02/03/2018 at 7:22 PM, mr moose said:

such things transcend rational conditions

I get the fact that individual humans in general aren't very rational. And that we do have biases. Still, I think you have it backwards, that it is rational conditions which transcend subjectivity and bias.
It is also a fact that humans are capable of using reason. We are also capable of improving in our use of reason, to use it better and more often.
I don't care much if people still disagree when using reason, as long as the disagreement is between rational arguments. A rational argument, even one that we personally don't want to agree with, is still pretty much always preferable to unjustified beliefs.

On 02/03/2018 at 7:22 PM, mr moose said:

Be able to be wrong and actually being wrong are two different things.

Indeed. 

On 02/03/2018 at 7:22 PM, mr moose said:

When it comes to moral/ethical judgments what method do you propose that can determine right/wrong?

As a humble undergrad student, I don't feel like I'm necessarily right about this, so I'm not confident in calling this more than mere opinion, but here goes:
(N.B.: My answer here will probably be very superficial, and filled with holes due to not having a lot of time to dedicate to refining forum posts. So just bear with me unless it stops making sense.)


In any case, we first have to define what we mean by "right" and "wrong":

- Being right is having a valid belief that has an optimal justification compared to that of all other available alternative beliefs, and which is not proven false. (Note: "being right" does not equal "knowing". One can be right by accident, and rightness can be refuted if proven otherwise, such as if one comes up with an even better justification. In that case the new belief will be optimal.) (The ever so important contextualisation comes from the "all other available alternative beliefs")

- Being wrong is having a belief that is either proven false, proven invalid, or proven to have a sub-optimal justification compared to available alternatives. I.e., it's a spectrum, there are degrees in being wrong. Most people are wrong (perhaps all of us are), but some moreso than others. Obviously, we should strive to be less wrong, as much as we can.
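For what it's worth, one rough way to put those two definitions into symbols (my own paraphrase only, with J(b) standing for the strength of the justification for belief b, and A for the set of available alternative beliefs):

```latex
% Informal paraphrase of the two definitions above; J(b) = strength of
% justification for belief b, A = set of available alternative beliefs.
\[
\mathrm{right}(b) \iff \mathrm{valid}(b) \;\land\; \lnot\,\mathrm{provenFalse}(b)
  \;\land\; \forall a \in A : J(b) \ge J(a)
\]
\[
\mathrm{wrong}(b) \iff \mathrm{provenFalse}(b) \;\lor\; \lnot\,\mathrm{valid}(b)
  \;\lor\; \exists a \in A : J(a) > J(b)
\]
```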


As for actual ethical systems, I personally favor a slightly unusual mix of context-sensitive deontology and utilitarianism, in which a utilitarian justification is what ultimately supports any maxim or axiom, while employing maxims and axioms to a great extent due to their usefulness. I honestly don't think that my personal "ideal" ethical system is sufficiently developed for release, if ever; consider it more of a beta. I also don't think it's relevant to this thread, so I won't bother going into more detail for now.

In the mean time, I'm fine with referring people to Peter Singer's utilitarianism, Eliezer Yudkowsky's Bayesian rationalism, etc.

I strongly suggest reading Yudkowsky's stuff (particularly the publications for the "LessWrong" community) for anything related to the specific concept of "rationality" that I may have failed to explain clearly enough in this thread.

On 02/03/2018 at 7:22 PM, mr moose said:

The thing is, the principles of logic don't even apply.

Well, you bothered applying them so far. I'd say the evidence is racking up against you there.

On 02/03/2018 at 7:22 PM, mr moose said:

How another logically determines something to be ethical does not apply to me, in this context ethics cannot be reasoned away.

I guess we could say that about anything; the "does not apply" seems purely arbitrary. How another logically determines that the Earth isn't flat wouldn't apply to me because something makes me refuse to listen to reason? Of course it applies. People can very well not listen to reason, although doing so in such a case means being wrong. And people can very well be wrong regardless of whether or not they realize that they're in the wrong.

On 02/03/2018 at 7:22 PM, mr moose said:

This is why morals are heavily debated, this is why we have debates regarding laws (as most laws are founded on morals)

Well, yeah, debate is how we use reason as critical thinking to differentiate sound arguments from worse ones. Debate is a good thing. Debate is how any discipline that seeks knowledge or know-how can manage to advance beyond initial disagreements.

On 02/03/2018 at 7:22 PM, mr moose said:

this is why people feel oppressed in certain countries while others do not.

I don't understand what you mean by this. In sociology, oppression is a rather clearly-defined concept, and oppression is empirically identifiable when and where it is performed and experienced. That people experience a feeling that they associate with being oppressed doesn't tell us much about whether or not they actually are.
Also, it's never 100% of the members of a society who are going to be oppressed. It is a sociological fact that not everyone can be oppressed, just that a lot of people are or have been.

On 02/03/2018 at 7:22 PM, mr moose said:

The logic each individual uses to define their moral acceptance of such conditions is different.

The fact that people can be contradictory does not mean that the non-contradiction principle should be discarded as a logical law. In order to verify or falsify the laws of logic, one must resort to using logic as their weapon, an act which is self-defeating.
Unless you meant something else by "the logic"?
"Justification", perhaps?
In that case the sentence would make sense. But I probably just misunderstood what you meant by that.

On 02/03/2018 at 7:22 PM, mr moose said:

Some people ascribe to a social moral while others ascribe to a capitalist

Well, capitalism is more of an economic theory, not so much a moral or ethical framework. Most ethicists who defend capitalism are utilitarians, but a large proportion of those who defend anti-capitalist alternatives (of which there are many) are also utilitarians.
And a lot of ethicists who defend capitalism base their arguments in social-focused ethics; capitalism would be socially justified on the basis of its alleged superior benefit for society as a whole compared to that of alternatives (hence why they talk about capitalist societies, with norms and methods of regulation. Capital can't even possibly exist without a structured society). The crux is mostly in whether or not the factual premises are true, because the argumentation itself can be valid.

On 02/03/2018 at 7:22 PM, mr moose said:

These foundations of ethical preference cannot be logically dismissed.

Indeed, of course not. Especially since these preferences and their origins can be logically explained. I did say many times that it's important to identify and account for implicit biases and beliefs which are not held from a rational justification.


4 minutes ago, GabD said:

All right, let's be clear here that we're talking about something that has to be context-sensitive. We don't know if there really are multiple best answers (and not merely "good" answers), but for the purposes of this, let's assume from the start that this is the case.
Fortunately, multiple answers don't necessarily imply multiple answers in the same identical context. Change the context in any way, and the appropriate answer may change (or it may not, but there's a possibility depending on the difference in context). If you build your system based on taking into account as many contextual parameters as possible (which a machine can do much better than we do, and will do increasingly better with continued improvement), the chances of getting conflicting answers are lowered. Even if we assumed that they couldn't be eliminated, these chances would still be better than with a human behind the wheel.

 

Lots of assumptions, but the fact still remains that context means nothing. If you develop AI that has to decide between two people, with one dying, then you are going to have legal issues, because you cannot argue that one death was fair.   Hence why I raised the whole Harambe thing.

 

4 minutes ago, GabD said:

 


Also, it is entirely possible to design, create and use practical systems adapted to context, in such a way as to respect the differing values and priorities of different communities. The nice thing with AI-driven cars is that they have an input about where they are, and could in theory switch to a different set of values if/when entering a community that hypothetically has a drastically different set of values than the previous one.
 

Again, context means nothing if the outcome is not acceptable to some people.

 

4 minutes ago, GabD said:

Except the whole point is to get rid of personal opinions and use our heads instead. We're talking about justification and argumentation. Opinions don't matter.
When you can indeed provide a sound justification for a given belief or opinion, the interlocutor ought to at least consider them worthy of reflecting on. It's the justification that matters. Sometimes, people can even recognize the value of an argument, even if they don't agree with it, and perhaps at least tolerate its neighboring existence.

Opinions matter greatly.  I don't want AI deciding that one person's life is more important than another's; humans can't agree on that, so you can't exactly make a machine do it.

4 minutes ago, GabD said:


Assuming that the "everyone" we're talking about 1) don't have Antisocial Personality Disorder, and 2) fully understand the principles of logic, then there are generally three options : either A) something is rationally ethical for everyone, or B) it can be tolerated as an ethical alternative by everyone, or C) it's not rationally ethical.

And they are all normal attributes of humanity, so who is to say whether they should be considered or not?  I am autistic; should my opinion be ignored, or considered the only opinion worth working into said machine?

 

Or D) it is rationally rejected by some as unethical while rationally accepted by others.  Ignoring the fact that people have different perspectives on ethics doesn't make them go away.

 

4 minutes ago, GabD said:


From my experience living in multicultural communities with people from extremely varied backgrounds, most people generally can come to agree on the things that they can tolerate, and thus that they can have in common in their respective ethical frameworks. But they almost never initially agree on these things. There has to be a dialogue, an exchange that serves as a way to examine and scrutinize the other's discourse. People can be convinced, or at least convinced to tolerate.

Why do you think people can be convinced?  When you successfully convince everyone that Harambe had to be shot because the baby was in danger, or the opposite (it doesn't matter which, seeing as your premise is that people can be convinced), then come back to me.

 

4 minutes ago, GabD said:


And what is tolerated by people of a given community can be tolerated by AI designed to function within that community.

Have you looked at humanity recently? We don't really tolerate anything.

 

4 minutes ago, GabD said:

It isn't self-evident. And there aren't always differences of opinion. They are just very common because different people are in different contexts. But generally there are actually more similarities than differences.

Like it isn't self-evident that this thread shows people with differences of opinion?  Come on, man, philosophers have argued this stuff since they worked out how to communicate.   If they can't agree, then the rest of humanity hasn't got a chance.

 

4 minutes ago, GabD said:


For example, very few people would argue that an ethical kill while hunting involves skinning the prey alive, and the justifications provided by those who would argue this are generally bullshit and/or questionable from a mental health perspective. Now, most real cases of disagreement aren't as clear-cut as this exaggerated example, but the takeaway remains roughly the same.

But we are not talking about extreme cases where there are obviously superior options; we are talking about a device that will literally decide on one human dying over another.

4 minutes ago, GabD said:

 


Another important point is that relativist assumptions serve no practical purpose and bring us nowhere closer to a position of knowledge. If any given mutually-exclusive opinions were necessarily equally valid and true, then the opinion that they aren't is also equally valid and true... Relativism inherently implies the truth of its own falsification, and is therefore untenable.

The exact issue here is that the proposition cannot be resolved to a single truth. It's not just culture and society that cause differences in morality; it's environment, family, genetics, experience, and education.  No matter how you spin it, you will never get people to agree on what constitutes a moral truth.

 

4 minutes ago, GabD said:


Besides, we'd also have to throw science as a whole out of the window if we were to defend relativism. There are plenty of reasons for not doing it.

Why?  Just because philosophy can't be held to the same principles as science (using reason and logic to find truth) doesn't mean science has to be defunct under the same ruling.   There are gaps in understanding.  Wisdom is knowing when you don't have enough information to form a conclusion, rather than assuming the scientific principles can be used to work out anything.

4 minutes ago, GabD said:


In reality, what one person considers to be factually true another also may not; that doesn't mean that both are equally correct. What one person considers to be logically valid, another also may not; that doesn't mean that both are equally correct. It is a fact that people's opinions can be (and often are) incorrect. We shouldn't go out of our way to adapt our theories so as to not exclude erroneous beliefs.

But that's just it: this whole situation is about people; it is about what people consider to be ethical.  How can a machine that makes decisions that some people will consider unethical (because that will happen) be considered OK?   You seem to be hung up on this idea that ethics are either only logical or erroneous.  Again you miss the point entirely, as ethics are very individual; in fact, they are as different as there are differences in people.

 

4 minutes ago, GabD said:

 


The point isn't whether or not a conclusion is in line with people's personal opinions, or whether those opinions have intrinsic value. They don't. "Not until they are brought before the tribunal of reason", as an old German would say.

Nope.  Personal ethics hold great intrinsic value to the individual; to dismiss them as valueless unless they are dissected by other people is wrong, and the beginning of the thought police.

4 minutes ago, GabD said:

You are right in thinking that the Trolley Problem has not been resolved, especially not in this thread. But I think you are wrong in thinking that this would somehow have been the point of this thread, or even a reasonable expectation for this thread.

How would I be wrong? The trolley problem has been an inseparable part of this thread because not only was it raised in the article you linked, but it classically illustrates why you can't have a machine make such decisions.  It is a core part of this thread.

4 minutes ago, GabD said:

 


Either way, arriving at an appropriate solution would mostly be done by convincing people. Which is a lot of work, but possible. By that I mean that it's measured in years. Humans are malleable in the long term, but stubborn in the short term.

As I said earlier, I see no evidence at all in humanity that people can be convinced when they have a moral or ethical ideal.  In fact, the evidence suggests that those are the hardest traits to change in someone, as they tend to dig in further against evidence when confronted.

 

4 minutes ago, GabD said:


As for my opinion (and it really is just an opinion so far), it is that, in the mean time, communities should decide what non-ideal outcomes they prefer within the confines of their own community. I dislike that this does resort to majority rule in the case of communities that prefer to resort to this rule, but hopefully we'll work out (a) better alternative(s). Deciding on a policy for this at the community level seems more productive and relevant than making it an individual choice. There are individuals who aren't in a condition to be able to distinguish "better" from "worse", and whose choices would be made with too little concern for others.

And that is why it can't be done with something that will ultimately one day decide between two humans.   Unless you have unanimous agreement on how to tackle the trolley problem, all you will have is a machine that some members of society don't want being forced upon them.

 

4 minutes ago, GabD said:

 


IMO, clarifying the sets of values that different communities give themselves would also be a good way to start making more sense of this. Not in a way that would encourage people to move to a different community if they have different values, but rather in a way that would encourage dialogue and exchange to prevent the tyranny of the majority.

Except that even within communities the values and ethics of people vary greatly.  Even within families there are dichotomies that tear at the heart of established moral beliefs.

 

4 minutes ago, GabD said:

For this part, I completely agree with you. Hence, community-level acceptance is the first goal to aim for.
Universal acceptance, we'll get there if possible/necessary. I don't think it's necessary. You don't think it's possible.

I do think it's necessary, and I think it's impossible. Again, just look at humanity.  The gay debate?  Gun laws?  Republican/Democrat?  Islam/Christian?  Nuclear/Greenpeace?  You name it, there are ideals, morals and ethical conflicts, and some of those debates actually have hard facts underpinning them.  People just don't change.

 

4 minutes ago, GabD said:


I don't even think it's an immediate problem, unlike community-level acceptance. Because cars can already have location-restricted programming.
 

But if it isn't addressed it will become a problem.

 

4 minutes ago, GabD said:

I get the fact that individual humans in general aren't very rational. And that we do have biases. Still, I think you have it backwards, that it is rational conditions which transcend subjectivity and bias.

Just not when it comes to morals.  Two people with identical rational skill sets and education can have radically different opinions on what is ethical, merely because morals tend toward the emotional.

 

4 minutes ago, GabD said:


It is also a fact that humans are capable of using reason. We are also capable of improving in our use of reason, to use it better and more often.

That still doesn't mean you can change a person's ethical position with it.

4 minutes ago, GabD said:


I don't care much if people still disagree when using reason, as long as the disagreement is between rational arguments. A rational argument, even one that we personally don't want to agree with, is still pretty much always preferable to unjustified beliefs.

So what is this device going to do when two rational arguments oppose each other?

4 minutes ago, GabD said:

 

As a humble undergrad student, I don't feel like I'm necessarily right about this, so I'm not confident in calling this more than mere opinion, but here goes:
(N.B.: My answer here will probably be very superficial, and filled with holes due to not having a lot of time to dedicate to refining forum posts. So just bear with me unless it stops making sense.)


In any case, we first have to define what we mean by "right" and "wrong":

- Being right is having a valid belief that has an optimal justification compared to that of all other available alternative beliefs, and which is not proven false. (Note: "being right" does not equal "knowing". One can be right by accident, and rightness can be refuted if proven otherwise, such as if one comes up with an even better justification. In that case the new belief will be optimal.) (The ever so important contextualisation comes from the "all other available alternative beliefs")

- Being wrong is having a belief that is either proven false, proven invalid, or proven to have a sub-optimal justification compared to available alternatives. I.e., it's a spectrum, there are degrees in being wrong. Most people are wrong (perhaps all of us are), but some moreso than others. Obviously, we should strive to be less wrong, as much as we can.


As for actual ethical systems, I personally favor a slightly unusual mix of context-sensitive deontology and utilitarianism, in which a utilitarian justification is what ultimately supports any maxim or axiom, while employing maxims and axioms to a great extent due to their usefulness. I honestly don't think that my personal "ideal" ethical system is sufficiently developed for release, if ever; consider it more of a beta. I also don't think it's relevant to this thread, so I won't bother going into more detail for now.

Yes it is, it very much is, because the whole issue is deciding, when two opposing ethical positions clash, which one is "right" or "better". If you don't feel your ability is sufficiently developed to consider this, then upon what grounds do you consider that you have sufficient understanding to say the opposite?   We all have to decide what is right and wrong; some will be demonstrably correct while others will have to take it on faith.  However, this highlights why we cannot discount the fact that situations will arise (like the trolley) that will not be resolved rationally or by unanimous understanding.

 

4 minutes ago, GabD said:

 

Well, you bothered applying them so far. I'd say the evidence is racking up against you there.

Have I? Not sure about that.

4 minutes ago, GabD said:

I guess we could say that about anything, the "does not apply" seems purely arbitrary. How another logically determines that the Earth isn't flat wouldn't apply to me because something makes me refuse to listen to reason? Of course it applies. People can very well not listen to reason, although doing so in such a case means being wrong. And people can very well be wrong regardless of whether or not they realize that they're in the wrong.

Again you are relying on the ability for an ethical perspective to be demonstrably wrong, or at the very least actually able to be wrong. The problem is ethics exist outside of evidential right and wrong, which is why I asked before how you would determine what constitutes a right/wrong moral.

 

4 minutes ago, GabD said:

Well, yeah, debate is how we use reason as critical thinking to differentiate sound arguments from worse ones. Debate is a good thing. Debate is how any discipline that seeks knowledge or know-how can manage to advance beyond initial disagreements.

And that's why these debates still happen: people never agree.

4 minutes ago, GabD said:

I don't understand what you mean by this. In sociology, oppression is a rather clearly-defined concept, and oppression is empirically identifiable when and where it is performed and experienced. That people experience a feeling that they associate with being oppressed doesn't tell us much about whether or not they actually are.

What I mean is, you can find people from communist Russia who said they preferred it, that the socialist lifestyle was ethical to them, whilst you can also find people who defected because they felt the communist ideal to be ethically abhorrent. Two very opposing ethical perspectives from the one community. Both are rational, evidential, and neither is wrong.

 

4 minutes ago, GabD said:


Also, it's never 100% of the members of a society who are going to be oppressed. It is a sociological fact that not everyone can be oppressed, just that a lot of people are or have been.

Which does not change what I said above.

4 minutes ago, GabD said:

The fact that people can be contradictory does not mean that the non-contradiction principle should be discarded as a logical law. In order to verify or falsify the laws of logic, one must resort to using logic as their weapon, an act which is self-defeating.

 

It should be if it unfairly affects the lives of those who are innocent.

 

4 minutes ago, GabD said:

 


Unless you meant something else by "the logic"?
"Justification", perhaps?

 

 

Nope, logic is the exact word that I meant, because logical conclusions can be different for different people when it comes to morals, as my example from communism exemplifies. Both parties are using logic and evidence to conclude their ethical position on communism.

4 minutes ago, GabD said:


In that case the sentence would make sense. But I probably just misunderstood what you meant by that.

Well, capitalism is more of an economic theory, not so much a moral or ethical framework. Most ethicists who defend capitalism are utilitarians, but a large proportion of those who defend anti-capitalist alternatives (of which there are many) are also utilitarians.

 

Actually capitalism is born of the same human traits that socialism is.   The outcomes are just different, which is what I have been trying to say all along.

 

4 minutes ago, GabD said:


And a lot of ethicists who defend capitalism base their arguments in social-focused ethics; capitalism would be socially-justified on the basis of its alleged superior benefit for society as a whole compared to that of alternatives (hence why they talk about capitalist societies, with norms and methods of regulation. Capital can't even possibly exist without a structured society). The crux is mostly in whether or not the factual premises are true, because the argumentation itself can be valid.

 

A lot of psychologists simply explain why human traits tend toward that type of society. We can argue whether there is a factual premise for it till the cows come home, but at the end of the day it is still an ethical/moral mindset that drives toward one or the other.

4 minutes ago, GabD said:

Indeed, of course not. Especially since these preferences and their origins can be logically explained. I did say many times that it's important to identify and account for implicit biases and beliefs which are not held from a rational justification.

So when a computer has to decide between a pregnant 24-year-old and a virile 16-year-old, which beliefs/biases will it account for?

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  

Link to comment
Share on other sites

Link to post
Share on other sites

At the end of the day, this has to be done. Cars will have to be programmed to handle situations where it might need to decide which one lives and which one dies. Not everyone will agree with the decisions, but they have to be made nonetheless. 

Link to comment
Share on other sites

Link to post
Share on other sites

34 minutes ago, mr moose said:

Why do you think people can be convinced?

Because thinking that people can't be convinced is utterly and completely pointless, regardless of whether or not it would be true.

Either a) people can be convinced and thus the relevant problems can be solved, or b) they can't and said problems can't be solved.
Only the former leaves room for any potential course of action.

Given that there exist such problems, it is unwise to presume that people cannot be convinced in such a way as to allow the resolution of said problems (a presumption which is empirically false anyway, as evidenced by every single individual who has ever been convinced of something or changed their mind.)

Besides, if you really thought that people can't be convinced, why are you even bothering to try to convince me of it? It makes no sense.

Link to comment
Share on other sites

Link to post
Share on other sites

36 minutes ago, mr moose said:

Again context means nothing if the outcome is not acceptable to some people.

Yet, "context" is also precisely what makes people agree when they do. People whose context made them share more similarities are more likely to agree with each other on certain things. The good news is that it is possible to understand where other people come from, which facilitates dialogue.

I may have been unclear in my explanation of what I mean by "context". (You have explained it better than me, see a few quotes below this one.)

36 minutes ago, mr moose said:

Opinions matter greatly. I don't want AI deciding one person's life is more important than another's; humans can't agree on that, so you can't exactly make a machine do it.

I could easily answer, "But that's just like, your opinion, man. Tough luck".
You see, now my opinion, which "matters greatly" as you have argued, is precisely that "opinions don't matter". Ergo, they don't matter, since if they did, then my opinion wouldn't matter... And so on.

There is no other way out of this. Opinions don't matter. Period.  It is the only approach to this that makes any logical sense. And arguing against the importance of logical sense would be the equivalent of digging oneself into a bottomless hole.

Now, about the sentence itself. Really, I don't "want" this either. Nobody "wants" to have machines making ethical judgements. There is a problem, and it needs fixing, that is all. We're going to have to make them make ethical judgements, because we have the rare opportunity to have them cause less damage than they otherwise could, if things were to go wrong. This is literally the point of the analogy with the trolley problem.

37 minutes ago, mr moose said:

But if it isn't addressed it will become a problem.

A problem which you argue as unsolvable, by the way. ;)
Because, if I summarize properly: 1) opinions can't be changed, 2) people cannot see reason, and 3) all opinions are equally valid.

1 hour ago, mr moose said:

I am autistic, so should my opinion be ignored or considered the only opinion worth working into said machine?

Autism is irrelevant in this. Also, I have been diagnosed with Asperger's syndrome, if that is somehow relevant.

 

I mentioned Antisocial Personality Disorder (what was formerly mislabeled as "sociopathy") specifically because it can handicap a person's capacity for moral judgement.

1 hour ago, mr moose said:

But we are not talking about extreme cases where there are obviously superior options, we are talking about a device that will literally decide one human dying over another.

Well, yeah, outside of extreme made-up examples, there rarely are obviously superior options. You have to look very closely to see a distinction when there is one, but that doesn't mean there never is any.

1 hour ago, mr moose said:

It's not just culture and society that causes difference in morality, it's environment, family, genetics, experience, education.

This is precisely what I mean by "context". This is also what causes similarities in morality.

The thing with differences is that they can be communicated and understood. It's not impossible, it has been done before. Maybe some people will never be capable of that, but some definitely are.

Coming to an understanding is possible. And necessary in many cases.

1 hour ago, mr moose said:

Except that even within communities the values and ethics of people vary greatly.  Even within families there are dichotomies that tear at the heart of established moral beliefs.

They still tend to vary less than among the whole of a country, or the whole of humanity.
We're trying for an optimal solution. "Optimal" does not mean "ideal", it's merely as good as it can be.

Also, my notion of an actual community doesn't get much bigger than about 60000 people. It is closer to municipalities.

1 hour ago, mr moose said:

I do think it's necessary and I think it's impossible. Again, just look at humanity. Gay debate? Gun law? Republican/Democrat? Islam/Christian? Nuclear/Greenpeace? You name it, there are ideals, morals and ethical conflicts, and some of those debates actually have hard facts underpinning them. People just don't change.

Studying the history of social movements would show you people did and do change, just slower than you want them to. It takes sufficient exposure to the required factors for change (which are indeed slightly different for everybody), and that takes time and a lot of effort (though ironically not effort on the part of the person changing; the context/environment matters at least as much as the self). People change constantly whether they want to or not.

1 hour ago, mr moose said:

Just not when it comes to morals. Two people with identical rational skill sets and education can have radically different opinions on what is ethical merely because morals tend toward the emotional.

Opinions about morals tend to be emotional (and arguably morals are if you distinguish them from Ethics).
But that's not what we are talking about, which is an academic field of research. I'm not much of a believer in a distinction between morals and ethics, but my mention of "Ethics" should specifically refer to the inter-subjective peer-reviewed methodical work, not the subjective opinions that absolutely anyone can have. I won't explain the difference between subjectivity and inter-subjectivity here, you can google it if you need to.

1 hour ago, mr moose said:
Quote

You are right in thinking that the Trolley Problem has not been resolved, especially not in this thread. But I think you are wrong in thinking that this would somehow have been the point of this thread, or even a reasonable expectation for this thread.

How would I be wrong? The trolley problem has been an inseparable part of this thread because not only was it raised in the article you linked, but it also classically illustrates why you can't have a machine make such decisions. It is a core part of this thread.

I was saying that the point and goal of this thread wasn't SOLVING the trolley problem. You seem to think it was.

I shared a news article on AI research pertaining to self-driving cars and ethical algorithms. Then people got their panties in a bunch because "it's questioning muh moral relativism" and such.

1 hour ago, mr moose said:

Have you looked at humanity recently? We don't tolerate anything really.

Such an exaggeration, and not even for allegorical purposes.

AFAIK, I am tolerating this comment right now. And you cannot stop me. ;)

2 hours ago, mr moose said:

So what is this device going to do when two rational arguments oppose each other?

Two equally rational arguments (cause if they're not of equal worth, there is no question).

Well, I guess one of them would have to be picked by the humans who make the device. But since they are of equal worth (which was the point of that example), the choice wouldn't really bother us so much.

I mean, we could always opt for multitrack drifting, but it seems like a needless waste of intelligent biomass

2 hours ago, mr moose said:
Quote

 

Well, you bothered applying them so far. I'd say the evidence is racking up against you there.

Have I? Not sure about that.

You are making sentences. With meaning. To form arguments. Can't do that without using logical principles.

2 hours ago, mr moose said:

Just because philosophy can't be held to the same principles as science (being reason and logic to find truth) doesn't mean science has to be defunct under the same ruling. There are gaps in understanding. Wisdom is knowing when you don't have enough information to form a conclusion, rather than assuming the scientific principles can be used to work out anything.

Go check out the relation between epistemology and science. Reject the former and the latter goes out with it.
As for the second sentence, yeah, that's true, I agree. I did, however, provide a justification for that assumption, which you can find by scrolling up. If it's still unclear, well, I'm sorry. I have spent more time on this thread than I feel I should have.

2 hours ago, mr moose said:

Again you miss the point entirely, as ethics are very individual; in fact they are as different as there are differences in people.

I'm talking about Ethics as an academic discipline, and you keep bringing it back to individual opinions. Opinions about ethics aren't the same as Ethics, the same way that opinions about biology aren't the same as Biology.

A discipline is something that multiple people collaborate on. It is by definition not individual, it is concurrently elaborated by many disciples.

People who think that evolution isn't a thing sure have a right to be wrong, they just don't have a right to claim that their personal uninformed opinion is worth as much as the informed opinion of a community of experts.

Believe it or not, there actually tends to be a bit less disagreement among a community of ethicists than among a community of evolutionary biologists.

But everyone insists on the fact that ethics in particular is so fundamentally different from other fields of academic research, such that any person, without even being involved in the field, would necessarily be equally qualified from birth.

2 hours ago, mr moose said:

In fact the evidence suggests that those are the hardest traits to change in someone as they tend to dig further away from evidence when confronted.

Oh, I never said it was easy or simple. I just said it was possible.

It is particularly difficult and time-consuming. But as it turns out, it is necessary in many cases, so yeah. Gotta do it.

2 hours ago, mr moose said:

Yes it is, it very much is.

Well then, look up the two authors I mentioned.
Otherwise, you're asking me to dedicate hours of productive time to explain something in this thread which is unrelated to both my original post and the topic of this thread.
I'm all for debating things on the internet, but I currently do not have the hours required by what you ask for.
 

Link to comment
Share on other sites

Link to post
Share on other sites

There are states in America where you can't even pump your own fuel in case you start a fire, and people think self-driving cars, which could kill a crowd of people with no clear blame, will ever take off? Not likely. This whole shitshow will ram into a wall of bureaucracy sooner or later.

Link to comment
Share on other sites

Link to post
Share on other sites

In layman's terms: Essentially, Car AI is being developed by Ubisoft.

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.

Link to comment
Share on other sites

Link to post
Share on other sites

This is why I'll never buy a self-driving car. And when they are mainstream I'll be getting an F-350 with a huge steel brush guard.

CPU: 6700K Case: Corsair Air 740 CPU Cooler: H110i GTX Storage: 2x250gb SSD 960gb SSD PSU: Corsair 1200watt GPU: EVGA 1080ti FTW3 RAM: 16gb DDR4 

Other Stuffs: Red sleeved cables, White LED lighting 2 noctua fans on cpu cooler and Be Quiet PWM fans on case.

Link to comment
Share on other sites

Link to post
Share on other sites

11 hours ago, LAwLz said:

At the end of the day, this has to be done. Cars will have to be programmed to handle situations where it might need to decide which one lives and which one dies. Not everyone will agree with the decisions, but they have to be made nonetheless. 

Or it won't happen at all, or the driver will still retain personal responsibility for the car. The issue here goes well beyond a moral one and into a legal one. If my wife dies unnecessarily because of an algorithm, is it murder? And whoever programmed it, are they responsible? If the government allows it to happen, are they responsible? EDIT: just to expand upon this, when anyone gets in a car they accept responsibility for that car, if they run over an old lady to avoid a bunch of school kids then they can be charged with manslaughter. So if we extend this reasoning to an AI device, who takes responsibility for it? And if they are held accountable, as some must be, then will they release said product?

 

11 hours ago, GabD said:

Because thinking that people can't be convinced is utterly and completely pointless, regardless of whether or not it would be true.

 

You do realise that sentence is self-defeating? If it is true, then thinking it isn't "completely pointless", because it's just accepting reality; and if it's not true, then to date it is still just accepting a very observable reality of people.

 

Quote


Either a) people can be convinced and thus the relevant problems can be solved, or b) they can't and said problems can't be solved.
Only the former leaves room for any potential course of action.

Given that there exist such problems, it is unwise to presume that people cannot be convinced in such a way as to allow the resolution of said problems (a presumption which is empirically false anyway, as evidenced by every single individual who has ever been convinced of something or changed their mind.)

Besides, if you really thought that people can't be convinced, why are you even bothering to try to convince me of it? It makes no sense.

There is no presumption here, it is a fact of reality: people do not agree, will not agree and cannot be convinced of many things, especially pertaining to morals. No matter how many times you say it, humans are not going to start being reasonable or agreeable on such topics.

 

8 hours ago, GabD said:

Yet, "context" is also precisely what makes people agree when they do. People whose context made them share more similarities are more likely to agree with each other on certain things. The good news is that it is possible to understand where other people come from, which facilitates dialogue.

 

Quote

 


I may have been unclear in my explanation of what I mean by "context". (You have explained it better than me, see a few quotes below this one.)

I could easily answer, "But that's just like, your opinion, man. Tough luck".
You see, now my opinion, which "matters greatly" as you have argued, is precisely that "opinions don't matter". Ergo, they don't matter, since if they did, then my opinion wouldn't matter... And so on.

No, if your opinion was tough luck then we'd all have to accept that.   If I was the one saying we must have this AI in cars, and you said, that's not possible because no one can agree on how it should decide, then that would be tough luck for me.  I can't have what I want because your ethical conditioning won't accept it. 

 

Quote



There is no other way out of this. Opinions don't matter. Period.  It is the only approach to this that makes any logical sense. And arguing against the importance of logical sense would be the equivalent of digging oneself into a bottomless hole.

?? We've been over this: ethical opinions matter, and you cannot dismiss someone else's ethical nature because you have a different opinion.

Quote


Now, about the sentence itself. Really, I don't "want" this either. Nobody "wants" to have machines making ethical judgements. There is a problem, and it needs fixing, that is all. We're going to have to make them make ethical judgements, because we have the rare opportunity to have them cause less damage than they otherwise could, if things were to go wrong. This is literally the point of the analogy with the trolley problem.

Then don't have it. If you don't want it and I don't want it and half this thread seems to not want it, why have it? There is no absolute necessity for this. And I find it perplexing that you keep saying we should be implementing this, yet you now agree the trolley problem is very relevant but has no resolution. So you want something that you know has no resolution, that will adversely affect some portion of the population fatally, that half the population has no say in, and that some don't want, yet you propose we just ignore them.

 

 

Quote

A problem which you argue as unsolvable, by the way. ;)

It can be solved by not implementing said AI.   Are you getting  confused as to what the debate is about?

Quote


Because, if I summarize properly: 1) opinions can't be changed, 2) people cannot see reason, and 3) all opinions are equally valid.

Autism is irrelevant in this. Also, I have been diagnosed with Asperger's syndrome, if that is somehow relevant.

You listed some mental conditions as if they had bearing on whose opinion should be considered; that is why I asked if being autistic meant my ethical standing was somehow less relevant. Now you are ignoring the ethical nature of the debate. Moral opinions cannot be changed, moral reasoning is not the same as cold logic, and ethical opinions are equal. If moral opinions are not equal, what you are telling me is that mine is less important than yours, yet you haven't given me a single reason why.

 

Quote

 

 

Well, yeah, outside of extreme made-up examples, there rarely are obviously superior options. You have to look very closely to see a distinction when there is one, but that doesn't mean there never is any.

You have failed to illustrate a mechanism by which these (now minute) details give us a distinction to work with, much less provide sufficient evidence to change someone's ethical standing.

 

Quote

This is precisely what I mean by "context". This is also what causes similarities in morality.

The thing with differences is that they can be communicated and understood. It's not impossible, it has been done before. Maybe some people will never be capable of that, but some definitely are.

Coming to an understanding is possible. And necessary in many cases.

They still tend to vary less than among the whole of a country, or the whole of humanity.
We're trying for an optimal solution. "Optimal" does not mean "ideal", it's merely as good as it can be.

Also, my notion of an actual community doesn't get much bigger than about 60000 people. It is closer to municipalities.

Studying the history of social movements would show you people did and do change, just slower than you want them to. It takes sufficient exposure to the required factors for change (which are indeed slightly different for everybody), and that takes time and a lot of effort (though ironically not effort on the part of the person changing; the context/environment matters at least as much as the self). People change constantly whether they want to or not.

All of this centres around the concept that ethical positions can be changed. You cannot hope that changing some people is enough in the context of a device that will literally choose between two humans; you have to convince everyone. Slow change over generations and thousands of years is not the same as changing one person's ethical position.

As I asked with those suggestions, can you change their ethical positions now? Why is there still debate NOW? Arguing that with enough time and evolution ethical opinions will change does not mean you can change the mind of an individual today.

 

Quote

Opinions about morals tend to be emotional (and arguably morals are if you distinguish them from Ethics).
But that's not what we are talking about, which is an academic field of research. I'm not much of a believer in a distinction between morals and ethics, but my mention of "Ethics" should specifically refer to the inter-subjective peer-reviewed methodical work, not the subjective opinions that absolutely anyone can have. I won't explain the difference between subjectivity and inter-subjectivity here, you can google it if you need to.

What? The article refers to a device used in public which decides between two humans in the event of an accident; how on earth is that not related to the ethical opinions "anyone can have"? That is exactly the problem: it is going to be used in public, therefore the public has a say. Ethics and morals are identical, you can't have an ethical belief that isn't founded on moral principles. My concern here is that you don't truly understand that part of humanity, given that you think there is a sufficient distinction and that one can academically separate morals into acceptable and erroneous.

 

Quote

I was saying that the point and goal of this thread wasn't SOLVING the trolley problem. You seem to think it was.

 

Not at all, I have always said from the beginning that it has no solution and that people will never solve it; this is why such a topic is doomed from the onset.

Quote



I shared a news article on AI research pertaining to self-driving cars and ethical algorithms. Then people got their panties in a bunch because "it's questioning muh moral relativism" and such.

Of course people got active about it, it is a device that will ultimately affect them in a very real way. If people can't express their concerns and have their concerns heard, then what you propose is a dictatorship run by people who think only logic is necessary. You don't get to decide other people's moral opinions, much less decide their value. If anyone tries to do that through a device that could potentially take life, then there are going to be consequences. I am assuming that if said AI was programmed to favor young over old and your 40-year-old mother was killed in favor of a 16-year-old delinquent, you would not have a problem accepting it was for the best? If you answer yes you are either a liar or emotionally stunted.

 

Quote

Such an exaggeration, and not even for allegorical purposes.

AFAIK, I am tolerating this comment right now. And you cannot stop me. ;)

Two equally rational arguments (cause if they're not of equal worth, there is no question).

And as I posted before, where is the tolerance for gun laws, LGBT rights, legalizing weed, road rage and religious freedoms? It was nowhere near an exaggeration. I suppose you'll tell me that Richard Dawkins tolerates religion? Or that Trump tolerates Mexicans?

Quote

 

 

I'm talking about Ethics as an academic discipline, and you keep bringing it back to individual opinions. Opinions about ethics aren't the same as Ethics, the same way that opinions about biology aren't the same as Biology.

 

You can't separate the two in this context. The ethics academics are talking about are the exact ethical principles people experience. It is the average person's ethical position that philosophers research, not some isolated ideal that is irrelevant to the masses.

 

 

Quote


A discipline is something that multiple people collaborate on. It is by definition not individual, it is concurrently elaborated by many disciples.

People who think that evolution isn't a thing sure have a right to be wrong, they just don't have a right to claim that their personal uninformed opinion is worth as much as the informed opinion of a community of experts.

Believe it or not, there actually tends to be a bit less disagreement among a community of ethicists than among a community of evolutionary biologists.

But everyone insists on the fact that ethics in particular is so fundamentally different from other fields of academic research, such that any person, without even being involved in the field, would necessarily be equally qualified from birth.

 

I am getting the very distinct impression you don't have much experience in this field.    

 

 

Quote

Oh, I never said it was easy or simple. I just said it was possible.

It is particularly difficult and time-consuming. But as it turns out, it is necessary in many cases, so yeah. Gotta do it.

Well then, look up the two authors I mentioned.
Otherwise, you're asking me to dedicate hours of productive time to explain something in this thread which is unrelated to both my original post and the topic of this thread.
I'm all for debating things on the internet, but I currently do not have the hours required by what you ask for.
 

 

Disagreeing doesn't mean I am wrong; I can link to many authors, ethicists and philosophers who wholly condemn everything you have said. But that is not the point. The point is to discuss the article, which describes the design of a product that will try to resolve the trolley problem in real life, a problem which hitherto has not been resolved, and people who have the right to be concerned are concerned. You cannot dismiss their concerns because you believe some higher academic understanding trumps ethical reasoning.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  

Link to comment
Share on other sites

Link to post
Share on other sites

44 minutes ago, mr moose said:

-snip-

I am not sure what your stance on this is. You'll have to excuse me but I haven't read this thread or any of your long posts, so maybe I am misinterpreting you.

Are you saying that cars should not be programmed in such a way that they will intervene to minimize damage, even if inaction would result in more severe damage? If that's what your stance is, then I would argue that having self-driving cars out in the world which won't try to minimize the damage they will inevitably cause is completely unacceptable and irresponsible.

Cars should be programmed in such a way that if harm is inevitable, they have a set of priorities to follow. I am not sure what those priorities should look like, and that's where the philosopher comes into play.
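Just to make "a set of priorities" concrete, here is a rough sketch of what that could look like in code. To be clear, this is purely illustrative and not from the article or any real car software: the harm categories, their ordering, the maneuver names and all the numbers are made up for the example.

# Rough sketch only: a priority-ordered "least harm" chooser.
# The harm categories, their ordering and all the numbers are hypothetical.

from typing import Dict, List

# Compare harms in this order: deaths first, then injuries, then property damage.
PRIORITIES = ["predicted_deaths", "predicted_injuries", "property_damage"]

def choose_maneuver(candidates: Dict[str, Dict[str, float]]) -> str:
    """Pick the maneuver whose predicted outcome is least bad,
    comparing the harm categories in priority order (lexicographically)."""
    def harm_key(name: str) -> List[float]:
        outcome = candidates[name]
        return [outcome.get(metric, 0.0) for metric in PRIORITIES]
    return min(candidates, key=harm_key)

# Example: braking is unavailable and the car has three options with guessed outcomes.
options = {
    "swerve_left":  {"predicted_deaths": 0, "predicted_injuries": 2, "property_damage": 1},
    "swerve_right": {"predicted_deaths": 1, "predicted_injuries": 0, "property_damage": 0},
    "stay_in_lane": {"predicted_deaths": 0, "predicted_injuries": 3, "property_damage": 0},
}

print(choose_maneuver(options))  # -> "swerve_left" under this particular ordering

Everything hard is hidden in the assumptions: that the car can actually predict those outcomes, and that everyone accepts deaths outrank injuries outrank property damage, which is exactly the part people in this thread can't agree on.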

 

50 minutes ago, mr moose said:

If my wife dies unnecessarily because of an algorithm, is it murder? And whoever programmed it, are they responsible?

We are in situations like that every day already.

There have been plenty of civilian casualties caused by software, and as far as I know it has never resulted in more than light slaps on the wrists of the companies responsible. Granted, those situations have had a major difference, and that's that they were accidents caused by bugs (such as that radiation treatment machine that gave too high a dosage, or aircraft engines failing), but my point is that completely relying on a computer for your survival is something we do today without even thinking of it. Your car? Chances are it has computers controlling the steering, fuel injection and other vital parts which could kill you if they malfunctioned.

 

 

55 minutes ago, mr moose said:

just to expand upon this, when anyone gets in a car they accept responsibility for that car, if they run over an old lady to avoid a bunch of school kids then they can be charged with manslaughter.

That's not really true though. If a car catches on fire because of engine failure then it is not the driver that would be held responsible.

The world is not that black and white. Remember, cars won't have to decide between killing 1 or 10 people during normal operations. It's only when something goes horribly wrong that it will have to do that. For example, if it gets rear-ended and has to decide whether it should veer right or left because it can't brake. A human will probably act randomly without thinking, which can result in more casualties than necessary. A car can think the decision through and choose the optimal one, which will be different depending on who you're asking.
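To put the "different depending on who you're asking" part in concrete terms, here is a toy sketch (again, all names and numbers are invented by me, not taken from any real system) where the same predicted outcomes produce opposite decisions depending on which weighting you hand the car:

# Toy illustration: the "optimal" maneuver depends entirely on the weights you choose.
# Outcomes and weights are invented for the example.

def expected_harm(outcome: dict, weights: dict) -> float:
    """Weighted sum of the predicted harms for one maneuver."""
    return sum(weights[k] * outcome[k] for k in weights)

outcomes = {
    "veer_left":  {"pedestrians_hit": 1, "occupants_hurt": 0},
    "veer_right": {"pedestrians_hit": 0, "occupants_hurt": 2},
}

# One weighting protects the occupants, the other protects pedestrians.
occupant_first   = {"pedestrians_hit": 1.0, "occupants_hurt": 5.0}
pedestrian_first = {"pedestrians_hit": 5.0, "occupants_hurt": 1.0}

for label, weights in [("occupant-first", occupant_first), ("pedestrian-first", pedestrian_first)]:
    best = min(outcomes, key=lambda m: expected_harm(outcomes[m], weights))
    print(label, "->", best)
# occupant-first   -> veer_left
# pedestrian-first -> veer_right

Same data, opposite answer; choosing the weights is where the philosophers (and the arguments in this thread) come in.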

Link to comment
Share on other sites

Link to post
Share on other sites
