
Philosophers are building ethical algorithms to help control self-driving cars

2 hours ago, LAwLz said:

I am not sure what your stance on this is. You'll have to excuse me but I haven't read this thread or any of your long posts, so maybe I am misinterpreting you.

Are you saying that cars should not be programmed in such a way that they will intervene to minimize damage, even if inaction would result in more severe damage? If that's your stance, then I would argue that putting self-driving cars out in the world that won't try to minimize the damage they will inevitably cause is completely unacceptable and irresponsible.

Cars should be programmed so that if harm is inevitable, they have a set of priorities to follow. I am not sure what those priorities should look like, and that's where the philosophers come into play.

My stance is that when a computer is programmed to decide between two humans, it either has to satisfy all involved parties that it made the right choice, or people should have the right to deny such a device a place on the road.  My main issue in the long posts above is that an ethical position is not something that can be argued as right or wrong; it is deeply individual.

 

2 hours ago, LAwLz said:

We are already in situations like that every day.

There have been plenty of civilian casualties caused by software, and as far as I know it has never resulted in more than a light slap on the wrist for the companies responsible. Granted, those situations have one major difference: they were accidents caused by bugs (such as the radiation treatment machine that delivered overdoses, or aircraft engines failing). But my point is that relying completely on a computer for your survival is something we already do without even thinking about it. Your car? Chances are it has computers controlling the steering, fuel injection and other vital systems which could kill you if they malfunctioned.

 

That's because there is a difference between a fault/mistake and the program actually deciding which human dies.  This issue is purely about moral reasoning.  I have often worried about onboard computers in cars; my father-in-law's Land Rover decided to lock up one of the rear wheels at 80 km/h because of a mismatched sensor.  The company is definitely legally liable in that situation.  Imagine if a sensor fails in an AI car and it decides to plough through 30 people, thinking that was the best outcome?

 

2 hours ago, LAwLz said:

That's not really true though. If a car catches on fire because of engine failure then it is not the driver that would be held responsible.

Again, we are not talking about accidents; we are talking about a computer doing exactly what it was programmed to do and deciding between two human lives.

2 hours ago, LAwLz said:

The world is not that black and white. Remember, cars won't have to decide between killing 1 or 10 people during normal operations. It's only when something goes horribly wrong that they will have to do that. For example, if a car gets rear-ended and has to decide whether to veer right or left because it can't brake. A human will probably act randomly without thinking, which can result in more casualties than necessary. A car can think the decision through and choose the optimal one, which will be different depending on who you're asking.

 

I know the world is not black and white; that is my exact argument. Everyone has a different opinion on what constitutes the best outcome, meaning everyone has a slightly different level of acceptance.  If this device is designed to go on the road and affect my life, then don't I have a right to say how it should prioritize life in the event of an accident?

 

Yes, people are likely to make more mistakes than a computer, but the reality is that we can accept human error when people die.  With this device it will not be human error but someone else's ethical ideals that dictate how the computer decides the outcome.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


5 hours ago, mr moose said:

My stance is that when a computer is programmed to decide between two humans, it either has to satisfy all involved parties that it made the right choice, or people should have the right to deny such a device a place on the road.  My main issue in the long posts above is that an ethical position is not something that can be argued as right or wrong; it is deeply individual.

Why does it have to satisfy all involved parties? Not even a human decision will do that.

While letting people deny such a thing a place on the road might sound good, it's actually a horrible idea. People should not have the right to ban self-driving cars from the road, because at some point human drivers will be more harmful than self-driving cars will be.

Does that mean we might end up in some situations where a car will decide to kill someone? Absolutely, but at some point in the not-too-distant future the decision will be between maybe 1,000 people killed by self-driving cars, or 40,000 people killed by human drivers. I think it is absolutely illogical to say that you would rather have 40,000 people killed just because "I don't like having software decide who lives and who dies". As it is today, it's pure chance who lives and who dies.

 

In the end, what matters are results. Who is to blame is fairly irrelevant when we're talking about potentially saving tens if not hundreds of thousands of lives. Would knowing that your wife was killed by a drunk driver somehow lessen the blow compared to knowing she was killed by the car in order to save three other people? The end result is the same either way.

 

6 hours ago, mr moose said:

That's because there is a difference between a fault/mistake and the program actually deciding which human dies.  This issue is purely about moral reasoning.  I have often worried about onboard computers in cars; my father-in-law's Land Rover decided to lock up one of the rear wheels at 80 km/h because of a mismatched sensor.  The company is definitely legally liable in that situation.  Imagine if a sensor fails in an AI car and it decides to plough through 30 people, thinking that was the best outcome?

Are you sure the company is liable in the mismatched-sensor example? Like I said before, I am not aware of any software issue resulting in more than a light slap on the company's wrist, even in the case of the aircraft that crashed. And again, cars won't have to decide who to kill in normal situations either; it is only when something goes wrong.

I don't see how it is any different from, for example, the military shooting down an airplane if it gets hijacked, which is what they will do.

 

6 hours ago, mr moose said:

Again, we are not talking about accidents; we are talking about a computer doing exactly what it was programmed to do and deciding between two human lives.

A computer will not have to decide between two human lives unless something goes wrong.

We are not programming killing machines here. We're programming cars that will have to have a set of guidelines to follow in case of an emergency.

 

6 hours ago, mr moose said:

I know the world is not black and white; that is my exact argument. Everyone has a different opinion on what constitutes the best outcome, meaning everyone has a slightly different level of acceptance.  If this device is designed to go on the road and affect my life, then don't I have a right to say how it should prioritize life in the event of an accident?

If you ask me, you don't. If you want to kill 39,000 people in order to save 1,000 then I am sorry, but your opinion does not matter. What matters is the safety of society as a whole.

Here are the two inevitable facts.

1) Self-driving cars will be safer, as a whole, sometime in the future. Human error is by far the biggest cause of car accidents, and AI will at some point make fewer mistakes than humans.

2) Accidents will happen even if we have perfect AI. Things like mechanical issues (brakes locking up, for example) will always be a factor, and when something like that happens, a situation might occur where a car has to choose who dies and who lives.

 

So what do you suggest we do?

Just saying "people won't agree" is not a solution, and something has to be done.

 

6 hours ago, mr moose said:

Yes, people are likely to make more mistakes than a computer, but the reality is that we can accept human error when people die.  With this device it will not be human error but someone else's ethical ideals that dictate how the computer decides the outcome.

You're looking at it the wrong way. The death won't be because a car decided one person should die. The death will be caused by whatever circumstance caused the car to need to decide to begin with.

 

Think of it like this.

A self-driving car gets rear-ended by a human driver. The self-driving car is unable to brake and is heading towards a group of people. The car has to decide: will it continue straight ahead into the group of people, or will it avoid the group by steering into a wall? If it chooses the wall, will you really say it was the AI that killed the driver, rather than the car that rear-ended it?

 

My point is that AI cars will avoid deaths as much as reasonably possible, but because of circumstances they will not always have a choice. They will sometimes have to decide who dies. Is that really the car "murdering" people? The way I look at it, it's accidents forcing the car to decide between two bad outcomes. When the car inevitably has to choose between two bad outcomes, I want it to have a proper set of rules everyone can more or less agree with. Not everyone will agree with them, but society as a whole should.
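To make concrete what I mean by a "set of rules", here is a rough sketch of the kind of emergency policy I'm talking about. To be clear, this is purely illustrative: the Outcome fields, the weights and the example numbers are all made up by me, and picking them is exactly the ethical question this thread is arguing about, not something anyone has settled.

# Purely illustrative sketch of an emergency "priority policy" (Python).
# The Outcome fields and the weights below are hypothetical; choosing
# them is the ethical debate, not a solved engineering problem.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_fatalities: float  # estimated deaths if this action is taken
    expected_injuries: float    # estimated non-fatal injuries
    occupant_risk: float        # 0..1 risk to the car's own occupants

def cost(o: Outcome) -> float:
    # Hypothetical weighting: fatalities dominate, injuries and occupant
    # risk act as tie-breakers. Different weights encode different ethical
    # positions (e.g. "protect occupants first" would raise the last weight).
    return 100.0 * o.expected_fatalities + 10.0 * o.expected_injuries + 5.0 * o.occupant_risk

def choose(outcomes: list[Outcome]) -> Outcome:
    # Only ever consulted when something has already gone wrong and
    # every available option is bad.
    return min(outcomes, key=cost)

options = [
    Outcome("continue straight into the group", expected_fatalities=2.0,
            expected_injuries=3.0, occupant_risk=0.1),
    Outcome("swerve into the wall", expected_fatalities=0.4,
            expected_injuries=1.0, occupant_risk=0.9),
]
print(choose(options).description)  # -> "swerve into the wall"

The only point of the sketch is that the weights have to come from somewhere; agreeing on that "somewhere" is precisely what the philosophers in the article are being asked to do.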


Just now, LAwLz said:

Why does it have to satisfy all involved parties? Not even a human decision will do that.

While letting people deny such a thing a place on the road might sound good, it's actually a horrible idea. People should not have the right to ban self-driving cars from the road, because at some point human drivers will be more harmful than self-driving cars will be.

Does that mean we might end up in some situations where a car will decide to kill someone? Absolutely, but at some point in the not-too-distant future the decision will be between maybe 1,000 people killed by self-driving cars, or 40,000 people killed by human drivers. I think it is absolutely illogical to say that you would rather have 40,000 people killed just because "I don't like having software decide who lives and who dies". As it is today, it's pure chance who lives and who dies.

Because that is the nature of ethics and legal liability.  In one instance you have an accident that is beyond the control of a human; in the other you have a preprogrammed outcome that, it can be argued, might not have resulted in that specific death had a human been driving.  That is why laws that centre around ethical conundrums are heavily debated.  A few pages back I linked to a good video on this.

 

Just now, LAwLz said:

In the end, what matters are results. Who is to blame is fairly irrelevant when we're talking about potentially saving tens if not hundreds of thousands of lives. Would knowing that your wife was killed by a drunk driver somehow lessen the blow compared to knowing she was killed by the car in order to save three other people? The end result is the same either way.

Nope.  If that's your argument, then reduce road traffic rather than program cars to choose between humans.

 

Just now, LAwLz said:

Are you sure the company is liable in the mismatched-sensor example? Like I said before, I am not aware of any software issue resulting in more than a light slap on the company's wrist, even in the case of the aircraft that crashed. And again, cars won't have to decide who to kill in normal situations either; it is only when something goes wrong.

Pretty sure.  Failing to install a part, which then caused a death, has been through the courts before:

https://www.healthandsafetyatwork.com/work-equipment/robert-churchyard-jailed-three-and-half-years-after-powered-gate-manslaughter 

 

That's just a mistake, but we are talking about a program that will actually make the decision and carry it out.  There is a reason the trolley problem hasn't been resolved.

 

Just now, LAwLz said:

I don't see how it is any different from, for example, the military shooting down an airplane if it gets hijacked, which is what they will do.

It's probably not different, and to be honest I don't know how I feel about that.

 

Just now, LAwLz said:

A computer will not have to decide between two human lives unless something goes wrong.

So it will, then.  This is the problem: the fact that 99% of the time it will be fine does not change the ethical problem of what happens when something does go wrong.

Just now, LAwLz said:

We are not programming killing machines here. We're programming cars that will have to have a set of guidelines to follow in case of an emergency.

So you won't have an issue if it decides to run your mother over instead of someone else, because it had to choose between one of the two people on the street and the three occupants of the car?  That's a loaded question; you can't really answer it without lying or outing yourself as an emotionless narcissist.

 

Just now, LAwLz said:

If you ask me, you don't. If you want to kill 39,000 people in order to save 1,000 then I am sorry, but your opinion does not matter. What matters is the safety of society as a whole.

Here are the two inevitable facts.

1) Self-driving cars will be safer, as a whole, sometime in the future. Human error is by far the biggest cause of car accidents, and AI will at some point make fewer mistakes than humans.

2) Accidents will happen even if we have perfect AI. Things like mechanical issues (brakes locking up, for example) will always be a factor, and when something like that happens, a situation might occur where a car has to choose who dies and who lives.

Again, the issue is not the numbers; the issue is the ethical question it raises.  An accident is fine because human error is just that, but a programmed outcome is not an error.

 

Just now, LAwLz said:

So what do you suggest we do?

Just saying "people won't agree" is not a solution, and something has to be done.

This sounds familiar. What did you say when I argued that criminals were using encryption, making life worse for everyone, and that something had to be done?

 

At the end of the day, as I have already stated, this AI problem will not go away, but if it is implemented against the will of the people and someone dies, then there is an issue.  Unlike human error, this will be a programmed outcome, and it can be argued that had the driver been human, the death would not have occurred.  It is the trolley problem.

 

 

Just now, LAwLz said:

You're looking at it the wrong way. The death won't be because a car decided one person should die. The death will be caused by whatever circumstance caused the car to need to decide to begin with.

Nope. As I just said, take an accident involving a car with AI where a person dies who, without the AI, would have survived while other people died instead. The AI has involved them without permission, and by current legal standards that is a crime.  That is unethical by many people's reasoning, and that is why there is currently no solution to the trolley problem.

 

Just now, LAwLz said:

Think of it like this.

A self-driving car gets rear-ended by a human driver. The self-driving car is unable to brake and is heading towards a group of people. The car has to decide: will it continue straight ahead into the group of people, or will it avoid the group by steering into a wall? If it chooses the wall, will you really say it was the AI that killed the driver, rather than the car that rear-ended it?

You can list as many examples as you want, but in every example the same thing is happening: the AI is choosing who dies, whereas without it, it is human error.  This makes it an ethical problem and not one of numbers.

 

Just now, LAwLz said:

My point is that AI cars will avoid deaths as much as reasonably possible, but because of circumstances they will not always have a choice. They will sometimes have to decide who dies. Is that really the car "murdering" people? The way I look at it, it's accidents forcing the car to decide between two bad outcomes. When the car inevitably has to choose between two bad outcomes, I want it to have a proper set of rules everyone can more or less agree with. Not everyone will agree with them, but society as a whole should.

 

If a proper set of rules exists that satisfies all road users, then I don't think there will be a problem, but until then the ethical hurdle is still too big to just hop over and ignore.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


20 hours ago, mr moose said:

My stance is that when a computer is programmed to decide between two humans, it either has to satisfy all involved parties that it made the right choice, or people should have the right to deny such a device a place on the road.

You seem to assume that this is an exceptional case of disagreement, when it isn't. By your own argument, it is absolutely impossible, for all time and all contexts, to satisfy all involved parties (and with no regard for the arguments of any parties). However, to be coherent, this would mean that we would have the right to deny the use of anything in existence on the basis of disagreement, which we don't, and for good reason.

Also, human actions certainly don't satisfy all involved parties, yet you seem to have no problem with them being sub-optimal.
Humans already decide between people, whether they know it or not, and they do make decisions with potentially or necessarily lethal consequences. The only reason humans aren't held to the same standard that you wish to hold machines to is that they can't process information as quickly or efficiently as machines can right now. Human error, even when the consequences are generally worse, is seen as less of a problem than the consequences of non-human mechanisms.
As if machines were more ethically responsible than conscious, informed humans, or as if they could be held more accountable.

20 hours ago, mr moose said:

Imagine if a sensor fails in an AI car and it decides to plough through 30 people, thinking that was the best outcome?

The failure rate on human perception and interpretation is a LOT worse than that of even the most cheap-ass sensors on the market. We have fewer solutions for when humans are the defective element.

On 04/03/2018 at 4:16 PM, mr moose said:

You do realise that sentence is self-defeating?  If it is true, then thinking it isn't "completely pointless", because it's just accepting reality; and if it's not true, then to date it is just accepting a very observable reality of people.

No, because if people can't be convinced, then it is impossible for people to accept reality unless that was the initial position they had from the start. Otherwise, that would imply being convinced of it, which you argue is impossible. This is why I said that this notion is pointless.
By your own argument, it follows that it is impossible to "accept" reality; it is only possible to be lucky enough to have an initial belief that happens to be representative of reality at some point.
And you had better hope that reality doesn't change too much, because then you could end up being wrong, since even the changes in reality couldn't convince you that it has changed, because if people cannot be convinced, they cannot ever change beliefs. And if they do change beliefs, that implies a cause for that change, which leaves open the possibility that factors outside of our own mindset can affect our mindset, i.e. that people can be convinced, which you argue is impossible.
I hope you begin to see why I said that this is an untenable position.

As for my sentence being self-defeating, it would only be so if your assumption that people cannot be convinced were true. Yet you haven't provided proof for it, only asserted it as if it were self-evident.
Meanwhile, there is ample proof of the possibility of one's beliefs changing. Linus changed his beliefs about things in at least a couple of his videos. And individuals have definitely changed their beliefs about cigarettes during their lifetimes once it was proven beyond reasonable doubt that cigarettes were actually detrimental to health instead of beneficial, as previously believed. The ethical question "Should people smoke cigarettes?" has largely been resolved; now hardly anybody would answer "yes, you should". The only positive answer that can pass as reasonable comes with the conditional "if you really want to and are aware of the consequences".

On 04/03/2018 at 4:16 PM, mr moose said:

There is no presumption here; it is a fact of reality that people do not agree, will not agree and cannot be convinced of many things, especially pertaining to morals. No matter how many times you say it, humans are not going to start being reasonable or agreeable on such topics.

Again, asserting it as fact is not empirical proof. Your claim is one that requires empirical proof. I am actually also trying to find the proof for you, but I have yet to see anything that would invalidate the proof that individuals have indeed changed their minds in the past, and still do right now, which hints that they can.

I mean, things would be much simpler if you were right. The implication of relativism towards all prescriptive judgements with an ethical connotation would mean that there would be no further need to invest time and effort in science, philosophy or any other academic discipline, since it wouldn't be in any way more justified than simply not doing it. The thing is that ethics applies to a lot more than you seem to think. Scientific research is just one example of things that are justified by the ethical judgement of its value being worth more than the absence of it.

I'm not opposed to what you say because I want to be, I am because I think your assumptions are wrong, which leads you to an incorrect conclusion, which in turn has ramifications that I don't think you would accept either.

On 04/03/2018 at 4:16 PM, mr moose said:

you cannot dismiss someone else's ethical nature because you have a different opinion.

You'll eventually have to accept that there is a difference between "opinion" and "argument". You can dismiss someone else's ethical position (not their nature as an ethical agent) on the basis of their arguments, but not their opinion.
This is what I mean by "opinions don't matter" or "are irrelevant". They're no basis for judgement; only their justification is.
People can be of the same opinion with some having better or worse justification for the same conclusion. In effect, the positions with worse argumentation will be dismissible in favor of those with better argumentation, even though they argue for the same conclusion.

On 04/03/2018 at 4:16 PM, mr moose said:

And I find it perplexing that you keep saying how we should be implementing this yet you now agree the trolley problem is very relevant but has no resolution. 

I agree that it has no agreed-upon resolution yet, not that it would be impossible to agree upon an optimal solution.
There's a difference between "so far" and "for all time". The present lack of an agreed-upon solution does not make it impossible. Arguments are being made and considered. We'll see if they can be optimized, by trying to optimize them and perhaps succeeding. That takes time and effort, which the NSF recognizes and supports.
Specialists, more competent people than you or I, have tackled this subject and found it worthy of further research.
You argue against that judgement, not me.

On 04/03/2018 at 4:16 PM, mr moose said:

So you want something that you know has no resolution, that will adversely affect some portion of the population fatally, that half the population has no say in and that some don't want, yet you propose we just ignore them.

No, because we don't know that it has no resolution; that statement presently does not qualify for the status of knowledge. And no to the second part, because I want the alternative a lot less than I don't want this. Not having self-driving cars currently affects a much larger portion of the population fatally than the adoption of self-driving cars would.
People dying is not desirable and currently happens a lot, without much possibility of, or will for, improvement on the human part. This is a statistical improvement compared to the alternatives. This is the less-wrong option.

On 04/03/2018 at 4:16 PM, mr moose said:

It can be solved by not implementing said AI.   Are you getting  confused as to what the debate is about?

"The problem" that you mentioned and that that my sentence was specifically referring to was that of adopting a specific ethical algorithm on a large scale (rather than just at the level of communities). The solution you propose ends up being the same as not adressing the problem in the first place. i.e., not recognizing it as a problem. I mean, you can argue wether or not it really is a problem, but you can't propose inaction as a solution to something you'd recognize as a problem. If you'd argue that there's nothing we could do about it (that there is no solution), as you seem to tend to do, then you shouldn't talk about it as a problem, but as an unfortunate inevitability.

On 04/03/2018 at 4:16 PM, mr moose said:

And as I posted before, where is the tolerance for gun laws, LGBT rights, legalizing weed, road rage and religious freedoms?

I guess the fact that I live in a part of the world where these specific issues are all much less prevalent than they can be in other parts of the world sort of prevents me from seeing the point you're trying to make. Still, contextual factors determine people, and arguments brought up in discussion are among the contextual factors (since people are frequently exposed to them by hearing and reading). Many things are innate, but opinions and arguments aren't.
In the last five years, a large proportion of Canadians have been convinced to support legalizing weed, mostly through sound arguments for it.
There are other cases around the world that apply to every one of your other examples.

On 04/03/2018 at 4:16 PM, mr moose said:

I can link to many authors, ethicists and philosophers that wholly condemn everything you have said.

By all means, please do. I honestly want to see if people have managed to provide a better justification for moral relativism. I personally can't fathom a valid argument for it, and every one of the seven Ethics professors that I personally know starts the first class of the semester with a thorough debunking of it in order to prevent students from writing nonsense in their term papers. If it turns out that it can be justified in a way that is valid, people ought to learn about it, as it would utterly revolutionize the discipline by invalidating it completely.

On 04/03/2018 at 4:16 PM, mr moose said:

I am getting the very distinct impression you don't have much experience in this field.

Nor did I claim to. Hence why I refer to what experts generally say about it.

On 04/03/2018 at 4:16 PM, mr moose said:

It is the average person's ethical position that philosophers research, not some isolated ideal that is irrelevant to the masses.

That's arguably more what sociologists do. And Ethics doesn't have to be ideal, only optimal. It is also usually about things that are very relevant to "the masses".

On 04/03/2018 at 4:16 PM, mr moose said:

Moral opinions cannot be changed, moral reasoning is not the same as cold logic and ethical opinions are equal.

An entire field of current academic research exists on the basis of these three statements being completely wrong.

Now, as an undergrad student, I don't claim to be the best person to provide those arguments; I merely tried, and I think I did a decent job here.
This field of research has argued for its continued existence and relevance, and successfully so.

But you somehow think that you know the worth of a discipline better than the staggering majority of the specialists who actually have the qualifications to make that judgement.
That you know it better than even the National Science Foundation seems highly dubious to me.

Maybe you should have some doubts about your qualifications in this matter, as I do about mine.
If invalidating Ethics as a worthwhile discipline were so easy, it would likely have been done a very long time ago.

On 04/03/2018 at 4:16 PM, mr moose said:

it is a device that will ultimately affect them, in a very real way

Only in the extremely small chance that they, or people they know about, find themselves in an "AI-must-decide-who-dies" situation. They're still actually much more likely to get run over and killed by a human driver, or to crash their own car and kill themselves, even after self-driving cars have become commonplace.

On 04/03/2018 at 4:16 PM, mr moose said:

Arguing that with enough time and evolution ethical opinions will change does not mean you can change the mind of an individual today.

You are also able to find examples of individuals changing their mind during their own lifetime.
It does not take centuries to convince people. Convincing has necessary and sufficient conditions. They vary, of course, but there is a finite number of them.
I'm not saying that I know how to convince anybody quickly. If I did, I'd be president of the world by now. I just know that convincing happens and is possible.

On 04/03/2018 at 4:16 PM, mr moose said:

you have failed to illustrate a mechanism by which these (now minute) details give us a distinction to work with, much less provide sufficient evidence to change someone's ethical standing.

I don't think so, but it's possible I have failed. Though I hope you realize that the worth and effectiveness of Ethics as a discipline does not rest on my perhaps-incapable shoulders alone. You might want to read up on what actual ethicists have to say about it, and about their methodology. Me, I'm just an undergrad student, and not the best one at that.

My proposition for the mechanisms you asked for is the same as that of any academic discipline: the method of dialogue and argumentation to defend a position in a convincing manner allows the competition and elimination of arguments. This can then be a method for finding solutions that make optimal sense for real problems. This is how academia in general works. If you invalidate this, you invalidate academia in general, including Philosophy, Arts, Technics and Sciences.

And I can guarantee you that changing one's mind does happen in academia too. Some people even change their minds because their own research refutes their previous conclusions. It's not all like that, but it happens.
 

On 04/03/2018 at 4:16 PM, mr moose said:

I am assuming that if said AI were programmed to favor young over old and your 40-year-old mother was killed in favor of a 16-year-old delinquent, you would not have a problem accepting it was for the best?  If you answer yes you are either a liar or emotionally stunted.

Of course I'd be devastated by a traumatic event like that. Just as much as I'd be devastated if my mother were hit by a human driver whose brain made a move to run her over instead of someone else. Immediate reactions are decisions that our nervous systems make depending on the information that they can process.
In both cases, not all possible factors are taken into account; even AI isn't gonna be omniscient in the foreseeable future.
My being emotionally devastated by the death of my mother is arguably not very relevant. Maybe that delinquent kid has more people who care about him than my mother does. And maybe he would have ended up being a better person in the future. I wouldn't know, and neither would the foreseeable AI.
I'd ALSO be pretty fucking sad if a delinquent kid died so my mother could survive. It's still a fucking tragedy. Being emotionally stunted would be "having a total lack of empathy for the life of strangers". I don't believe that the people I know and care about are inherently worth more than strangers. I feel for strangers too.

I also honestly think that deciding solely based on age is a very sub-optimal criterion for this. Obviously, factors such as "chances of surviving being run over" are going to be more important than "how many trips around the sun that person has made". It's a bit disingenuous to jump directly to the most bait-y criteria, which the wording of the article itself doesn't help with.

On 04/03/2018 at 4:16 PM, mr moose said:

If moral opinions are not equal, what you are telling me is that mine is less important than yours, yet you haven't given me a single reason why.

What I argue is that not all arguments are equal. If an opinion is backed by a valid argument, that argument can be scrutinized. If an opinion is not backed by a valid argument, then it's still an opinion, but it's only an opinion, and not worth spending the time to consider it with rigor.
I argue that opinions regarding ethics are irrelevant, and that only the arguments matter for serious consideration. Other things that you might think should matter can be made to matter when framed in the form of a valid argument.

I'm not talking about what people actually take as mattering in their day-to-day life. People (myself and you included) generally don't constantly practice rigorous ethical enquiry in their day-to-day lives. That doesn't mean people can't do it when we do put our minds to it. And when people do that, it has the potential to go beyond mere opinion, when valid arguments are provided.

I haven't judged your opinions to be less important than mine. I have provided arguments which I think are more sound and convincing than yours. Let me be clear: your opinion is backed by arguments that I think could be valid, but whose factual premises are empirically wrong and, in core parts, make your arguments self-defeating.

You haven't provided any empirical proof backing the fundamental fact-statement that people never get convinced (and thus somehow can't be). It only takes one case of convincing to refute your argument. I have provided such a case, and also argued that convincing is possible. And if it is indeed possible, then the rest of your argument falls.

Maybe you would care to continue trying to convince me that people can't be convinced, but at this point I have exhausted what little time I had available for this.


1 hour ago, GabD said:

a massive wall of text.

 

Sorry, I stuck it out as long as I could, but I am not here to write a PhD thesis.

 

1. Morals are individual and hold intrinsic value to said individual.  That is not up for debate, although you will keep trying.

2. Having a device that chooses who lives based on the trolley problem is one that decides based on moral principles, principles that history has shown individuals do not agree on.

3. People will not change their moral principles, no matter how much you think your reasoning is more logical.  I have given plenty of examples, but you keep ignoring them.

 

If you want to prove me wrong, then provide some evidence.  Until then I will just accept the reality of history.

 

Some light reading on why people cannot have their minds changed when it comes to such principles:

 

https://www.scientificamerican.com/article/why-people-fly-from-facts/

https://www.sciencealert.com/researchers-have-figured-out-what-makes-people-reject-science-and-it-s-not-ignorance

https://www.newscientist.com/article/2146124-we-ignore-what-doesnt-fit-with-our-biases-even-if-it-costs-us/

 

People will not change their minds even in the face of strong evidence, so what makes you think a moral principle that has no tangible rationality underpinning it beyond personal experience is going to be easier to change?

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


  • 4 weeks later...
On 05/03/2018 at 10:39 PM, mr moose said:

Sorry, I stuck it out as long as I could, but I am not here to write a PhD thesis.

Alright, I’ll keep this rebuttal short:

- I agree with the research you posted; in fact, I read it a while ago.

- It indeed proves that: 1) biases are a thing, 2) it is overwhelmingly common to be unreasonable due to bias, and 3) biases can be detected, observed, described and understood.

- The research details the observation that “convincing” happens relatively rarely in everyday life under statistically-normal circumstances because the vast majority of humans are rarely rational in their everyday life under statistically-normal circumstances.

- The fact that most people usually aren't rational in their moral judgements doesn't make rational judgement impossible. That's a naturalistic fallacy; you can't base your argument on it.

- But the fact that "“convincing” rarely happens" actually proves my point, in disfavor of yours: it necessarily implies that it does in fact happen sometimes, and thus that it is indeed possible and can be done.

- And if it’s both possible and has been described, it can be understood and can theoretically also be replicated. Ergo, we could know and employ methods to increase the likelihood of people being convinced by facts and reason.

Your own sources back my argument more than yours. ;)


Good that this will happen. Everyone saw the Uber self-driving car mess-up: the driver couldn't seem to decide, lol, they were half asleep. Better an AI watching out than nothing at all.


47 minutes ago, GabD said:

- But the fact that "“convincing” rarely happens" actually proves my point, in disfavor of yours: it necessarily implies that it does in fact happen sometimes, and thus that it is indeed possible and can be done.

- And if it’s both possible and has been described, it can be understood and can theoretically also be replicated. Ergo, we could know and employ methods to increase the likelihood of people being convinced by facts and reason.

Your own sources back my argument more than yours. ;)

 

So you agree it rarely happens, but you also think that proves your point that it can happen on a large scale?  That makes no sense; they are literally two conditions that are at odds with each other.  Once you make it happen on a large scale, it is no longer rare (as the evidence states). You are literally clinging to an ideal that has thus far been shown not to exist, namely a method to increase the likelihood of convincing people they are wrong.

 

 

EDIT: and yes I know, the irony is that I am using evidence to try and convince someone that you can't use evidence to convince someone.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  

