
MIT posts results of a 4-year global study into the morality of driverless vehicles (who should they save in a crash?)

Master Disaster
20 minutes ago, mr moose said:

The point of this is to gather as much information as possible to gauge what the best option actually is. AI transport goes well beyond the realms of any product we have seen before. It would be great if it was as simple as programming it to avoid killing people, but it's not. Because a computer can think and react so much faster than humans, it crosses over from an accident where the results are beyond the control of humans (i.e. not intentional) to a calculated response that results in the potential death of humans. This is something that can't just be done; it is new territory, and unless they check all their boxes the end results could be disastrous.

The best of "what actually is" is to obey the law, period. Feelings have no say in this matter at all.

If the goal actually is to have the fewest casualties possible, then self-driving vehicles should never be allowed on the same pathways as pedestrians or human-driven vehicles.

 

A computer can "think" (actually it can't); it only does what its programming tells it to do, and it reacts based on those algorithms. The self-driving car, in a situation like those, should be solely tasked with stopping the car as fast and as safely as possible.


1 hour ago, mr moose said:
The dictionary begs to differ. All ethics are based on a system of rationalization developed over time.

I'm not sure what you're trying to say there, as the definition you quoted says exactly what I said: that it's a matter of random choice or personal whim. Random choice is still choice; it is chosen by something, and as such is personal whim. But it doesn't necessarily need to be even seemingly random, as it can also be a matter of personal whim. The definition you quoted against my point supports it...

 

And yeah... They're a matter of rationalization. That is exactly what I've been saying; that *is* why they're flawed. Rationalization is not the same thing as rational thought: it's a psychological defence mechanism whereby you cover up confusing or uncomfortable truths with a different, *seemingly* rational answer that you convince yourself of. Rationalization is not truth. It is not logic. It is merely a convenient lie we tell ourselves when the truth is inconvenient.

1 hour ago, mr moose said:

That may be the root of the word, but it isn't how the word is used in modern vernacular; if you want to use outdated definitions then you are going to have lots of trouble communicating. It is also the root of the word arbitration, meaning to settle a dispute by judicial method/hearing.

Yeah, and judicial method *is* the method where the solution to an issue is the result of the whim of a third party: an arbiter or judge. Nothing to disagree with, as that all fits perfectly with the meaning of the other words.

 

But more importantly you can't throw around technical matters like ethics, psychology, sociology, etc and then excuse yourself as using the words incorrectly because that's the "colloquial use". If you're talking about a technical topic, your colloquial use of words is irrelevant, the technical use is what matters and that *is* the technical meaning of arbitrary. That is the technical meaning of arbitrary across multiple fields, not just statistics or psychology. Heck even in CS, arbitrary code is code that is chosen naively because it appears to be proper code, regardless of whether it is or not. That is the basis of "Arbitrary Code Execution".

1 hour ago, mr moose said:

If you use the etymology of the word and not the current definition, sure.

And so is your argument flawed, because instead of being able to explain why, you simply dismiss an entire field of human psychology (which goes well beyond the psychologist; corporate ethics is a big thing, hence this study in the first place). In fact, if ethics weren't a thing that we must consider, then this debate wouldn't even be taking place.

If I use the technical definition, or even the definition that *you* quoted at me, yes, it is also true.

 

And yes my reasoning is flawed. I am a human. I am flawed. Therefore anything created by my own agency is similarly flawed. I don't contest that. Do you have some point to make about my flawed reasoning?

 

You seem to present them as being important because they're "big" or, if I'm understanding you correctly, popular. Is that correct? But just because it is something popular does not make it correct. A lot of rationalizations we've had about our world have been "big", have been popular. The Geocentric model was pretty darn "big" for a while. That doesn't make it right. The fact that people see themselves as having value and see other "life" as having value doesn't mean that that "life" has any fundamental value. Corporate Ethics is "big" because both public relations and internal relations between the humans that make up a corporation are "big", not because those ethical beliefs are accurate to fundamental facts.


1 hour ago, mr moose said:

It would be great if it was as simple as programming it to avoid killing people, but it's not.

Maybe it should be.  The system takes the option with the statistically greatest combined chance of survival for both, and does that.  Whether that's 100% survival for the pedestrian / 10% for the occupant, vice versa, 1% for both, or 100% for one and 0% for the other.  If it takes the route with the best chance for both, what fault can be found?  What stones can be thrown then?
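That selection rule can be sketched in a few lines of code; the maneuver names and survival probabilities below are entirely hypothetical, purely for illustration:

```python
# Toy sketch of "take the option with the statistically greatest combined
# chance of survival for both parties". All maneuvers and numbers are invented.

def best_action(actions):
    """actions maps maneuver name -> (p_occupant_survives, p_pedestrian_survives)."""
    # Rank by the joint probability that both parties survive,
    # breaking ties by the sum of the individual chances.
    return max(actions, key=lambda a: (actions[a][0] * actions[a][1],
                                       actions[a][0] + actions[a][1]))

candidates = {
    "brake_straight": (0.90, 0.70),   # decent odds for both
    "swerve_left":    (0.99, 0.10),   # saves the occupant, dooms the pedestrian
    "swerve_right":   (0.50, 0.95),   # the reverse trade-off
}
print(best_action(candidates))  # "brake_straight" has the best joint odds
```

Note the rule never privileges occupant over pedestrian or vice versa; it only scores the combination.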

The problem here is people wanting to make it one or the other; it says more about the people asking the question than anything else.

 

Here's a better question: should the AI value its own survival on those same scales as the occupant and pedestrian?  Or is it not AI at all, and just the programmers' moral values?  Are the engineers at Volkswagen gonna save you, or give you lungs full of particulates for decades?


1 hour ago, bitsandpieces said:

The best of "what actually is" is to obey the law, period. Feelings have no say in this matter at all.
 

Don't confuse what people are saying about ethics with mere feelings.  As I have already pointed out, if you wish to obey the law, then the AI vehicle must not involve any person who is not intrinsically part of the situation already.  Doing so turns the incident from an accident into manslaughter.  That is the law, and that has to be worked out.  Remember this is not a simple case of programming the computer to avoid hurting people; there is a whole array of complex issues that have to be addressed in implementing such a device.

1 hour ago, bitsandpieces said:

If the goal actually is to have the fewest casualties possible, then self-driving vehicles should never be allowed on the same pathways as pedestrians or human-driven vehicles.

Yes, and?

1 hour ago, bitsandpieces said:

A computer can "think" (actually it can't); it only does what its programming tells it to do, and it reacts based on those algorithms. The self-driving car, in a situation like those, should be solely tasked with stopping the car as fast and as safely as possible.

Without involving innocent people.

 

3 minutes ago, Sniperfox47 said:

I'm not sure what you're trying to say there, as the definition you quoted says exactly what I said: that it's a matter of random choice or personal whim. Random choice is still choice; it is chosen by something, and as such is personal whim. But it doesn't necessarily need to be even seemingly random, as it can also be a matter of personal whim. The definition you quoted against my point supports it...

 

You can't throw everyone's opinion or ethical stance into the category of "personal whim"; ethics is a lot more complicated than that.

3 minutes ago, Sniperfox47 said:

 Rationalization is not the same thing as rational thought,

????

3 minutes ago, Sniperfox47 said:

it's a psychological defence mechanism whereby you cover up confusing or uncomfortable truths with a different *seemingly* rational answer that you convince yourself of.

No, that's cognitive dissonance.  There is a huge difference between placing a scale of values onto something based on your personal connection to it and ignoring facts because they upset you.

3 minutes ago, Sniperfox47 said:

Rationalization is not truth. It is not logic. It is merely a convenient lie we tell ourselves when the truth is inconvenient.

Again, that is cognitive dissonance.  What you are saying is absolutely incoherent: rational thought is by definition rational, it is a logical process.  Ignoring facts or lying to yourself is not rational thought; that is the exact opposite of being rational.

 

In fact your claims are so wrong that your foundation is flawed; I don't know if it's worth continuing the discussion until you understand that.

 

3 minutes ago, Sniperfox47 said:

But more importantly you can't throw around technical matters like ethics, psychology, sociology, etc and then excuse yourself as using the words incorrectly because that's the "colloquial use".

That's not an incorrect use of the word.  In fact, if you ask ten people to define arbitrary behaviour, most will tell you it is irrational.  That's how language works; get used to it.

3 minutes ago, Sniperfox47 said:

If you're talking about a technical topic, your colloquial use of words is irrelevant, the technical use is what matters and that *is* the technical meaning of arbitrary. That is the technical meaning of arbitrary across multiple fields, not just statistics or psychology. Heck even in CS, arbitrary code is code that is chosen naively because it appears to be proper code, regardless of whether it is or not. That is the basis of "Arbitrary Code Execution".

If I use the technical definition, or even the definition that *you* quoted at me, yes, it is also true.

 

That's just a fancy way to deflect from the issue.

3 minutes ago, Sniperfox47 said:

And yes my reasoning is flawed. I am a human. I am flawed. Therefore anything created by my own agency is similarly flawed. I don't contest that. Do you have some point to make about my flawed reasoning?

Yes: if you wish to propose that an inherent ability to be flawed means that "everything" is flawed, then you need to stop discussing these things and leave debates that shape the world to less futile thinkers.

3 minutes ago, Sniperfox47 said:

You seem to present them as being important because they're "big" or, if I'm understanding you correctly, popular. Is that correct?

No, they are important because they are an intrinsic part of humanity.  All laws and regulations stem from human perceptions of value and morality.  To try and discount that as an inferior, and thus ignorable, condition of humanity is deeply flawed in itself.

3 minutes ago, Sniperfox47 said:

But just because it is something popular does not make it correct.

I never said anything about popularity; that's just you trying to dismiss ethics.

3 minutes ago, Sniperfox47 said:

A lot of rationalizations we've had about our world have been "big", have been popular.

And?

3 minutes ago, Sniperfox47 said:

The Geocentric model was pretty darn "big" for a while. That doesn't make it right.

Lack of data is not the same as lack of rational thought.  You can only rationalize with the data you have; pointing at history and claiming that because we were wrong then, we are wrong now, is flawed.

3 minutes ago, Sniperfox47 said:

The fact that people see themselves as having value and see other "life" as having value doesn't mean that that "life" has any fundamental value.

That is merely a disagreement, because value is totally in the eye of the beholder, and you have no right to deny the value of something to someone else just because you don't see value in it.  Especially when we are talking about human life.

3 minutes ago, Sniperfox47 said:

Corporate Ethics is "big" because both public relations and internal relations between the humans that make up a corporation are "big", not because those ethical beliefs are accurate to fundamental facts.

I wonder why public relations is important? Is it because people don't like unethical businesses? Does that mean people value an ethical system? Does that ultimately mean ethics is an inseparable part of humanity?

 

Of course it does.  Again, if it wasn't the intrinsic part of the human psyche that it is, we wouldn't be having this discussion.

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


Just now, MoonSpot said:

 What stones can be thrown then?

 

Because if the pedestrian was not in any danger to begin with, and the AI chose a course of action that included them in the probability of being hurt, then the AI has involved an innocent person.  Without AI, humans can't think fast enough, so the decision is not intentional; AI can think fast enough, which makes any damage intentional (and thus unethical by some standards, and illegal).



The life of the few outweighs the life of the many?

If killing one person saves two or more people, then I guess it's worth it?

 


 


57 minutes ago, mr moose said:

No, that's cognitive dissonance.  There is a huge difference between placing a scale of values onto something based on your personal connection to it and ignoring facts because they upset you.

Again, that is cognitive dissonance.  What you are saying is absolutely incoherent: rational thought is by definition rational, it is a logical process.  Ignoring facts or lying to yourself is not rational thought; that is the exact opposite of being rational.

I'm going to reply just to this because it is indicative of the communication issue we're having.

 

If you're going to talk about how ethics is an important part of psychology, please read a psychology textbook. Even the basics used for psychology 101.

 

Rationalization, the term *you* used, has nothing to do with rational thought or being rational. Rationalization is specifically the psychological mechanism of using a dissonance as a defence; that's what rationalization means. It means justifying something irrational as being rational, hence the suffix at the end, which means "to make as though", as in "to make as though rational".

 

The logical fallacy of rationalization is *directly* related to the psychology term rationalization.

 

Technical language exists because it is a consistent common basis on which to build discussion and debate. You can't have your cake and eat it too, calling things out as not technically true based on the colloquial meaning of a word used out of context. If you want to argue sociology and psychology I'm happy to do so, but not if you're going to disregard the platform of understanding in those fields.


40 minutes ago, Sniperfox47 said:

I'm going to reply just to this because it is indicative of the communication issue we're having.

 

If you're going to talk about how ethics is an important part of psychology, please read a psychology textbook. Even the basics used for psychology 101.

 

Rationalization, the term *you* used, has nothing to do with rational thought or being rational. Rationalization is specifically the psychological mechanism of using a dissonance as a defence; that's what rationalization means. It means justifying something irrational as being rational, hence the suffix at the end, which means "to make as though", as in "to make as though rational".

 

The logical fallacy of rationalization is *directly* related to the psychology term rationalization.

 

Technical language exists because it is a consistent common basis on which to build discussion and debate. You can't have your cake and eat it too, calling things out as not technically true based on the colloquial meaning of a word used out of context. If you want to argue sociology and psychology I'm happy to do so, but not if you're going to disregard the platform of understanding in those fields.

 

I can rationalize something any way I want, but that doesn't mean I am always being irrational about it.  When you ignore facts as a matter of self-protection, that is nearly always cognitive dissonance.

 

You ignore the importance of ethics in human psychology because it interferes with your desire to divorce yourself from the emotional burden of said AI implications.    That is a classic example of cognitive dissonance. 



How about, instead of prioritizing who should be saved first, we look into how we could prevent a crash from happening in the first place, or, if a crash does occur, how to minimize the damage as much as possible and keep the possibility of fatal injuries low. If I am not mistaken, there is (or has been) research on developing an external airbag to protect not only those inside the vehicle, but those outside as well.



10 minutes ago, mr moose said:

 

I can rationalize something any way I want, but that doesn't mean I am always being irrational about it.  When you ignore facts as a matter of self-protection, that is nearly always cognitive dissonance.

 

You ignore the importance of ethics in human psychology because it interferes with your desire to divorce yourself from the emotional burden of said AI implications.    That is a classic example of cognitive dissonance. 

I never once made the assertion that ethics isn't important to psychology... It's a cornerstone of psychology and the whole concept of consciousness.

 

I made the claim that ethics, a field of psychology and sociology, has no place in the discussion of the correct choice for an automated car to make when there is always a mechanically optimal solution. That solution may not always line up with ethics, but that's kind of irrelevant, because any other possible choice, by virtue of being a worse choice and therefore also posing risk, would also not line up with ethics.

 

With a self driving car you are dealing with analogue situations, not digital ones. You don't have the option of killing person A or killing person B, you also have the option to take actions which have the lowest probability of killing either.

 

And before you get offended about my use of the word probability with regard to killing them: everything has a probability of killing someone. Eating a banana, there is a probability that they will choke on it and die, or that they will have an allergic reaction. You can never eliminate these probabilities, merely mitigate them, and a computer can take the option with the computationally lowest risk of injury or death for *all* parties.
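That last idea ("the computationally lowest risk of injury or death for *all* parties") could look something like this sketch. The per-party risk numbers are made up, and treating the risks as independent is a deliberate simplification:

```python
# Pick the maneuver minimizing the probability that *anyone* is seriously hurt.
# Illustrative numbers only; real risk models would be far more involved.

def safest(options):
    """options maps maneuver name -> list of per-party injury probabilities."""
    def p_anyone_hurt(risks):
        p_nobody = 1.0
        for p in risks:
            p_nobody *= (1.0 - p)   # treats risks as independent (a simplification)
        return 1.0 - p_nobody
    return min(options, key=lambda m: p_anyone_hurt(options[m]))

options = {
    "hard_brake": [0.05, 0.02],   # occupant risk, pedestrian risk
    "swerve":     [0.01, 0.30],
    "do_nothing": [0.40, 0.40],
}
print(safest(options))  # "hard_brake": ~6.9% chance anyone is hurt, the lowest
```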


7 minutes ago, Sniperfox47 said:

I never once made the assertion that ethics isn't important to psychology... It's a cornerstone of psychology and the whole concept of consciousness.

 

I made the claim that ethics, a field of psychology and sociology, has no place in the discussion of the correct choice for an automated car to make when there is always a mechanically optimal solution.

There's not always a mechanically optimal solution, though.  Ethics (which you claim not to be dismissing) says there is a huge barrier that must be overcome before AI can be accepted.

7 minutes ago, Sniperfox47 said:

That solution may not always line up with ethics,

Which is the problem: if it doesn't, then another solution has to be found. Ignoring ethics is not going to make the solution any more attractive/wanted.

7 minutes ago, Sniperfox47 said:

but that's kind of irrelevant, because any other possible choice, by virtue of being a worse choice and therefore also posing risk, would also not line up with ethics.

Except it does, because the imperfection of humans behind the wheel means that we cannot always definitively know in an accident who is going to be hurt and who isn't; it happens too fast for the human brain to make a conscious decision.

7 minutes ago, Sniperfox47 said:

 

With a self driving car you are dealing with analogue situations, not digital ones. You don't have the option of killing person A or killing person B, you also have the option to take actions which have the lowest probability of killing either.

They are some of the likely situations it will be in, for sure, but that doesn't mean it will never have to choose between life A and life B.

7 minutes ago, Sniperfox47 said:

 

And before you get offended about my use of the word probability with regard to killing them: everything has a probability of killing someone. Eating a banana, there is a probability that they will choke on it and die, or that they will have an allergic reaction. You can never eliminate these probabilities, merely mitigate them, and a computer can take the option with the computationally lowest risk of injury or death for *all* parties.

Who said I was offended by that?  I have used the term probability of human death/injury several times already in this discussion.



2 hours ago, mr moose said:

There's not always a mechanically optimal solution, though.  Ethics (which you claim not to be dismissing) says there is a huge barrier that must be overcome before AI can be accepted.

Except in an analogue environment there always is... The chance of two options being equally optimal in an analogue setting is infinitesimal in the most literal of senses. A 30% chance of killing someone is mechanically superior to a 30.00000000001% chance of killing someone, within the margin of error.
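In practice, the risk estimates themselves carry error bars, so a selector would treat options whose risks differ by less than some tolerance as an effective tie. A minimal sketch, with invented option names and numbers:

```python
import math

# Options whose estimated risks are within `tol` of the best are treated
# as tied, rather than split on digits far below the measurement error.

def effective_ties(risks, tol=1e-6):
    """risks maps option name -> estimated probability of a fatality."""
    best = min(risks, key=risks.get)
    return [name for name in risks
            if math.isclose(risks[name], risks[best], abs_tol=tol)]

print(effective_ties({"a": 0.30, "b": 0.3000000001}))  # ['a', 'b']: a tie in practice
print(effective_ties({"a": 0.30, "b": 0.31}))          # ['a']: a real difference
```

Only within that tolerance does the "which life?" question even arise; outside it, one option is simply safer.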

 

And why do we care about AI being accepted? Allowing human drivers behind the wheel in situations where they will kill a person but an AI would not is grossly negligent. People don't have to like or accept it. They have to deal with the fact that AI is strictly superior to them and live with it.

 

2 hours ago, mr moose said:

Which is the problem: if it doesn't, then another solution has to be found. Ignoring ethics is not going to make the solution any more attractive/wanted.

Again, that's literally impossible, since in the field of ethics both answers are wrong to some audiences. Even in the incredibly contrived situation where you can kill person A or person B, either option will result in a negative reaction from at least some portion of the population on an ethical basis. Which of the two you choose is really irrelevant; what matters is that you're making a choice.

 

People inherently dislike the agency of that choice being taken out of their control. That's too bad for them. They don't have to like it.

 

2 hours ago, mr moose said:

Except it does, because the imperfection of humans behind the wheel means that we cannot always definitively know in an accident who is going to be hurt and who isn't; it happens too fast for the human brain to make a conscious decision.

How does that have anything to do with the section you quoted? The block you quoted there was part of the block you quoted above, explaining how any solution is going to provoke a negative reaction.

 

But let's talk about that. Negligence and incompetence are not an excuse for manslaughter. They don't excuse a life being taken when an alternative technology exists that would not have resulted in that death, and which you could have trivially leveraged. Humans are worse drivers than self-driving vehicles.

 

The best human drivers are about on par with the current high-tier self-driving vehicles (excluding Uber, as mentioned above), and these vehicles are only set to get better. The majority of the populace is *much* worse at driving than the best human drivers. There's literally no ethical dilemma there: you have a choice between killing person A and not killing person A. That's a simple optimization problem, not an ethical one, as one option is clearly better.

 

2 hours ago, mr moose said:

Who said I was offended at that?   I have used the term probability of human death/injury  several times already in this discussion.

Nobody did. I said I'd head it off before you did. You seem to have a thing for misrepresenting my wording, and I figured I should get in front of it in this instance.


So you are in an AI car; another oncoming car moves into your lane and will cause a head-on collision. On the sidewalk are pedestrians, there is more oncoming traffic, there are no sidings, no off-ramps, nowhere to go. Does the AI stay the course and crash head-on, or swerve and hit the pedestrians? Both situations will cause injury, with potential death. Which is correct?

 

But this won't happen if all cars are AI-driven? Well, what if it's a mechanical fault, like a tyre blow-out? At some point something will go wrong, and everything being self-driving cars won't eliminate that.

 

This is an ethics conundrum; reducing the chances of the posed issue doesn't actually solve it. Edit: not that there is necessarily a "solution".


All we need is more bridges and tunnels.



1 hour ago, Sniperfox47 said:

Except in an analogue environment there always is... The chance of two options being equally optimal in an analogue setting is infinitesimal in the most literal of senses. A 30% chance of killing someone is mechanically superior to a 30.00000000001% chance of killing someone, within the margin of error.

No, there's not.  It doesn't matter if there is only 1 pedestrian and 30 people in the car; if the AI chooses that 1 pedestrian, it has violated an ethical condition.  Full stop, period, end of story.  No amount of 30/70, 50/50 or 1/99 will change that.

1 hour ago, Sniperfox47 said:

And why do we care about AI being accepted? Allowing human drivers behind the wheel in situations where they will kill a person but an AI would not is grossly negligent. People don't have to like or accept it. They have to deal with the fact that AI is strictly superior to them and live with it.

Explained that already.

1 hour ago, Sniperfox47 said:

Again that's literally impossible,

Yep, it certainly seems that way, and that is why a lot of people are saying there is no answer or solution.

1 hour ago, Sniperfox47 said:

since in the field of ethics both answers are wrong to some audiences. Even in the incredibly contrived situation where you can kill person A or person B, either option will result in a negative reaction from at least some portion of the population on an ethical basis. Which of the two you choose is really irrelevant; what matters is that you're making a choice.

And yet that scenario will arise at some point; you can't just pretend it's so rare we don't have to worry about it.  In fact, with the sheer number of cars on the road and the millions of things that can go wrong, the chances of it happening are much higher than you think.

1 hour ago, Sniperfox47 said:

People inherently dislike the agency of that choice being taken out of their control. That's too bad for them. They don't have to like it.

They don't have to like it, but you might have to accept that they may retain control.  That is the thing about living in a community: we all have to make compromises, and yours might just have to be the fact that people won't allow a car on the road that can kill innocent people.

1 hour ago, Sniperfox47 said:

How does that have anything to do with the section you quoted? The block you quoted there was part of the block you quoted above, explaining how any solution is going to provoke a negative reaction.

Because you are arguing that:

4 hours ago, Sniperfox47 said:

That solution may not always line up with ethics, but that's kind of irrelevant, because any other possible choice, by virtue of being a worse choice and therefore also posing risk, would also not line up with ethics.

Humans behind the wheel are worse, and therefore won't line up with ethics.  I am telling you they do, because the person behind the wheel more often than not cannot make a conscious decision to kill one person over another; therefore the ethical question of who should live or die doesn't come into it at all.  It's just a reaction, and then it's all over.  The fact AI can reduce injury and death is moot; it introduces an ethical dilemma that you can't ignore just because you think it's purely a numbers game.

1 hour ago, Sniperfox47 said:

But let's talk about that. Negligence and incompetence are not an excuse for manslaughter.

Correct: manslaughter comes from intentional neglect, not an actual inability.

1 hour ago, Sniperfox47 said:

They don't excuse a life being taken when an alternative technology exists that would not have resulted in that death, and which you could have trivially leveraged. Humans are worse drivers than self-driving vehicles.

Now you are moving the outcomes to suit the argument.  Again, it might in many cases result in fewer deaths, but when it has to decide between 1 pedestrian and 2 occupants, it's not a superior technology that is at the heart of the problem; it is a technology that has been programmed to decide between the lives of two people.

1 hour ago, Sniperfox47 said:

 

The best human drivers are about on par with the current high-tier self-driving vehicles (excluding Uber, as mentioned above), and these vehicles are only set to get better. The majority of the populace is *much* worse at driving than the best human drivers. There's literally no ethical dilemma there: you have a choice between killing person A and not killing person A. That's a simple optimization problem, not an ethical one, as one option is clearly better.

You're trying to play the numbers game again. All the numbers in the world will not remove the value of a human life, or the ethics of taking one life in order to save others.

1 hour ago, Sniperfox47 said:

Nobody did. I said I'd head it off before you did. You seem to have a thing for misrepresenting my wording and I figured I should get in front of it on this instance.

What a wild assumption and silly mistake then.

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


54 minutes ago, williamcll said:

All we need is more bridges and tunnels.

Personally I am in favor of more public transport.



1 hour ago, mr moose said:

Personally I am in favor of more public transport.

Well that just makes the equation harder, if it's an AI bus etc.


I believe the auto pilot should save the vehicle and its occupants firstly as the occupants paid big money to drive that vehicle. If the occupants want to opt out they can change the settings as they wish.


32 minutes ago, leadeater said:

Well that just makes the equation harder, if it's an AI bus etc.

In the event of a brake malfunction, the bus should swerve into at most as many innocent bystanders as there are passengers on the bus, in order to maintain the safety probability ratio.
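That rule could be sketched in a few lines (a hypothetical illustration with made-up function and parameter names, purely to show the ratio idea, not anyone's actual control logic):

```python
def choose_action(passengers_on_bus, bystanders_in_path):
    """Illustrative sketch only: swerve only if doing so endangers
    no more bystanders than there are passengers on the bus."""
    if bystanders_in_path <= passengers_on_bus:
        return "swerve"
    return "brake in lane"
```

So a full bus facing two bystanders would swerve, while a near-empty one facing a crowd would stay in its lane and brake.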



4 minutes ago, Canada EH said:

I believe the auto pilot should save the vehicle and its occupants firstly as the occupants paid big money to drive that vehicle. If the occupants want to opt out they can change the settings as they wish.

So relative security from malfunctioning vehicles should be for those who can afford it and not for innocent bystanders?



9 minutes ago, Canada EH said:

I believe the auto pilot should save the vehicle and its occupants firstly as the occupants paid big money to drive that vehicle. If the occupants want to opt out they can change the settings as they wish.

They've also put themselves in the situation where a computer is in control of their fate while people innocently walking down the road are not involved in any way.

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600Mhz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


10 hours ago, bitsandpieces said:

The best of what actually is is to obey the law, period. Feelings have no say in this matter, at all

If actually the goal is to have fewer casualties possible, then self driving vehicles should not ever be allowed on the same pathways as pedestrians or other human driven ones

 

A computer can "think" (actually it can't), it only does what its programming tells it to do, and it reacts based on those algorithms. The Self driving car, in a situation like those, should be solely tasked to stop the car as fast and as safe as possible 

I actually think this is a viable solution tbh. Only allow self-driving mode in areas where pedestrian contact is impossible (highways, motorways etc.) or incredibly unlikely (country roads, non-urbanised areas etc.). Anywhere even semi-urbanised, the AI should shut down and require the human to take control.

 

The fastest and busiest roads should require AI to be enabled at all times on every vehicle, including transport vehicles (such as lorries and vans) and public transport vehicles (taxis, minibuses, coaches etc.), as these roads tend to see larger accidents involving multiple vehicles more often.

 

The only issue right now is that every manufacturer has its own AI system and none of them are able to communicate with any of the others. For AI to become safer than human control, the AIs need to be able to talk to the other vehicles around them. Once this happens, crashes on the biggest and fastest roads will all but disappear.



56 minutes ago, VegetableStu said:

automated road trams? o_o

I'd like to see a train/tram swerve lol


14 minutes ago, leadeater said:

I'd like to see a train/tram swerve lol

 



On 10/27/2018 at 10:04 AM, leadeater said:

This is easily solved by having a "Save Me" button in the car, car detects potential collision and you like everyone will hit that button so fast and hard.

If there's time to press that button, there's time to avoid a fatal accident. What's being asked here is what the car should do when faced with an unavoidable crash, if the "driver" isn't able to choose on the spot.

 

Either way, I think we're very far from this actually being a problem. Aside from the problems self-driving cars have right now that prevent them from being completely independent, recognising things like "if I swerve right now I'll hit exactly a kid and a crippled cow, mildly injuring the first and killing the other, while if I don't swerve my human will suffer whiplash" requires a level of AI that we aren't likely to reach any time soon. I think following the rules of the road, and staying on the road if swerving could hit a human, are good rules of thumb, and probably better than a human driver could do.
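Those rules of thumb are simple enough to sketch (a hypothetical illustration only, with invented names, not any manufacturer's actual behaviour):

```python
def emergency_manoeuvre(humans_in_swerve_path):
    """Rule-of-thumb sketch: stay on the road and brake hard by
    default; only swerve when no human is in the alternative path."""
    if humans_in_swerve_path == 0:
        return "swerve"
    return "brake in lane"
```

The point is that no casualty-weighing is needed: the default is the lawful, predictable action, and swerving is only ever an option when it risks nobody.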

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*

