MIT posts results of a 4-year global study into the morality of driverless vehicles (who should they save in a crash?)

Master Disaster
Just now, Sniperfox47 said:

Which is what it means for them to *be* arbitrary... Defined by personal whim as opposed to a concrete and fundamental system...

 

Everyone's ethics are different. Everyone's morals are different. And that's precisely why we shouldn't care about anyone's ethics or morals, because they're typically not based on any logical reasoning but rather on the societal conditioning we grow up with.

 

Everyone's ethics are inherently flawed. We shouldn't base our future on something we know to be flawed.

That's very idealistic. Try explaining to the parents of someone who was killed by a self-driving car through no fault of their own that they're only upset because their morals are down to social conditioning.


42 minutes ago, Vanderburg said:

It's a fair point, but I still think that the question of morality should be removed from the equation altogether. It's tragic, sure, but the car's job is to transport its passengers safely, first and foremost. Once an AI starts to calculate who "deserves" to live, it will start to consider things we never thought of and start making decisions we don't expect or want.

In that case I deny you the right to have this car because it will endanger the lives of innocent bystanders.

 

1 minute ago, Sniperfox47 said:

Which is what it means for them to *be* arbitrary... Defined by personal whim as opposed to a concrete and fundamental system...

 

Everyone's ethics are different. Everyone's morals are different. And that's precisely why we shouldn't care about anyone's ethics or morals, because they're typically not based on any logical reasoning but rather on the societal conditioning we grow up with.

 

Everyone's ethics are inherently flawed. We shouldn't base our future on something we know to be flawed.

Arbitrary means seemingly random; most people's ethical standing is not random, it is the result of environmental conditioning.

 

You can't just ignore it and pretend it is not a thing. Different does not equal flawed. You seem to think that just because someone values one particular life over another, that is somehow a flaw or a thing that can just be ignored. It can't be, and the fact that you are even arguing it can is an ethical position in itself. So by your reasoning, can I ignore that?


3 minutes ago, mr moose said:

In that case I deny you the right to have this car because it will endanger the lives of innocent bystanders.

Not any more than with a human driving. Much less, really, and the car could calculate a maneuver that reduces the chance of a bystander dying while still preserving the life of the passenger, much better than a human could.

 

If the total number of deaths goes way down, including bystanders, simply due to the speed at which the car can calculate and react, then it's a net positive.


1 minute ago, Vanderburg said:

Not any more than with a human driving. Much less, really, and the car could calculate a maneuver that reduces the chance of a bystander dying while still preserving the life of the passenger, much better than a human could.

 

If the total number of deaths goes way down, including bystanders, simply due to the speed at which the car can calculate and react, then it's a net positive.

But suddenly you've got AI making choices on who lives and who dies.


1 minute ago, Master Disaster said:

But suddenly you've got AI making choices on who lives and who dies.

No, my point is that the AI shouldn't make that decision. It should always try to save the passenger first, then try its best to reduce the chance of a bystander getting hurt.


"MIT study for 4 years the morality on if you should put a baby in an oven or a freezer..."

Giving 2 wrong answers is not a morality test. It's a seg fault test.

 

Putting AI in charge of "who to crash into" is a mistake because we should be working on preventing that, not enabling it.

 

(as above, with "the passenger or the pedestrian", both are the wrong answer)


32 minutes ago, Vanderburg said:

No, my point is that the AI shouldn't make that decision. It should always try to save the passenger first, then try its best to reduce the chance of a bystander getting hurt.

Contradiction much? By trying to save one group over another, the AI is, by definition, making a choice about who to save.


7 minutes ago, Master Disaster said:

Contradiction much? By trying to save one group over another, the AI is, by definition, making a choice about who to save.

If the decision ruleset is hardcoded then it's not the AI's choice (and IMHO it should be a very tightly regulated ruleset, so that unexpected outcomes are kept to a minimum). BTW, there are cases where not even the AI is fast enough and an evasion maneuver could do more harm than good.
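
Something like this toy sketch is what I have in mind (every rule name and condition here is invented for illustration, not any real vendor's code):

```python
# Toy sketch only: a hypothetical hardcoded ruleset walked in a fixed,
# regulator-approved priority order. Nothing here weighs one life against
# another; it just takes the first rule whose condition holds.
def pick_action(state):
    if state.get("lane_ahead_clear_enough_to_stop"):
        return "full_brake_in_lane"        # rule 1: always try braking in lane
    if state.get("adjacent_gap_clear_of_people"):
        return "steer_to_gap_and_brake"    # rule 2: swerve only into empty space
    return "full_brake_hold_lane"          # rule 3: fallback, no evasive maneuver

print(pick_action({"adjacent_gap_clear_of_people": True}))  # steer_to_gap_and_brake
```

The fewer and dumber the rules, the easier it is for a regulator to audit them and for the outcomes to stay predictable.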


1 hour ago, Vanderburg said:

Not any more than with a human driving. Much less, really, and the car could calculate a maneuver that reduces the chance of a bystander dying while still preserving the life of the passenger, much better than a human could.

 

If the total number of deaths goes way down, including bystanders, simply due to the speed at which the car can calculate and react, then it's a net positive.

Except the bystanders are not a part of an accident until the AI makes them a part; that is the difference. The occupants of the car put themselves at the mercy of the AI by choice, the bystander on the street doesn't. We can't just argue a numbers game, because each person does not share the same value, and that value is dictated by individual subjective appraisal, not a mathematical formula.

 

1 hour ago, Vanderburg said:

No, my point is that the AI shouldn't make that decision. It should always try to save the passenger first, then try its best to reduce the chance of a bystander getting hurt.

That is a contradiction: once the AI tries to protect the occupants first, it is making a decision. If you want the AI to protect anyone first, it should be the people who are not a part of the accident to begin with.


3 hours ago, jagdtigger said:

If the decision ruleset is hardcoded then it's not the AI's choice (and IMHO it should be a very tightly regulated ruleset, so that unexpected outcomes are kept to a minimum). BTW, there are cases where not even the AI is fast enough and an evasion maneuver could do more harm than good.

Whether it's hardcoded or not, the fact remains that if an AI prioritises one group over another then it's choosing who lives and who dies. It was given a situation and it decided to take action to save one group over another.

 

You could argue that humans don't really make choices either: in the large majority of stress-inducing situations, any given human will make the exact same choice almost every time, because our choices are hardcoded into us by our experiences, our upbringing and our social conditioning. Although we do have the ability to consciously decide, most of the time we don't; we go with what our brains tell us is the right thing at that moment, and thought is removed from the process almost entirely. This is why emergency responders are so heavily trained: so their brains will make the right choice for them in that split second without them having to think about it.

 

I understand a machine cannot think and doesn't have the ability to act outside of its programming; however, it is still choosing to save the passengers over the pedestrians, because that's what it is programmed to do.

 

One of the big unanswered questions of AI is based entirely on this conundrum: if a machine, acting within the parameters of its programming, kills a human, is the human who programmed the machine to make that choice responsible for the outcome?


Just wait till ONE AI-driven car is hacked and hits pedestrians. That is a Pandora's box, and I'm glad I'm not a lawyer who will have to sort it out in court when the next incident occurs and the question is brought up.

 

 


5 hours ago, mr moose said:

Except the bystanders are not a part of an accident until the AI makes them a part; that is the difference. The occupants of the car put themselves at the mercy of the AI by choice, the bystander on the street doesn't. We can't just argue a numbers game, because each person does not share the same value, and that value is dictated by individual subjective appraisal, not a mathematical formula.

 

That is a contradiction: once the AI tries to protect the occupants first, it is making a decision. If you want the AI to protect anyone first, it should be the people who are not a part of the accident to begin with.

No, the AI isn't deciding whether or not to save the passenger. It's already been told that is what it must do. It isn't thinking "should I save the passenger", it's thinking "how can I save the passenger". It isn't thinking "should I kill this bystander"; that isn't a consideration in its calculation for saving the passenger. Once it has figured out how to save the passenger, it can then evaluate whether anyone else is put at risk and calculate ways to reduce the chance of anyone else being killed without increasing the chance of the passenger being killed. While yes, the bystander isn't part of the accident until the car makes them part of the accident, the total number of bystanders involved in accidents would be much lower than with human drivers, because the AI-driven car is inherently safer overall.

 

You can disagree with my opinion, that's fine, but it's just as valid as anyone else's.


2 hours ago, Master Disaster said:

One of the big unanswered questions of AI is based entirely on this conundrum: if a machine, acting within the parameters of its programming, kills a human, is the human who programmed the machine to make that choice responsible for the outcome?

Depends on the situation and how it was programmed. If it's a last-resort option then I wouldn't blame the programmer...


On 10/27/2018 at 8:16 AM, TetraSky said:

They are unrealistic scenarios to me, because any proper "smart" self driving car should be able to detect the brakes have failed long before it gets to that point of no return

Exactly this. None of these scenarios are realistic.


2 hours ago, Master Disaster said:

Whether it's hardcoded or not, the fact remains that if an AI prioritises one group over another then it's choosing who lives and who dies. It was given a situation and it decided to take action to save one group over another.

 

You could argue that humans don't really make choices either: in the large majority of stress-inducing situations, any given human will make the exact same choice almost every time, because our choices are hardcoded into us by our experiences, our upbringing and our social conditioning. Although we do have the ability to consciously decide, most of the time we don't; we go with what our brains tell us is the right thing at that moment, and thought is removed from the process almost entirely. This is why emergency responders are so heavily trained: so their brains will make the right choice for them in that split second without them having to think about it.

 

I understand a machine cannot think and doesn't have the ability to act outside of its programming; however, it is still choosing to save the passengers over the pedestrians, because that's what it is programmed to do.

 

One of the big unanswered questions of AI is based entirely on this conundrum: if a machine, acting within the parameters of its programming, kills a human, is the human who programmed the machine to make that choice responsible for the outcome?

Not quite. Not that I disagree with you, though; we're just applying the wrong systems in the wrong areas. If a hammer hits you on the head, it certainly did not "choose" to hit you on the head.

 

No amount of change will mean an AI "chooses". It will look at its programming, it will look at the situation, and it will *select* based on what a programmer/lawmaker put into it.

 

(This is, as you say, setting aside who chooses, how, and what choice even is. But thankfully, AI don't "choose", are not human and don't have "free will", so we can leave those for a different discussion. AI specifically select based on math/a lookup table.)
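
As a totally made-up illustration of selecting rather than choosing, the same situation always maps to the same pre-programmed response:

```python
# Made-up lookup table: the machine "selects" whatever was put into it;
# identical input always yields identical output, so there is no "choice".
LOOKUP = {
    "obstacle_ahead_gap_left": "steer_left_and_brake",
    "obstacle_ahead_no_gap":   "full_brake_hold_lane",
    "clear_road":              "continue",
}

def select(situation):
    # Unknown situations fall back to a hardcoded safe default.
    return LOOKUP.get(situation, "full_brake_hold_lane")

assert select("obstacle_ahead_no_gap") == select("obstacle_ahead_no_gap")  # deterministic
print(select("obstacle_ahead_no_gap"))  # full_brake_hold_lane
```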


4 hours ago, Vanderburg said:

No, the AI isn't deciding whether or not to save the passenger. It's already been told that is what it must do. It isn't thinking "should I save the passenger", it's thinking "how can I save the passenger". It isn't thinking "should I kill this bystander"; that isn't a consideration in its calculation for saving the passenger. Once it has figured out how to save the passenger, it can then evaluate whether anyone else is put at risk and calculate ways to reduce the chance of anyone else being killed without increasing the chance of the passenger being killed. While yes, the bystander isn't part of the accident until the car makes them part of the accident, the total number of bystanders involved in accidents would be much lower than with human drivers, because the AI-driven car is inherently safer overall.

 

You can disagree with my opinion, that's fine, but it's just as valid as anyone else's.

That's still a contradiction. If the computer needs to calculate how to reduce the chance of anyone else being killed without killing the passengers, then it is still choosing to kill innocent people, regardless of whether it was programmed to (which becomes manslaughter under current law in most countries) or the AI does it automatically.

 

Who to kill between two people inseparably linked to the event is a matter of opinion. But advocating for a device that will kill people against their will goes beyond opinion; that is a position I can argue against.


1 minute ago, mr moose said:

That's still a contradiction. If the computer needs to calculate how to reduce the chance of anyone else being killed without killing the passengers, then it is still choosing to kill innocent people, regardless of whether it was programmed to (which becomes manslaughter under current law in most countries) or the AI does it automatically.

 

Who to kill between two people inseparably linked to the event is a matter of opinion. But advocating for a device that will kill people against their will goes beyond opinion; that is a position I can argue against.

I can see this is confusing for you, so this is the last time I'll try to explain it.

 

The computer at no point asks "should I kill a bystander". It's not a variable it considers; because I've declared it a variable the car doesn't consider in my hypothetical solution, it cannot make that choice. It only asks "what do I need to do to save the passenger": what trajectory do I need to manage to keep from hitting that tree, or flying into that barrier, or hitting the back of the car ahead so hard that it's fatal for the passenger. After it's calculated that, it can then go "Oh, this trajectory is also going to be fatal to a bystander. Is it possible to alter this trajectory to not be fatal for the bystander while also not being fatal for the passenger? If so, do it; otherwise, don't."
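
If it helps, here is that two-step ordering as a rough sketch with invented numbers (a hypothetical, not anyone's actual implementation):

```python
# Invented numbers, purely to show the ordering. Step 1 keeps only the
# trajectories survivable for the passenger; step 2 picks, among those
# alone, the one least dangerous to bystanders. Bystander risk can shape
# the final pick but can never veto a passenger-saving option.
candidates = [
    # (trajectory, passenger_fatality_risk, bystander_fatality_risk)
    ("brake_straight", 0.30, 0.00),
    ("swerve_left",    0.05, 0.40),
    ("swerve_right",   0.10, 0.05),
]

SURVIVABLE = 0.20  # hypothetical threshold for "saves the passenger"

viable = [c for c in candidates if c[1] <= SURVIVABLE]  # step 1
best = min(viable, key=lambda c: c[2])                  # step 2
print(best[0])  # swerve_right
```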

 

The car is not making a moral judgement about which person to kill or let live. It's already been told which person is the priority to save. And again, because the car is orders of magnitude safer than a human driver, the total number of fatalities will be greatly reduced, for a significant net positive result. The number of bystanders killed may not be reduced by as much as the number of passengers (It could even be 80% passenger deaths reduced and 0% bystander deaths reduced and it would still be a significant net positive), but my way, the car doesn't have to decide who "deserves" to live, because if it did, the car would start considering variables we can't predict or don't want, and you can't program it to value things exactly like we do.
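
To put made-up numbers on that arithmetic:

```python
# Purely illustrative totals: 80% fewer passenger deaths with 0% fewer
# bystander deaths still cuts overall deaths by more than half.
human_passenger_deaths = 100
human_bystander_deaths = 50

ai_passenger_deaths = human_passenger_deaths * (1 - 0.80)  # 80% reduction
ai_bystander_deaths = human_bystander_deaths * (1 - 0.00)  # 0% reduction

print(human_passenger_deaths + human_bystander_deaths)   # 150 deaths before
print(round(ai_passenger_deaths + ai_bystander_deaths))  # 70 deaths after
```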

 

And that's about all I have to say about that.


11 minutes ago, Vanderburg said:

I can see this is confusing for you, so this is the last time I'll try to explain it.

 

The computer at no point asks "should I kill a bystander". It's not a variable it considers; because I've declared it a variable the car doesn't consider in my hypothetical solution, it cannot make that choice. It only asks "what do I need to do to save the passenger": what trajectory do I need to manage to keep from hitting that tree, or flying into that barrier, or hitting the back of the car ahead so hard that it's fatal for the passenger. After it's calculated that, it can then go "Oh, this trajectory is also going to be fatal to a bystander. Is it possible to alter this trajectory to not be fatal for the bystander while also not being fatal for the passenger? If so, do it; otherwise, don't."

I understand what you are saying. What I am telling you is that unless you program the car to put innocent bystanders first, it's manslaughter. There is no way to morally or ethically involve a person in a crash who is not already inherently a part of that accident.

 

11 minutes ago, Vanderburg said:

The car is not making a moral judgement about which person to kill or let live. It's already been told which person is the priority to save.

I got that from the first post you made. The problem is not whether the car decides or the programming decides; the problem is the decision itself.

11 minutes ago, Vanderburg said:

And again, because the car is orders of magnitude safer than a human driver,

Irrelevant. I am a big advocate for AI cars; however, we can't ignore a few basic principles of law and social obligation.

11 minutes ago, Vanderburg said:

the total number of fatalities will be greatly reduced, for a significant net positive result.

As I said earlier, you cannot replace moral obligation with numbers. A person's value cannot be determined by their age, sex or education, especially when we as a race of creatures are not struggling for survival; we're not all going to die because one professor gets hit by a car.

 

11 minutes ago, Vanderburg said:

The number of bystanders killed may not be reduced by as much as the number of passengers (It could even be 80% passenger deaths reduced and 0% bystander deaths reduced and it would still be a significant net positive),

Again irrelevant; you are talking about a program/condition that would dictate the value of one life over another when the other life would otherwise not be in any danger.

11 minutes ago, Vanderburg said:

but my way, the car doesn't have to decide who "deserves" to live, because if it did, the car would start considering variables we can't predict or don't want, and you can't program it to value things exactly like we do.

Again, you are advocating that you program the computer to bias the safety of those who choose the car and the route over the safety of innocent people who have no choice in any of it.

 

11 minutes ago, Vanderburg said:

 

And that's about all I have to say about that.

 

I really hope for your sake no one chooses to involve you in an accident because you were considered less of a human than the occupant of the car.


I took the test and it's utter bull

How is the AI of the car going to identify the person's sex or age?


On 10/28/2018 at 2:08 AM, mr moose said:

In that case I deny you the right to have this car because it will endanger the lives of innocent bystanders.

What are you even talking about? Having that car is not a right... Being alive is a right. You don't have a right to have a car. But even if you did, your logic is totally baseless, because self-driving cars are provably less of a risk to civilians than cars driven by people, unless they're made by Uber. By that logic you should take away everyone's driver's licence and shut down Uber's self-driving program before you even consider shutting down the other major self-driving players.

Quote

Arbitrary means seemingly random; most people's ethical standing is not random, it is the result of environmental conditioning.

What are you even talking about? No it doesn't...

 

Arbitrary comes from the Latin word arbitrarius, from the Latin arbiter, meaning "one who judges or dictates". Arbitrary describes something dependent on one's personal judgement or reasoning, as opposed to fundamental principles of nature or logical reasoning...

 

If morals and ethics are decided by personal whim and the influences of society, they are the very definition of arbitrary... That's what the word means...

 

Quote

You can't just ignore it and pretend it is not a thing. Different does not equal flawed. You seem to think that just because someone values one particular life over another, that is somehow a flaw or a thing that can just be ignored. It can't be, and the fact that you are even arguing it can is an ethical position in itself. So by your reasoning, can I ignore that?

Them being different is not the reason they're flawed; them being totally independent of any principle or logical reasoning is why they're flawed. Humans are flawed creatures; anything we determine by virtue of our own agency is likewise flawed, again by definition...


4 hours ago, Sniperfox47 said:

What are you even talking about? Having that car is not a right... Being alive is a right. You don't have a right to have a car.

You just answered your own question: if being alive is a right that trumps owning a car, then I am absolutely allowed to deny you ownership of a car that would take the life of an innocent person.

4 hours ago, Sniperfox47 said:

But even if you did, your logic is totally baseless, because self-driving cars are provably less of a risk to civilians than cars driven by people, unless they're made by Uber.

Can you please read what I was responding to before making arguments I have already addressed.

The person I was responding to was trying to argue that the life of the occupants is more important than the life of innocent bystanders, and that the car should be programmed to prioritize the safety of the occupants.

4 hours ago, Sniperfox47 said:

By that logic you should take away everyone's driver's licence and shut down Uber's self-driving program before you even consider shutting down the other major self-driving players.

See above; that argument means nothing if you read my post and the reason I was making it.

4 hours ago, Sniperfox47 said:

What are you even talking about? No it doesn't...

 

 
Quote

 

adjective: arbitrary
1. based on random choice or personal whim, rather than any reason or system.

The dictionary begs to differ. All ethics are based on a system of rationalization developed over time.
4 hours ago, Sniperfox47 said:

Arbitrary comes from the Latin word arbitrarius, from the Latin arbiter, meaning "one who judges or dictates". Arbitrary describes something dependent on one's personal judgement or reasoning, as opposed to fundamental principles of nature or logical reasoning...

That may be the root of the word, but it isn't how the word is used in modern vernacular; if you want to use outdated definitions you are going to have lots of trouble communicating. It is also the root of the word arbitration, meaning to settle a dispute by judicial method/hearing.

 

So what I said about ethics still stands as true.

 

4 hours ago, Sniperfox47 said:

If morals and ethics are decided by personal whim and the influences of society, they are the very definition of arbitrary... That's what the word means...

If you use the etymology of the word and not the current definition, sure.

4 hours ago, Sniperfox47 said:

Them being different is not the reason they're flawed; them being totally independent of any principle or logical reasoning is why they're flawed. Humans are flawed creatures; anything we determine by virtue of our own agency is likewise flawed, again by definition...

And so is your argument: flawed, because instead of explaining why, you simply dismiss an entire field of human psychology (one that goes well beyond the psychologist; corporate ethics is a big thing, hence this study in the first place). In fact, if ethics weren't something we must consider, this debate wouldn't even be taking place.


7 hours ago, bitsandpieces said:

I took the test and it's utter bull

How is the AI of the car going to identify the person's sex or age?

Jesus, another one. It isn't; that's not the point of the test. The test is designed to see how HUMANS feel about those situations. It's an information-gathering exercise.


22 minutes ago, Master Disaster said:

Jesus, another one. It isn't; that's not the point of the test. The test is designed to see how HUMANS feel about those situations. It's an information-gathering exercise.

What's the point of the test, or of how I'd feel, when it's most likely I'd be dead as a pedestrian or a passenger?

How does any of this have relevance to obeying the law? That's what the self-driving car manufacturers need to do: obey the law and do their absolute best to protect both the passenger and the pedestrians.

What is this, the court of social justice? Was this the whole point?


3 minutes ago, bitsandpieces said:

What's the point of the test, or of how I'd feel, when it's most likely I'd be dead as a pedestrian or a passenger?

How does any of this have relevance to obeying the law? That's what the self-driving car manufacturers need to do: obey the law and do their absolute best to protect both the passenger and the pedestrians.

The point of this is to gather as much information as possible to gauge what the best approach actually is. AI transport goes well beyond the realms of any product we have seen before. It would be great if it were as simple as programming it to avoid killing people, but it's not. Because a computer can think and react so much faster than humans, it crosses over from an accident where the results are beyond the control of humans (i.e. not intentional) to a calculated response that results in the potential death of humans. This is something that can't just be done; it is new territory, and unless they check all the boxes the end results could be disastrous.

