
Philosophers are building ethical algorithms to help control self-driving cars

On the topic of "jobs and fields that won't be replaced by automation", a philosophy degree is apparently still going to be relevant for a while.

https://qz.com/1204395/self-driving-cars-trolley-problem-philosophers-are-building-ethical-algorithms-to-solve-the-problem/
 

«Artificial intelligence experts and roboticists aren’t the only ones working on the problem of autonomous vehicles. Philosophers are also paying close attention to the development of what, from their perspective, looks like a myriad of ethical quandaries on wheels.

The field has been particularly focused over the past few years on one particular philosophical problem posed by self-driving cars: They are a real-life enactment of a moral conundrum known as the Trolley Problem. In this classic scenario, a trolley is going down the tracks towards five people. You can pull a lever to redirect the trolley, but there is one person stuck on the only alternative track. The scenario exposes the moral tension between actively doing versus allowing harm: Is it morally acceptable to kill one to save five, or should you allow five to die rather than actively hurt one?»
 

«Rather than pontificating on this, a group of philosophers have taken a more practical approach, and are building algorithms to solve the problem. Nicholas Evans, philosophy professor at Mass Lowell, is working alongside two other philosophers and an engineer to write algorithms based on various ethical theories. Their work, supported by a $556,000 grant from the National Science Foundation, will allow them to create various Trolley Problem scenarios, and show how an autonomous car would respond according to the ethical theory it follows.»

«he hopes the results from his algorithms will allow others to make an informed decision, whether that’s by car consumers or manufacturers. Evans isn’t currently collaborating with any of the companies working to create autonomous cars, but hopes to do so once he has results.

Perhaps Evans’s algorithms will show that one moral theory will lead to more lives saved than another, or perhaps the results will be more complicated. “It’s not just about how many people die but which people die or whose lives are saved,” says Evans. It’s possible that two scenarios will save equal numbers of lives, but not of the same people.»

«One of the hallmarks of a good experiment in medicine, but also in science more generally, is that participants are able to make informed decisions about whether or not they want to be part of that experiment,” he said. “Hopefully, some of our research provides that information that allows people to make informed decisions when they deal with their politicians.»
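
To make "algorithms based on ethical theories" concrete, here's a purely hypothetical sketch of what encoding two classic theories as swappable decision rules could look like. Nothing below is from Evans's actual project; the scenario format, policy names, and numbers are all invented for illustration.

# Hypothetical: ethical theories as interchangeable decision rules.
def utilitarian(options):
    # Minimize total deaths, no matter who dies or how.
    return min(options, key=lambda o: o["deaths"])

def deontological(options):
    # Never actively redirect harm: prefer options requiring no intervention.
    passive = [o for o in options if not o["requires_intervention"]]
    return passive[0] if passive else min(options, key=lambda o: o["deaths"])

trolley = [
    {"name": "stay on course", "deaths": 5, "requires_intervention": False},
    {"name": "swerve onto side track", "deaths": 1, "requires_intervention": True},
]

for policy in (utilitarian, deontological):
    print(policy.__name__, "->", policy(trolley)["name"])
# utilitarian -> swerve onto side track
# deontological -> stay on course

Same scenario, two defensible answers; presumably that's why they want to simulate many scenarios per theory rather than argue in the abstract.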


Does this mean machine learning algorithms can now solve the age-old question among philosophers and neuroscientists: “Do human beings have free will, or is it determined by past experience?”

 

But if a car's algorithms can stop someone from drunk driving, or take over when someone is stoned, then why not?

 



The trolley problem isn't actually a real thing. There are always more variables and more options.

2 hours ago, GabD said:

«Rather than pontificating on this, a group of philosophers have taken a more practical approach, and are building algorithms to solve the problem. Nicholas Evans, philosophy professor at Mass Lowell, is working alongside two other philosophers and an engineer to write algorithms based on various ethical theories. Their work, supported by a $556,000 grant from the National Science Foundation, will allow them to create various Trolley Problem scenarios, and show how an autonomous car would respond according to the ethical theory it follows.»

This is pretty ridiculous; it hardly takes half a million dollars to run simulations on existing frameworks, and quite frankly the philosophers' input seems pretty superfluous. In the end, a perfect trolley problem (which doesn't exist in the real world, but let's suppose it did) does not have a fixed solution, by its very nature.

 

I also find it extremely ironic that these philosophers believe they have the authority to make this decision for others. One of the first things you learn in philosophy courses is that being a philosopher does not make you more right than others. Realistically, the way it's most likely going to pan out is that governments will introduce a law that values the lives of the many over the lives of the few every time, the cars will have to abide by it, and this "research" will have been just a waste of money and time.


It doesn't matter which decision the philosophical AI makes, there will always be half the population that thinks it was wrong and it was murder.  Harambe anyone?



It doesn't take half a million dollars to know we should change road infrastructure so this situation never actually happens, lol.


48 minutes ago, Sauron said:

The trolley problem isn't actually a real thing. There are always more variables and more options.

This is pretty ridiculous; it hardly takes half a million dollars to run simulations on existing frameworks, and quite frankly the philosophers' input seems pretty superfluous. In the end, a perfect trolley problem (which doesn't exist in the real world, but let's suppose it did) does not have a fixed solution, by its very nature.

It may not have a moral solution in the real world, but it has a legal one, which is why Harvard's legal, political, and philosophical faculty use it in their lecture "The Moral Side of Murder". Coincidentally, it surprised me that some students still let moral considerations undermine the logical conditions of the problem.

 

Quote

I also find it extremely ironic that these philosophers believe they have the authority to make this decision for others. One of the first things you learn in philosophy courses is that being a philosopher does not make you more right than others. Realistically, the way it's most likely going to pan out is that governments will introduce a law that values the lives of the many over the lives of the few every time, the cars will have to abide by it, and this "research" will have been just a waste of money and time.

Which makes it interesting that philosophers might come undone morally but not legally.  

 

 


1 hour ago, mr moose said:

It doesn't matter which decision the philosophical AI makes, there will always be half the population that thinks it was wrong and it was murder.  Harambe anyone?

Hmm, Harambe was murdered. He was clearly protecting the kid, and died for it.


Vsauce did a video on the trolley problem where they staged a scene with unknowing participants, to see how people reacted in a situation where they could save many at the cost of one life, but only by taking action to make that happen.

 

Link to vid on youtube: 

It's a real issue: what should the AI value more highly in a given situation? Should the AI in a car value the passenger over the people on the street? Would people ride in a car, knowing that it will always value the people on the street over the driver/passenger?

 

Edit: You don't have to pay to watch it. The seasons are locked to YouTube Red, with the exception of episode 1.

Edited by GoldenLag

1 minute ago, VegetableStu said:

"Good morning Mr. Bond. How would you like to die today?"

*death by seatbelt choke


6 minutes ago, VegetableStu said:

"Good morning Mr. Bond. How would you like to die today?"

just let me die.

- snip-


10 hours ago, suicidalfranco said:

Hmm, Harambe was murdered. He was clearly protecting the kid, and died for it.

Thank you for validating my point.


10 hours ago, GoldenLag said:

Would people ride in a car, knowing that it will always value the people on the street over the driver/passenger?

I think it comes down to a few factors, as there are so many variables; however, a car is equipped with safety features such as seat belts, airbags, and the physical structure of the vehicle. A pedestrian has no such protection and should take precedence in that type of scenario for the AI.

 

Would I drive a car with that type of programming? Yes, because we're both more likely to survive. I'm not in the mood for vehicular manslaughter when driving a vehicle, AI or not.


11 hours ago, Sauron said:

The trolley problem isn't actually a real thing. There are always more variables and more options.

That line of reasoning basically kills science as a thing.

Decomposing complex problems into simpler elements is called "abstraction". It works astonishingly well.

11 hours ago, Sauron said:

This is pretty ridiculous, it hardly takes half a million $ to run simulations on existing frameworks and quite frankly the philosophers' input seems pretty superflous.

 

11 hours ago, Sauron said:

In the end, a perfect trolley problem (which doesn't exist in the real world, but let's suppose it did)

Spherical horses don't exist either. Think of that the next time you see a car, self-driving or not, that actually works.

 

11 hours ago, Sauron said:

I also find it extremely ironic that these philosophers believe they have the authority to make this decision for others.

That's a very poor understanding of what they are doing. Who exactly is getting stripped of their ability to make decisions?

 

What they are working on is actually very valuable. Philosophers have cumulatively thought extensively about ethics, and more generally, reflection on ethical problems informs our legal systems. We basically rule on what is acceptable behavior based on a certain ethical standpoint. Now, making rules to judge human behavior in democracies is tasked to citizens, either directly or through representatives. Philosophers don't directly legislate, nor do they make decisions for others. However, autonomous cars are not persons, yet if the decision process is truly autonomous, then decisions will be made. You see, I don't have to worry about the ethics of my cup, because it doesn't make any decisions anyway. As for my decisions, I'm free to make them, but I may face legal consequences for them. Autonomous cars are not cups, nor subjects of law. Moreover, their decisions are based on contingent rules coded in algorithms. There is no unpredictability in their behavior (at most, bugs). That means that these are machines that will come with pre-coded decisions for every potential event (explicitly or implicitly), since, as the trolley problem clearly illustrates to anyone who actually understands its point, not executing an action as a consequence of new information can be as much of an active decision as executing one in a given context, removing the illusion of "neutrality" that we sometimes attach to inertial behavior.

Ultimately, the ethically desirable rules to embed in autonomous machines are a political problem that, in democracies, is decided in parliament. However, we cannot make informed decisions if we don't understand the consequences of our choices. This team puts together philosophers with deep knowledge of the ethical systems that guide much of our legislation, with engineers who can code those principles into self-driving algorithms. Simulating the behavior of a car that follows such an algorithm in a wide range of situations is necessary to avoid passing legislation based on what some dude (or army of dudes, for that matter) in an armchair cleverly decided was a good idea. The point is not for some philosopher to tell us "I determined that the best ethics is X, now go do it", but precisely the opposite: to gain understanding of what real-life situations would look like under different systems. As a person, you can always claim to adhere to some system of values with very strict rules of behavior, but when confronted with a particular situation, nothing prevents you from deviating and doing something else, because it seems like a better choice to you at the time. Hence, laws only need to deal with specific human behavior on a case-by-case basis. There is no reason to judge your (proclaimed) system of values in court, just what you actually did on day X, at time Y, in circumstance Z. Autonomous machines don't actually make "decisions" in the same sense; they will follow a particular decision-making rule to the letter, in a predictable manner, hence why the law-making problem and the case-judging problem fuse into one: there are no separate instances, and any action you may want to judge later is actually coded from the get-go in the self-driving algorithm.
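
To put the "inaction is also a coded decision" point in concrete terms, here is a minimal hypothetical sketch (my own invention, not anyone's real control code):

# Hypothetical: in an autonomous controller, "doing nothing" is itself
# a branch that somebody wrote. All names and values are invented.
def plan(obstacle_ahead: bool, clear_lane_available: bool) -> str:
    if obstacle_ahead and clear_lane_available:
        return "swerve"      # an explicit, pre-coded choice
    if obstacle_ahead:
        return "brake"       # also an explicit, pre-coded choice
    return "hold course"     # "inaction", equally coded from the get-go

print(plan(obstacle_ahead=True, clear_lane_available=False))  # brake

Whatever the car ends up doing in court-worthy circumstances, some branch like these was written, reviewed, and shipped in advance.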

 

So no, you can put down the anti-science pitchfork; nobody is trying to take away anyone's right to make decisions. They are just providing a systematic description of what ethical systems would look like in practice in the context of autonomous driving.

 


2 hours ago, divito said:

I think it comes down to a few factors, as there are so many variables; however, a car is equipped with safety features such as seat belts, airbags, and the physical structure of the vehicle. A pedestrian has no such protection and should take precedence in that type of scenario for the AI.

 

Would I drive a car with that type of programming? Yes, because we're both more likely to survive. I'm not in the mood for vehicular manslaughter when driving a vehicle, AI or not.

Tbh I kinda agree, because I would likely drive off the road if I was about to hit a person. This is assuming that I wouldn't be driving off the road into other people.


55 minutes ago, SpaceGhostC2C said:

So no, you can put down the anti-science pitchfork


 

It takes a lot of imagination to read anything I wrote as anti-science...

1 hour ago, SpaceGhostC2C said:

 They are just providing a systematic description of what ethical systems would look like in practice in the context of autonomous driving.

You still gave no compelling reason why they should be in any way more qualified to do this than anyone else, or why they need half a million dollars to do it, unless it's just their professor paychecks for that time frame.

1 hour ago, SpaceGhostC2C said:

Simulating the behavior of a car that follows such an algorithm in a wide range of situations is necessary to avoid passing legislation based on what some dude (or army of dudes, for that matter) in an armchair cleverly decided was a good idea.

Again, this can all be done without a single philosopher in the room - or garage.

1 hour ago, SpaceGhostC2C said:

That means that these are machines that will come with pre-coded decisions for every potential event (explicitly or implicitly), since, as the trolley problem clearly illustrates to anyone who actually understands its point, not executing an action as a consequence of new information can be as much of an active decision as executing one in a given context, removing the illusion of "neutrality" that we sometimes attach to inertial behavior.

Again... at which point do the philosophers come in to provide useful advice? How do you even define useful advice in this case? Autonomous cars will face some difficult decisions, and their inaction may cause more harm than their action, but the question here is not "how do we program those decisions in", because if it were, the engineers would be perfectly capable of figuring something out within the limits of current technology by themselves. Unless they are there to provide a "morally correct answer" to such situations, they are pretty much useless.

1 hour ago, SpaceGhostC2C said:

Autonomous cars are [...] subjects of law.

That's not true. Their programming can (and most likely will) be subject to local legislation before they are allowed on the market. The car itself is obviously not responsible for its actions, but those who programmed and sold it are.

1 hour ago, SpaceGhostC2C said:

What they are working on is actually very valuable. Philosophers have cumulatively thought extensively about ethics, and more generally reflection on ethical problems informs our legal systems.

Again, so what? I could read, think, and meditate about a topic for decades and still come up with a moral stance that you may abhor. Plenty of philosophers in the past came to conclusions I would consider horrific, or at least incredibly misguided. What makes any of them more qualified than anyone else to decide which experiments to run? Again, engineers can run simulations and develop algorithms too.

1 hour ago, SpaceGhostC2C said:

That's very poor understanding of what they are doing. Who exactly is getting stripped of their ability to make decisions?

Then make a better effort at explaining it, because neither the article nor you have been able to clarify what exactly it is these people are doing that requires a bunch of philosophers running simulations. I may have jumped the gun a little in claiming they are making decisions for others, but in the end, the scenarios being considered depend exclusively on what they deem appropriate.

 

1 hour ago, SpaceGhostC2C said:

That line of reasoning basically kills science as a thing.

Decomposing complex problems into simpler elements is called "abstraction". It works astonishingly well.

Except abstraction only helps if it provides an answer.

1 hour ago, SpaceGhostC2C said:

Spherical horses don't exist either. Think of that the next time you see a car, self-driving or not, that actually works.

So a hypothetical situation is the same as the idea for an invention to you? This comparison just doesn't work at all. The trolley problem is an extreme oversimplification that simply doesn't occur in real life. In my opinion, this is one of the cases where abstraction defeats the original purpose and provides no meaningful answer.


I'll just leave this here.

http://moralmachine.mit.edu/


43 minutes ago, Kukielka said:

I'll just leave this here.

http://moralmachine.mit.edu/

That seems pointless. Just like every other driver on the planet, the current situation is that the decision-making is self-preservation first, genetic preservation second, and herd preservation last. It doesn't matter how many pedestrians it kills; its primary goal has to be the occupants, or it's not doing its job of being safer than the occupant driving.
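
Taken literally, that hierarchy is just a lexicographic comparison. A rough hypothetical sketch (outcome numbers invented for illustration):

# Hypothetical: occupant deaths minimized first, then kin, then bystanders.
outcomes = [
    {"name": "brake hard",  "occupants": 0, "kin": 1, "bystanders": 3},
    {"name": "swerve left", "occupants": 1, "kin": 0, "bystanders": 0},
]

best = min(outcomes, key=lambda o: (o["occupants"], o["kin"], o["bystanders"]))
print(best["name"])  # "brake hard": occupant safety trumps everything else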

 

 


I tend to be a philosophical person, and I just have to say this doesn't sound very safe at all. There are only so many questions that can be asked and made into an algorithm before the question "what is this, even?" gets asked.

1 hour ago, mr moose said:

That seems pointless. Just like every other driver on the planet, the current situation is that the decision-making is self-preservation first, genetic preservation second, and herd preservation last. It doesn't matter how many pedestrians it kills; its primary goal has to be the occupants, or it's not doing its job of being safer than the occupant driving.

 

 

I agree with your order of decision-making; however, that doesn't cover everything. That is where that study kicks in. :)

What is the morally best thing to do in a certain situation? There is no universal answer.


9 hours ago, Kukielka said:

I agree with your order of decision-making; however, that doesn't cover everything. That is where that study kicks in. :)

What is the morally best thing to do in a certain situation? There is no universal answer.

I won't get in a car (or any man-made tool) that doesn't prioritize my life over everyone else's.

 


To me, this problem will be solved by how lawsuits play out in the courts, and how public opinion plays out in purchases.

 

If owners win more lawsuits than pedestrians, cars will favor owners.

If parents refuse to buy cars that will kill their kids instead of some random stranger, cars will favor occupants.

 

More importantly, if the car has time to figure out who is more "valuable" as a human being, something has already gone 100% wrong.  The car should never be driving faster than it can handle.  It should always be 100% focused on simply stopping the vehicle as safely as possible, irrespective of what it is trying not to hit!!!  If the car makes ANY decisions, the maker will be liable for those decisions.  This is where the makers will choose based on who wins the most lawsuits and how the laws are written.
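
"Just stop, don't choose" is itself a policy you could write down. A hypothetical sketch with made-up parameters (the liability question stays exactly the same either way):

# Hypothetical "brake-only" policy: never rank potential victims, just
# shed speed as fast as traction allows. Parameters are invented.
def brake_only_policy(speed_mps: float, max_decel_mps2: float = 8.0) -> dict:
    stopping_time = speed_mps / max_decel_mps2
    stopping_dist = speed_mps ** 2 / (2 * max_decel_mps2)
    return {"action": "full brake, hold lane",
            "stopping_time_s": round(stopping_time, 2),
            "stopping_distance_m": round(stopping_dist, 2)}

print(brake_only_policy(25.0))  # at ~90 km/h: stops in about 39 m and 3.1 s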


1 hour ago, mr moose said:

I won't get in a car (or any man-made tool) that doesn't prioritize my life over everyone else's.

 

I've got some troubling news about air travel for you, then...


1 hour ago, ChineseChef said:

The car should never be driving faster than it can handle.  It should always be 100% focused on simply stopping the vehicle as safely as possible, irrespective of what it is trying not to hit!!! 

Things aren't as simple as "drive slower and it will be able to react in a way that will not kill anyone 100% of the time".

The only car that will be able to stop without killing someone 100% of the time is a car without wheels or an engine.

 

What you have to remember is that it's not necessarily the car itself that's the root cause of the dangerous situation. It might be a case where another car is on its way to drive into a bunch of people. Should the car drive itself into the road to block it, or should it stand still and watch as a car runs over 20 people? That's just an example where even a car driving 0 mph has to make a decision about who to save. Another one would be if you get rear-ended: the car applies the brakes, but it's not enough to stop. What direction should it steer in that case?

Or what about if one tire blows out? In such a scenario, the only options the car might have are "should I try and veer left, or try and veer right?"

There are also scenarios where braking might cause more danger than continuing forward, such as when you have a fast-moving car behind you. Simply applying the brakes will cause the car behind you to drive into you.

 

There's an infinite number of scenarios where a self-driving car will have to make ethical decisions in situations it did not cause itself.
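
Those "veer left or veer right" cases are exactly where a controller has to rank options it never wanted to have. A hypothetical sketch of what that comparison might look like (the risk model and numbers are invented stand-ins):

# Hypothetical: when braking alone can't avoid harm, rank what's left.
def expected_harm(option):
    # Crude placeholder for a real risk model.
    return option["hit_probability"] * option["people_exposed"]

blowout_options = [
    {"name": "veer left",  "hit_probability": 0.3, "people_exposed": 2},
    {"name": "veer right", "hit_probability": 0.7, "people_exposed": 1},
]

print(min(blowout_options, key=expected_harm)["name"])  # veer left (0.6 < 0.7)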


On 2/13/2018 at 6:45 AM, GabD said:

Perhaps Evans’s algorithms will show that one moral theory will lead to more lives saved than another, or perhaps the results will be more complicated. “It’s not just about how many people die but which people die or whose lives are saved,” says Evans. It’s possible that two scenarios will save equal numbers of lives, but not of the same people.

I agree with Evans that "it's not just about how many you save", but deciding who gets to live and who gets to die purely based on someone's appearance will inevitably be heavily influenced by the biases of the person writing the algorithm. It will not end well.

You essentially have to tell someone, "You look old, so your life is not as important as this person who looks younger." Or, "You look poor, so your life is worth less than someone who appears to be richer." Or something along those lines.
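
To see how directly that bias would live in the code, imagine the function someone would have to write. A deliberately crude hypothetical sketch; every number below is an arbitrary judgment, not a fact about anyone:

# Hypothetical and deliberately uncomfortable: any "whose life matters more"
# function is just the author's prejudices turned into weights.
def life_value(apparent_age: int, looks_wealthy: bool) -> float:
    value = 10.0
    if apparent_age > 60:
        value -= 4.0   # who chose this penalty, and why?
    if looks_wealthy:
        value += 2.0   # and who chose this bonus?
    return value

print(life_value(70, False))  # 6.0: the programmer's bias, quantified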

