
Philosophers are building ethical algorithms to help control self-driving cars


Posted by GabD (OP)

On the topic of "jobs and fields that won't be replaced by automation", a philosophy degree is apparently still going to be relevant for a while.

https://qz.com/1204395/self-driving-cars-trolley-problem-philosophers-are-building-ethical-algorithms-to-solve-the-problem/
 

«Artificial intelligence experts and roboticists aren’t the only ones working on the problem of autonomous vehicles. Philosophers are also paying close attention to the development of what, from their perspective, looks like a myriad of ethical quandaries on wheels.

The field has been particularly focused over the past few years on one particular philosophical problem posed by self-driving cars: They are a real-life enactment of a moral conundrum known as the Trolley Problem. In this classic scenario, a trolley is going down the tracks towards five people. You can pull a lever to redirect the trolley, but there is one person stuck on the only alternative track. The scenario exposes the moral tension between actively doing versus allowing harm: Is it morally acceptable to kill one to save five, or should you allow five to die rather than actively hurt one?»
 

«Rather than pontificating on this, a group of philosophers have taken a more practical approach, and are building algorithms to solve the problem. Nicholas Evans, philosophy professor at UMass Lowell, is working alongside two other philosophers and an engineer to write algorithms based on various ethical theories. Their work, supported by a $556,000 grant from the National Science Foundation, will allow them to create various Trolley Problem scenarios, and show how an autonomous car would respond according to the ethical theory it follows.»

«[H]e hopes the results from his algorithms will allow others to make an informed decision, whether that’s by car consumers or manufacturers. Evans isn’t currently collaborating with any of the companies working to create autonomous cars, but hopes to do so once he has results.

Perhaps Evans’s algorithms will show that one moral theory will lead to more lives saved than another, or perhaps the results will be more complicated. “It’s not just about how many people die but which people die or whose lives are saved,” says Evans. It’s possible that two scenarios will save equal numbers of lives, but not of the same people.»

«“One of the hallmarks of a good experiment in medicine, but also in science more generally, is that participants are able to make informed decisions about whether or not they want to be part of that experiment,” he said. “Hopefully, some of our research provides that information that allows people to make informed decisions when they deal with their politicians.”»
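It's worth being concrete about what "an algorithm per ethical theory" could even look like. Below is a minimal, purely hypothetical sketch in Python; the scenario fields, the two scoring rules, and all the numbers are my own illustrative assumptions, not anything from Evans's actual project:

```python
# Purely illustrative sketch of "one algorithm per ethical theory".
# Nothing here is from the NSF project; all names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str       # what the car does
    deaths: int            # expected fatalities
    requires_action: bool  # does the car actively intervene (swerve)?

def utilitarian_score(o: Outcome) -> float:
    # Pure consequentialism: only the body count matters.
    return -o.deaths

def deontological_score(o: Outcome) -> float:
    # Doing vs. allowing: actively causing deaths is penalized more
    # heavily than failing to prevent them.
    penalty = 10.0 if (o.requires_action and o.deaths > 0) else 0.0
    return -o.deaths - penalty

def choose(outcomes, score):
    return max(outcomes, key=score)

trolley = [
    Outcome("stay on course, hit five", deaths=5, requires_action=False),
    Outcome("swerve, hit one", deaths=1, requires_action=True),
]

print(choose(trolley, utilitarian_score).description)    # -> swerve, hit one
print(choose(trolley, deontological_score).description)  # -> stay on course, hit five
```

The interesting output isn't either rule on its own but the comparison: two defensible theories, two different crashes, which is exactly the kind of side-by-side result the article says the team wants to put in front of consumers and legislators.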


Does this mean machine learning algorithms can now settle the age-old question among philosophers and neuroscientists: "Do human beings have free will, or is behavior determined by past experience?"

 

But if an algorithm can stop someone from drunk driving, or take over the car when someone is stoned, then why not?

 




The trolley problem isn't actually a real thing. There are always more variables and more options.

2 hours ago, GabD said:

«Rather than pontificating on this, a group of philosophers have taken a more practical approach, and are building algorithms to solve the problem. Nicholas Evans, philosophy professor at UMass Lowell, is working alongside two other philosophers and an engineer to write algorithms based on various ethical theories. Their work, supported by a $556,000 grant from the National Science Foundation, will allow them to create various Trolley Problem scenarios, and show how an autonomous car would respond according to the ethical theory it follows.»

This is pretty ridiculous; it hardly takes half a million dollars to run simulations on existing frameworks, and frankly the philosophers' input seems superfluous. In the end, a perfect trolley problem (which doesn't exist in the real world, but let's suppose it did) does not have a fixed solution, by its very nature.

 

I also find it extremely ironic that these philosophers believe they have the authority to make this decision for others. One of the first things you learn in philosophy courses is that being a philosopher does not make you more right than others. Realistically, the way it's most likely going to pan out is that governments will introduce a law that values the lives of the many over the lives of the few every time, the cars will have to abide by it, and this "research" will have been just a waste of money and time.



Link to post
Share on other sites

It doesn't matter which decision the philosophical AI makes; there will always be half the population that thinks it was wrong and that it was murder. Harambe, anyone?



48 minutes ago, Sauron said:

The trolley problem isn't actually a real thing. There are always more variables and more options.

This is pretty ridiculous; it hardly takes half a million dollars to run simulations on existing frameworks, and frankly the philosophers' input seems superfluous. In the end, a perfect trolley problem (which doesn't exist in the real world, but let's suppose it did) does not have a fixed solution, by its very nature.

It may not have a moral solution in the real world, but it has a legal one, which is why Harvard's law, politics, and philosophy lectures use it in "The Moral Side of Murder". What surprised me, incidentally, is that some students still let moral intuitions undermine the logical conditions of the problem.

 

Quote

I also find it extremely ironic that these philosophers believe they have the authority to make this decision for others. One of the first things you learn in philosophy courses is that being a philosopher does not make you more right than others. Realistically, the way it's most likely going to pan out is that governments will introduce a law that values the lives of the many over the lives of the few every time, the cars will have to abide by it, and this "research" will have been just a waste of money and time.

Which makes it interesting that philosophers might come undone morally, but not legally.

 

 



1 hour ago, mr moose said:

It doesn't matter which decision the philosophical AI makes; there will always be half the population that thinks it was wrong and that it was murder. Harambe, anyone?

Hum, Harambe's death was murder. He was clearly protecting the kid, and died for it.




Vsauce did a video on the trolley problem in which they staged a scene with unknowing participants, to see how people reacted in a situation where they could save many at the cost of one life, but only by taking action to make that happen.

 

Link to vid on youtube: 

It's a real issue: what should the AI value more highly in a given situation? Should the AI in a car value the passenger over the people on the street? Would people ride in a car knowing that it will always value the people on the street over the driver/passenger?
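One way to make that question concrete is to treat occupant priority as a single tunable weight and watch the decision flip. A hypothetical sketch; the weight values and casualty counts are invented for illustration:

```python
# Hypothetical: occupant priority as a single tunable weight.
# A weight of 1.0 values everyone equally; higher values favor
# whoever is inside the car. All numbers are invented.
def expected_loss(occupant_deaths, pedestrian_deaths, occupant_weight):
    return occupant_weight * occupant_deaths + pedestrian_deaths

def decide(occupant_weight):
    # Option A: swerve into a barrier, the occupant dies.
    swerve = expected_loss(1, 0, occupant_weight)
    # Option B: brake in a straight line, two pedestrians die.
    brake = expected_loss(0, 2, occupant_weight)
    return "swerve" if swerve < brake else "brake"

print(decide(1.0))  # "swerve": everyone weighted equally, loss 1 < 2
print(decide(3.0))  # "brake": occupant weighted 3x, loss 3 > 2
```

Whether that weight should be 1.0, 3.0, or something the buyer configures is precisely the question being argued about.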

 

Edit: You don't have to pay to watch it; the seasons are locked to YouTube Red, with the exception of episode 1.

Edited by GoldenLag
1 minute ago, VegetableStu said:

"Good morning Mr. Bond. How would you like to die today?"

*death by seatbelt choke

10 hours ago, suicidalfranco said:

Hum, Harambe's death was murder. He was clearly protecting the kid, and died for it.

Thank you for validating my point.


QuicK and DirtY. Read the CoC it's like a guide on how not to be moron.  Also I don't have an issue with the VS series.

10 hours ago, GoldenLag said:

Would people ride in a car knowing that it will always value the people on the street over the driver/passenger?

I think it comes down to a few factors, as there are so many variables; however, a car is equipped with safety features such as seat belts, airbags, and the physical structure of the vehicle itself. A pedestrian has no such protection and should take precedence in that type of scenario for the AI.

 

Would I drive a car with that type of programming? Yes, because we're both more likely to survive. I'm not in the mood for vehicular manslaughter when driving a vehicle, AI or not.

11 hours ago, Sauron said:

The trolley problem isn't actually a real thing. There are always more variables and more options.

That line of reasoning basically kills science as a thing.

Decomposing complex problems into simpler elements is called "abstraction". It works astonishingly well.

11 hours ago, Sauron said:

This is pretty ridiculous; it hardly takes half a million dollars to run simulations on existing frameworks, and frankly the philosophers' input seems superfluous.

 

11 hours ago, Sauron said:

In the end, a perfect trolley problem (which doesn't exist in the real world, but let's suppose it did)

Spherical horses don't exist either. Think of that the next time you see a car, self-driving or not, that actually works.

 

11 hours ago, Sauron said:

I also find it extremely ironic that these philosophers believe they have the authority to make this decision for others.

That's a very poor understanding of what they are doing. Who, exactly, is being stripped of their ability to make decisions?

 

What they are working on is actually very valuable. Philosophers have cumulatively thought extensively about ethics, and more generally, reflection on ethical problems informs our legal systems: we rule on what counts as acceptable behavior based on a certain ethical standpoint. Making rules to judge human behavior in democracies is tasked to citizens, either directly or through representatives. Philosophers don't directly legislate, nor do they make decisions for others.

Autonomous cars, however, are not persons, yet if the driving process is truly autonomous, then decisions will be made. You see, I don't have to worry about the ethics of my cup, because it doesn't make any decisions anyway. As for my decisions, I'm free to make them, but I may face legal consequences for them. Autonomous cars are neither cups nor subjects of law, and their decisions are based on contingent rules coded in algorithms; there is no unpredictability in their behavior (at most, bugs). These are machines that will ship with pre-coded decisions for every potential event, explicitly or implicitly. As the trolley problem illustrates to anyone who actually understands its point, not executing an action in response to new information can be just as much an active decision as executing one, which removes the illusion of "neutrality" that we sometimes attach to inertial behavior.
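The "inaction is still a decision" point is easy to see in code: in any control loop, the do-nothing branch is a line someone wrote, same as the others. A hypothetical sketch with invented names:

```python
# Hypothetical control step: the "do nothing" branch is just as much
# a programmed decision as the others. Names are invented.
def plan(obstacle_ahead: bool, clear_lane_available: bool) -> str:
    if obstacle_ahead and clear_lane_available:
        return "swerve"
    if obstacle_ahead:
        return "emergency_brake"
    # Still a decision someone coded: absent a trigger, keep the
    # current trajectory, with whatever consequences that has.
    return "hold_course"

print(plan(obstacle_ahead=False, clear_lane_available=False))  # hold_course
```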

Ultimately, which ethical rules to embed in autonomous machines is a political problem, and in democracies it is decided in parliament. However, we cannot make informed decisions if we don't understand the consequences of our choices. This team pairs philosophers, with their deep knowledge of the ethical systems that guide much of our legislation, with engineers who can code those principles into self-driving algorithms. Simulating the behavior of a car that follows such an algorithm across a wide range of situations is necessary to avoid passing legislation based on whatever some dude in an armchair (or an army of dudes, for that matter) cleverly conceived to be a good idea.

The point is not for some philosopher to tell us "I determined that the best ethics is X, now go do it," but precisely the opposite: to gain an understanding of what real-life situations would look like under different systems. As a person, you can always claim to adhere to some system of values with very strict rules of behavior, but when confronted with a particular situation, nothing prevents you from deviating and doing something else, because it seems like a better choice to you at the time. Hence, laws only need to deal with specific human behavior, case by case: there is no reason to judge your (proclaimed) system of values in court, only what you actually did on day X, at time Y, in circumstance Z. Autonomous machines don't make "decisions" in that sense; they follow a particular decision-making rule to the letter, in a predictable manner. That is why the law-making problem and the case-judging problem fuse into one: there are no separate instances, and any action you might want to judge later is coded from the get-go into the self-driving algorithm.

 

So no, you can put down the anti-science pitchfork; nobody is trying to take away anyone's right to make decisions. They are just providing a systematic description of what ethical systems would look like in practice in the context of autonomous driving.

 

2 hours ago, divito said:

I think it comes down to a few factors, as there are so many variables; however, a car is equipped with safety features such as seat belts, airbags, and the physical structure of the vehicle itself. A pedestrian has no such protection and should take precedence in that type of scenario for the AI.

 

Would I drive a car with that type of programming? Yes, because we're both more likely to survive. I'm not in the mood for vehicular manslaughter when driving a vehicle, AI or not.

Tbh, I kinda agree, because I would likely drive off the road if I were about to hit a person. This is assuming that I wouldn't be driving off the road into other people.

55 minutes ago, SpaceGhostC2C said:

So no, you can put down the anti-science pitchfork


 

It takes a lot of imagination to read anything I wrote as anti-science...

1 hour ago, SpaceGhostC2C said:

 They are just providing a systematic description of what ethical systems would look like in practice in the context of autonomous driving.

You still haven't given a compelling reason why they should be any more qualified to do this than anyone else, or why they need half a million dollars to do it, unless it's just their professor paychecks for that time frame.

1 hour ago, SpaceGhostC2C said:

Simulating the behavior of a car that follows such an algorithm across a wide range of situations is necessary to avoid passing legislation based on whatever some dude in an armchair (or an army of dudes, for that matter) cleverly conceived to be a good idea.

Again, this can all be done without a single philosopher in the room - or garage.

1 hour ago, SpaceGhostC2C said:

These are machines that will ship with pre-coded decisions for every potential event, explicitly or implicitly. As the trolley problem illustrates to anyone who actually understands its point, not executing an action in response to new information can be just as much an active decision as executing one, which removes the illusion of "neutrality" that we sometimes attach to inertial behavior.

Again... at which point do the philosophers come in to provide useful advice? How do you even define useful advice in this case? Autonomous cars will face difficult decisions, and their inaction may cause more harm than their action, but the question here is not "how do we program those decisions in", because if it were, the engineers would be perfectly capable of figuring something out within the limits of current technology by themselves. Unless they are there to provide a "morally correct answer" to such situations, they are pretty much useless.

1 hour ago, SpaceGhostC2C said:

Autonomous cars are [...] subjects of law.

That's not true. Their programming can (and most likely will) be subject to local legislation before they are allowed on the market. The car itself is obviously not responsible for its actions, but those who programmed and sold it are.

1 hour ago, SpaceGhostC2C said:

What they are working on is actually very valuable. Philosophers have cumulatively thought extensively about ethics, and more generally, reflection on ethical problems informs our legal systems.

Again, so what? I could read, think, and meditate about a topic for decades and still come up with a moral stance that you might abhor. Plenty of philosophers in the past reached conclusions I would consider horrific, or at least incredibly misguided. What makes any of them more qualified than anyone else to decide which experiments to run? Again, engineers can run simulations and develop algorithms too.

1 hour ago, SpaceGhostC2C said:

That's a very poor understanding of what they are doing. Who, exactly, is being stripped of their ability to make decisions?

Then make a better effort to explain it, because neither the article nor you have been able to clarify what exactly these people are doing that requires a bunch of philosophers running simulations. I may have jumped the gun a little in claiming they are making decisions for others, but in the end, the scenarios being considered depend exclusively on what they deem appropriate.

 

1 hour ago, SpaceGhostC2C said:

That line of reasoning basically kills science as a thing.

Decomposing complex problems into simpler elements is called "abstraction". It works astonishingly well.

Except abstraction only helps if it provides an answer.

1 hour ago, SpaceGhostC2C said:

Spherical horses don't exist either. Think of that the next time you see a car, self-driving or not, that actually works.

So a hypothetical situation is the same as the idea for an invention, to you? The comparison just doesn't work. The trolley problem is an extreme oversimplification that simply doesn't occur in real life. In my opinion, this is one of the cases where abstraction defeats the original purpose and provides no meaningful answer.




I'll just leave this here.

http://moralmachine.mit.edu/



43 minutes ago, Kukielka said:

I'll just leave this here.

http://moralmachine.mit.edu/

That seems pointless. Just like with every other driver on the planet, the current decision-making hierarchy is self-preservation first, genetic preservation second, and herd preservation last. It doesn't matter how many pedestrians it kills; its primary goal has to be the occupants, or it's not doing its job of being safer than the occupant driving.
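For what it's worth, that "self first, kin second, herd last" ordering is a lexicographic priority, and it's trivially expressible in code; a hypothetical sketch, with invented harm numbers:

```python
# Hypothetical: "self first, kin second, herd last" is a lexicographic
# ordering, which Python expresses directly with tuple keys.
# The harm numbers are invented for illustration.
options = [
    {"name": "brake hard",  "occupant_harm": 0, "kin_harm": 0, "herd_harm": 2},
    {"name": "swerve left", "occupant_harm": 1, "kin_harm": 0, "herd_harm": 0},
]
best = min(options, key=lambda o: (o["occupant_harm"], o["kin_harm"], o["herd_harm"]))
print(best["name"])  # "brake hard": occupant safety trumps everything downstream
```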

 

 




I tend to be a philosophical person, and I just have to say this doesn't sound very safe at all. There are only so many questions that can be asked and turned into an algorithm before the question "what is this, even?" gets asked.


"I tried to set you free, you keep trying to rescue me, but you can't, tell a heart, when to start, how to beat....."

*Kina Grannis saved my life*

1 hour ago, mr moose said:

That seems pointless. Just like with every other driver on the planet, the current decision-making hierarchy is self-preservation first, genetic preservation second, and herd preservation last. It doesn't matter how many pedestrians it kills; its primary goal has to be the occupants, or it's not doing its job of being safer than the occupant driving.

 

 

I agree with your order of decision-making; however, that doesn't cover everything. That is where that study kicks in. :)

What is the morally best thing to do in a given situation? There is no universal answer.



9 hours ago, Kukielka said:

I agree with your order of decision-making; however, that doesn't cover everything. That is where that study kicks in. :)

What is the morally best thing to do in a given situation? There is no universal answer.

I won't get in a car (or any man-made tool) that doesn't prioritize my life over everyone else's.

 




To me, this problem will be solved by how lawsuits play out in the courts, and how public opinion plays out in purchases.

 

If owners win more lawsuits than pedestrians, cars will favor owners.

If parents refuse to buy cars that will kill their kids instead of some random stranger, cars will favor occupants.

 

More importantly, if the car has time to figure out who is more "valuable" as a human being, something has already gone 100% wrong. The car should never be driving faster than it can handle. It should always be 100% focused on simply stopping the vehicle as safely as possible, irrespective of what it is trying not to hit!!! If the car makes ANY decisions, the maker will be liable for those decisions. This is where the makers will choose based on who wins the most lawsuits and how the laws are written.
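That last point can even be quantified: "never driving faster than it can handle" just means the stopping distance has to stay inside the sensing range. A back-of-the-envelope sketch; the friction and latency figures are typical textbook assumptions, not measured values:

```python
# Back-of-the-envelope check: is the car outdriving its sensors?
# stopping distance = reaction distance + braking distance
#                   = v * t_react + v^2 / (2 * mu * g)
# mu and t_react below are assumed typical values, not measurements.
MU = 0.7        # assumed tire-road friction, dry asphalt
G = 9.81        # gravity, m/s^2
T_REACT = 0.5   # assumed sensing + actuation latency, s

def stopping_distance(v_mps: float) -> float:
    return v_mps * T_REACT + v_mps**2 / (2 * MU * G)

def speed_is_safe(v_mps: float, sensor_range_m: float) -> bool:
    return stopping_distance(v_mps) <= sensor_range_m

v = 100 / 3.6  # 100 km/h in m/s
print(round(stopping_distance(v), 1))  # ~70.1 m
print(speed_is_safe(v, 60.0))          # False: only 60 m of sensor range
```

If the check fails, the correct output isn't a moral ranking of targets; it's a lower speed.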

