Autonomous cars and Unintended Consequences


No one has been able to repeal the law of unintended consequences. No matter how hard we try, if something can go wrong, it will. Remember, Murphy was an optimist.

There’s an article from Quartz Media posted today on parknews.biz that talks about philosophers building ‘ethical algorithms’ to address moral problems with self-driving cars. Just think about it: the same mindset that keeps machines running in a factory, or allows you to get money from an ATM, is going to decide whether you live or die in an accident. I can hardly wait.

Consider the “Trolley Problem.”

The field has been particularly focused over the past few years on one particular philosophical problem posed by self-driving cars: They are a real-life enactment of a moral conundrum known as the Trolley Problem. In this classic scenario, a trolley is going down the tracks towards five people. You can pull a lever to redirect the trolley, but there is one person stuck on the only alternative track. The scenario exposes the moral tension between actively doing versus allowing harm: Is it morally acceptable to kill one to save five, or should you allow five to die rather than actively hurt one?

Though the Trolley Problem sounds farfetched, autonomous vehicles will be unable to avoid comparable scenarios. If a car is in a situation where any action will put either the car passenger or someone else in danger—if there’s a truck crash ahead and the only options are to swerve into a motorbike or off a cliff—then how should the car be programmed to respond?

I’m sure Mr. Spock would have a “the good of the many outweighs the good of the few, or the one” answer, but Kirk might disagree. In the second scenario above, is the car going to save your life, or the life of the motorcycle rider? Somehow I think I want to make that decision, no matter which way I would swerve.

The issues aren’t just ‘life or death.’ Consider these posited by a professor at Cal Poly:

Patrick Lin, philosophy professor at Cal Poly, San Luis Obispo, is one of the few philosophers who’s examining the ethics of self-driving cars outside the Trolley Problem. There are concerns about advertising (could cars be programmed to drive past certain shops?), liability (who is responsible if the car is programmed to put someone at risk?), social issues (drinking could increase once drunk driving isn’t a concern), and privacy (“an autonomous car is basically big brother on wheels,” Lin said). There may even be negative consequences of otherwise positive results: If autonomous cars increase road safety and fewer people die on the road, will this lead to fewer organ transplants?

Autonomous cars will likely have massive unforeseen effects. “It’s like predicting the effects of electricity,” Lin said. “Electricity isn’t just the replacement for candles. Electricity caused so many things to come to life—institutions, cottage industries, online life. Ben Franklin could not have predicted that, no one could have predicted that. I think robotics and AI are in a similar category.”

I’m sure of only one thing: Unintended Consequences will happen. And I’m sure we won’t like it.

JVH


John Van Horn
