
Morality and Autonomous Cars: How will they really be handled?

Swerve and kill a man crossing illegally in a crosswalk, or drive straight into a barricade and kill two passengers.

What do you choose?

Sound far-fetched? Maybe not. That is one of many possible choices that self-driving cars might have to make in complex traffic situations.

While extreme, scenarios like this could become major news headlines once autonomous vehicles begin shuttling passengers around in volume. To bring awareness to the issue, decision making for autonomous vehicles is being simulated in the MIT Media Lab’s Moral Machine.

The Moral Machine is a project meant to illuminate the kinds of choices involved in machine decision making and how humans weigh the repercussions of those choices. The website walks visitors through several moral dilemmas related to driving, each without a clear right answer. Test takers have to make tough calls about the value of life in potentially fatal driving situations.

Are Robots Ready?

The rise of robot learning isn’t new. We’ve always had a slight obsession with machines. Building them. Improving them. Creating movies where they become sentient. They’ve replaced jobs, made menial and repetitive tasks easier, and even respond to our voice commands. But can we trust them with human-level decisions? Can we rely on them to execute morality? Can machines truly learn, or are they subject to the decision making of their makers?

Questions. So many questions.

There is some inherent difficulty in handing moral decisions over to machines. According to an article in Yale Scientific, one of the main challenges is approaching the experience computationally. Where does the computational pattern end? How does a machine determine, through calculation, an emotion like empathy? Can difficult decisions be solved with a math equation? While math can be considered a universal language, it’s far from clear it can speak about feelings of love, sadness, and humility.
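To make that difficulty concrete, here is a minimal sketch, in Python, of what “morality as a math equation” might look like: a utility function that scores outcomes and picks the cheaper one. Every name, score, and weight below is a hypothetical invention for illustration, not any real vehicle’s logic.

```python
# A deliberately naive "morality as math" sketch. All names, scores, and
# weights below are hypothetical illustrations, not any real system's values.

def outcome_cost(outcome):
    """Score an outcome by summing invented per-harm costs."""
    # The hard part isn't the arithmetic; it's deciding these numbers.
    COSTS = {
        "pedestrian_fatality": 1.0,
        "passenger_fatality": 1.0,
        "unlawful_crossing_discount": -0.1,  # should this even exist?
    }
    return sum(COSTS[harm] for harm in outcome)

swerve = ["pedestrian_fatality", "unlawful_crossing_discount"]
straight = ["passenger_fatality", "passenger_fatality"]

# The machine "decides" by comparing two numbers...
decision = "swerve" if outcome_cost(swerve) < outcome_cost(straight) else "straight"
print(decision)  # ...but nothing in the math justifies the weights themselves.
```

The arithmetic is trivial; choosing the numbers is the moral problem the math itself cannot solve.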

A second challenge is what’s known as the “frame problem”: whether a machine can even determine that it is in a moral situation at all. What is the threshold for recognizing the tipping point of morality? That kind of judgment lends itself more to the human realm than to one of plastic and wires.

When Is It OK to Break the Rules?

What if we needed machines to become immoral and break the rules? That is exactly the question posed by Professor Colin Allen.

“Imagine we programmed an automated car to never break the speed limit,” Allen noted. “That might seem like a good idea until you’re in the back seat bleeding to death.”

If we want a machine to “break the rules” when doing so actually benefits humanity, at what level of moral sensitivity do we program it? Furthermore, should vehicle overrides be made by mathematical calculations or by the riders of the autonomous cars? If the decision is handed over to the rider, what moral threshold must a rider meet before we allow them to “break the rules”?
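As a thought experiment, such an override might boil down to something like the sketch below, where a single threshold constant decides when the speed limit may be broken. The threshold, the urgency score, and the 30% margin are all assumptions invented for illustration; the open question above is exactly who gets to pick those numbers.

```python
# Hypothetical sketch of a rule-override gate. The threshold and the
# urgency score are invented for illustration; choosing them IS the
# moral question, and nothing in this code answers it.

OVERRIDE_THRESHOLD = 0.9  # who decides this number: engineers or riders?

def allowed_speed(speed_limit, urgency):
    """Return the speed the car may drive, given an urgency score in [0, 1]."""
    if urgency >= OVERRIDE_THRESHOLD:
        # "Break the rules": exceed the limit, e.g. for a medical emergency.
        return speed_limit * 1.3  # the 30% margin is equally arbitrary
    return speed_limit

print(allowed_speed(60, urgency=0.95))  # 78.0 -- emergency override
print(allowed_speed(60, urgency=0.40))  # 60   -- normal operation
```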

If you think these questions are easily answered, go ahead and try some exercises from the Moral Machine. The tests become real character analyzers. Would you swerve to save a doctor’s life and kill a regular citizen, or save the citizen and kill the doctor? Each scenario confronts you with decisions like these.

Clearly, not every real-life scenario a self-driving car faces will involve life and death, but that is kind of the point. What level of decision is too much for a robot or machine? How do we cope with the kinds of choices we give up in order to automate daily tasks?

The Future of Transportation

Put in the context of transportation, how would you program an autonomous car? What would you choose when faced with two negative outcomes?

With news of Uber’s major push into self-driving cars spreading around the globe, we may start to see answers sooner rather than later. While the announcement caused many to take note, the driverless future isn’t necessarily right around the corner: Uber will still have employees in the vehicle as emergency backups when the project begins to take shape. Still, the future is coming, and it will impact the way we move in and around cities.

For now, we must keep asking the right questions to make sure we have the best solution on and off the roads as we further our exploration of machine learning and autonomous robotics.

Keep asking. Keep investigating.

Make Socrates proud.