Hello, ladies and gents, Luscid here!
Imagine you're watching a runaway trolley heading down the tracks straight towards three
workers who will all be killed if the trolley proceeds on its present course.
You happen to be standing next to a switch that will divert the trolley onto a second track.
But here's the catch.
That track has a worker on it, too, but just one.
What do you do?
Do you sacrifice one life to save three?
And how can a person decide between two really bad options?
This disturbing choice is a variation of the so called "trolley problem," an iconic
philosophical thought experiment.
In the past 40 years it has caught the attention of brilliant minds, from academic ethicists
to psychologists to engineers.
The trolley problem has also been criticized over the years for being too unrealistic to
reveal anything important about real-life morality.
But new technology is making this kind of ethical consideration more important than ever.
Self-driving cars are already cruising the streets.
Fully autonomous vehicles have the potential to benefit our world by increasing traffic
efficiency, reducing pollution, and, above all, eliminating up to 90% of traffic accidents.
It has been estimated that worldwide, 1.2 million people die every year in traffic accidents.
As you might have noticed, humans are terrible drivers.
AVs, by contrast, make consistent and calculated choices, and are incapable of getting drunk, angry,
or distracted.
Not all crashes will be avoided, though, and some crashes will require AVs to make difficult
ethical decisions in cases that involve unavoidable harm.
For example, the AV may avoid harming several pedestrians by swerving and sacrificing a passerby,
or the AV may be faced with the choice of sacrificing its own passenger to
save one or more pedestrians.
These scenarios are not unprecedented; many drivers have encountered them.
But here is the problem.
If you were driving in manual mode, however you reacted in a situation like this
would be understood as just that: an instinctive, panicked reaction, not a deliberate decision.
But if a programmer were to instruct the car to make the same move in the same situation,
well that looks more like intentional homicide because the outcomes would have been determined
months or even years in advance.
To gauge public opinion on the matter, scientist Iyad Rahwan and his
colleagues ran a survey in which people were presented with these kinds of scenarios.
They were given two options inspired by two philosophers: Jeremy Bentham and Immanuel Kant.
According to Bentham's view, the car should follow so-called utilitarian ethics:
it should take the action that minimizes total harm, even if that action kills
a bystander or a passenger.
Kant's philosophy is different.
According to his view, the car should not take an action that explicitly harms a human being.
So in this case, the car should be allowed to take its course, even if that is going to harm more people.
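To make the contrast concrete, here's a minimal sketch of the two decision rules as described above. The scenario format and numbers are illustrative assumptions, not anything from the survey itself; outcomes are listed as the number of people each action would harm, and "stay_course" stands for not intervening.

```python
def bentham_choice(outcomes):
    """Utilitarian rule: pick the action that minimizes total harm."""
    return min(outcomes, key=outcomes.get)

def kant_choice(outcomes, default="stay_course"):
    """Deontological rule: never actively choose an action that harms
    a person; if every intervention harms someone, keep the default course."""
    harmless = [a for a in outcomes if a != default and outcomes[a] == 0]
    return harmless[0] if harmless else default

# Our trolley scenario: staying on course harms three, swerving harms one.
scenario = {"stay_course": 3, "swerve": 1}
print(bentham_choice(scenario))  # -> "swerve"
print(kant_choice(scenario))     # -> "stay_course"
```

The same situation yields opposite answers, which is exactly why the question is hard.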
So what's your opinion?
Bentham or Kant?
The results of the study show that most people side with Bentham's view.
It seems that people want the self-driving cars of the future to be utilitarian, to minimize total harm.
Problem solved!
But as it turns out – not quite.
When people were later asked whether they would purchase such cars, their answer was,
"Absolutely not."
And therein lies the paradox.
It seems that people want to buy cars that protect them at all costs, but they want
everybody else to buy cars that minimize harm.
That's one of the main problems with the trolley dilemma.
When faced with such a hypothetical scenario,
people regularly answer how they wish they'd act rather than how they actually would act.
A few months ago, using virtual reality, researchers at the University of Osnabrück
shed light on how to model such ethical problems in a way that reveals our true values.
With the aid of immersive virtual reality, the researchers placed participants in unexpected unavoidable crash situations
while driving a virtual car.
Faced with choices between hitting people, animals, and inanimate objects,
the participants had to make split-second decisions revealing how they valued each item.
Through a careful analysis of the results, the authors were able to create
a value-of-life table of every human, animal, and inanimate object the drivers encountered.
Hypothetically, when faced with an unavoidable crash,
the autonomous vehicle could simply consult the value-of-life table and choose to hit the entity with the lowest value of life.
This at least would succeed in transferring human values onto automobiles.
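As a rough sketch of what "consulting the table" could mean in practice: the controller scores each available trajectory by the total value of whatever it would hit, and picks the cheapest one. The table entries and trajectory names below are made-up illustrations, not values from the Osnabrück study.

```python
def choose_trajectory(trajectories, value_of_life):
    """Pick the trajectory whose hit entities carry the lowest total value.

    trajectories: dict mapping a trajectory name to the list of entity
                  types it would hit, e.g. {"swerve_left": ["dog"]}.
    value_of_life: dict mapping entity type to a relative value score.
    """
    def cost(entities):
        return sum(value_of_life[e] for e in entities)
    return min(trajectories, key=lambda t: cost(trajectories[t]))

# Illustrative, invented values -- NOT derived from the study's results.
table = {"adult": 1.0, "dog": 0.4, "trash_can": 0.05}
options = {
    "straight":     ["adult", "adult"],
    "swerve_left":  ["dog"],
    "swerve_right": ["trash_can"],
}
print(choose_trajectory(options, table))  # -> "swerve_right"
```

Of course, everything interesting is hidden in where those numbers come from, which is precisely the problem the next part raises.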
And yet, there are at least two major problems with this approach. The first concern is that different cultures and regional groups
may have very different value-of-life tables.
So should we apply different ethics in different cultures, or should we use one ethical code worldwide?
The second (and far more disturbing) problem is that the sum of our ethical choices might reveal us to be incredibly selfish and racist.
Applying such a value-of-life system to autonomous vehicles would mean transferring our most embarrassing biases onto machines.
And if we start correcting for these biases, we're back where we started:
facing the question of whether to program machines to behave how we wish we would act, or how we actually would act.
Those findings pose a serious problem: AVs may soon be ready to hit the market,
but humans aren't ready to accept the ethical challenges that come along with them.
So what should we do? Could it be the case that a random decision is still better than a predetermined one
even though it's designed to minimize harm?
And who should be making those decisions?
Nobody has a final answer to those questions, but one thing is certain:
technology is going to advance whether we like it or not.
What we need to do right now as a society is to figure out collective answers to these puzzling questions
and embrace the future together.
Thank you for watching!
If you enjoyed this video, check out more from us here
and consider subscribing for a monthly dose of science! See you soon!