While it’s not a “flying car,” Toyota did say earlier this week that they’re investigating hover car technology. Oh, boo hoo, not flying cars. There are already flying cars, and sure, you want one, but you don’t want everyone else to have one, because cars flying around in three dimensions is a logistical nightmare. But a car that hovers just above the ground? Oh yeah, I could get behind that. And make sweet love to it.
The question of how robots would or should handle complicated ethical questions has been asked ever since the first basic robots were invented. We have built computers and robots to be impartial tools for accomplishing all sorts of tasks, but very soon, we’ll have to answer the question of robot ethics for real, especially when it comes to self-driving cars, which are just around the corner.
So the question is this:
A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. The brakes engage and the system tries to correct itself, but there’s too much momentum. Like a cornball stunt in a bad action movie, you go over the cliff, in free fall. Your robot, the one you paid good money for, has chosen to kill you.
Should it? A human driver would instinctively try to save their own life first, but when we become dependent on robotic cars to drive us around, should we build in that same kind of self-preservation instinct? Meaning the car would kill two people to save you? Or should we keep the robot brain impartial, relying on data to make its decision, even if that means you’re expendable?