News & Perspectives

Does That New Self-Driving Car Have Your Ethics?

Perspective// Posted by: Christine Mason / 16 Nov 2014

On October 2, 2014, Elon Musk told CNNMoney that “A Tesla car next year will probably be 90 percent capable of autopilot. Like, 90 percent of your miles can be on autopilot. For sure on the highway.” See the official announcement made on October 10, 2014.

Artificial intelligence in vehicles carries the promise of safer roads, fewer accidents, and a much more productive commute. There are impressive examples of autonomous cars navigating well in close proximity, lapping a test track at 100+ miles per hour, and driving in all kinds of weather. But, as with all new technologies, the technical promise of a thing, and its efficacy in closed circumstances, isn't enough. For broad-scale deployment in high-risk environments like driving, we need to think through the edge cases and subtleties so we can ride out the technical shift with less harm. Here are some questions to ponder:

Does your self-driving car's algorithm have a moral hierarchy, and does it agree with yours? In the event of an unavoidable accident, if the car has to choose between hitting a deer and hitting a dog, which should it pick? Between hitting a child and hitting a school bus full of children? Between rear-ending the car in front and risking a head-on collision with an oncoming vehicle farther away? These choices will be coded into the self-driving car's moral hierarchy and its assumptions about value.
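
To make that concrete, here is a minimal sketch of what such a hierarchy could look like if it were encoded as a ranked cost table. Every name and weight here is invented for illustration; no manufacturer has published its actual scheme.

```python
# A hypothetical moral hierarchy encoded as ranked harm weights.
# The weights are invented for illustration; whoever sets them is
# making the car's ethical choices.
HARM_WEIGHTS = {
    "bus_of_children": 40000.0,
    "child": 1000.0,
    "adult": 800.0,
    "dog": 50.0,
    "deer": 10.0,
    "property": 1.0,
}

def expected_harm(affected, collision_probability):
    """Expected harm of a maneuver: who gets hit, weighted by likelihood."""
    return collision_probability * sum(HARM_WEIGHTS[kind] for kind in affected)

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected harm.
    `options` maps a maneuver name to (affected parties, collision probability)."""
    return min(options, key=lambda name: expected_harm(*options[name]))

# The unavoidable-accident dilemma from the text, as data:
print(choose_maneuver({
    "swerve_left": (["deer"], 0.9),   # expected harm 9.0
    "swerve_right": (["dog"], 0.9),   # expected harm 45.0
}))  # -> "swerve_left": the car hits the deer under these weights
```

Note that the "ethics" live entirely in the weight table: change the numbers and the same code makes different moral choices, which is exactly why it matters whose hierarchy it is.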

So will its tolerance for breaking the law. For example, how should an autonomous vehicle adapt to the aggression level of its driving environment? Should it follow the letter of the law, or be contextual? If it applies the same rules in downtown Atlanta as in a small rural town, it will almost certainly create more danger; many driving norms are cultural. What about crossing the yellow line to avoid a pothole, a bit of debris, or a driver who seems erratic? Can the car be coded to break the rules once in a while? How will this context be accounted for?
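
As a sketch of what "contextual" might mean in code, consider a policy whose strictness is itself a parameter. The structure and thresholds below are assumptions for illustration, not any real product's logic.

```python
# A hypothetical context-dependent driving policy: how strictly the car
# obeys the letter of the law is itself a tunable parameter.
from dataclasses import dataclass

@dataclass
class DrivingContext:
    ambient_aggression: float  # 0.0 = quiet rural town, 1.0 = rush-hour Atlanta
    hazard_ahead: bool         # pothole, debris, erratic driver, etc.
    oncoming_clear: bool       # is the opposing lane verifiably empty?

def may_cross_yellow_line(ctx: DrivingContext) -> bool:
    """Permit a technically illegal lane excursion only when it is the
    safer option: a hazard ahead and a verifiably clear opposing lane."""
    return ctx.hazard_ahead and ctx.oncoming_clear

def following_gap_seconds(ctx: DrivingContext) -> float:
    """Adapt following distance to local norms: a gap that is prudent in
    a small town invites constant cut-offs in aggressive city traffic."""
    return 3.0 - 1.5 * ctx.ambient_aggression
```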

Will an autonomous car change the behavior of human drivers, or of pedestrians, and do we know enough about those scenarios? While closed-course tests of the effectiveness of DSRC communication protocols in crash prevention are under way, what is still not understood is how other cars, and people, will respond once the autonomous cars' algorithms become well known. For example, if the self-driving car always yields to traffic that doesn't give way, and people know that, the car will sit still, unable to make headway. It most definitely will not be able to merge into the Lincoln Tunnel.
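
A toy simulation shows why a fixed always-yield rule fails once it is known. The merge model and its thresholds are invented for this sketch.

```python
# A toy merge policy illustrating the exploitation problem: a car that
# always yields never gets in once human drivers stop leaving gaps.
import random

def should_merge(gap_seconds: float, assertiveness: float) -> bool:
    """Merge when the gap exceeds a threshold that shrinks as the car
    becomes more assertive; assertiveness=0.0 reproduces 'always yield'."""
    required_gap = 4.0 - 3.0 * assertiveness
    return gap_seconds >= required_gap

# Simulate a tunnel-entrance merge where drivers who know the robot
# backs off leave only small gaps (0.5 to 2.5 seconds).
random.seed(0)
gaps = [random.uniform(0.5, 2.5) for _ in range(1000)]
timid = sum(should_merge(g, assertiveness=0.0) for g in gaps)
assertive = sum(should_merge(g, assertiveness=0.8) for g in gaps)
print(f"merges accepted out of 1000 gaps: timid={timid}, assertive={assertive}")
# The timid policy accepts none; the assertive one takes any gap over 1.6s.
```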

Or what if people learn that autonomous cars will stop when someone jumps out in front of them? Will people start daring each other to do it? After all, humans invented the Darwin Awards; there is no limit to our stupidity when it comes to adrenaline-seeking behavior.

What happens when autonomous cars arrive at a four-way stop at the same time? Humans solve this problem with nods, waves, and hand signs. How will autonomous cars negotiate it? We haven't yet seen four models driven by four different AIs approach an intersection together. One conceivable answer is sketched below.
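
Assuming the vehicles can exchange short messages, a shared deterministic tie-break rule could replace the nods and waves: every car computes the same ordering independently. This is a hypothetical sketch, not any real V2V standard.

```python
# A hypothetical tie-break protocol for simultaneous arrivals at a
# four-way stop: a deterministic rule replaces human hand signals.
from dataclasses import dataclass

@dataclass(frozen=True)
class Vehicle:
    vehicle_id: str  # e.g. a broadcast V2V identifier
    arrival_ms: int  # claimed arrival time at the stop line
    approach: str    # "N", "E", "S", or "W"

def crossing_order(vehicles):
    """Earlier arrival goes first; exact ties fall back to a fixed
    compass priority, then to vehicle ID, so every car running this
    same rule computes the same ordering without negotiating."""
    priority = {"N": 0, "E": 1, "S": 2, "W": 3}
    return sorted(vehicles,
                  key=lambda v: (v.arrival_ms, priority[v.approach], v.vehicle_id))
```

The catch, of course, is that this only works if every manufacturer implements the same rule, which is precisely the coordination problem the question raises.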

How should the car interact with the driver? When should an autonomous car hand the controls back to the driver, and how should it do so? Studies have shown that drivers who are not paying attention by design, having handed cognitive processing over to the car and gone elsewhere mentally, have a harder time re-engaging in a crisis. What are the tolerances around this?
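
One way to frame "tolerances" is as a handover gate: the car measures re-engagement before transferring control, and falls back to a minimal-risk maneuver if the driver isn't ready. The signals and thresholds below are assumptions; choosing them correctly is exactly the open question.

```python
# A hypothetical handover gate: refuse to dump control on a driver who
# is not demonstrably re-engaged. Thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class DriverState:
    eyes_on_road_s: float  # continuous seconds of gaze on the road
    hands_on_wheel: bool
    reaction_ms: float     # latency responding to a prompt chime

def ready_to_drive(d: DriverState) -> bool:
    return d.eyes_on_road_s >= 2.0 and d.hands_on_wheel and d.reaction_ms <= 800.0

def request_handover(d: DriverState, seconds_to_hazard: float) -> str:
    """If the driver cannot re-engage before the hazard arrives, do not
    hand over; slow down and pull off instead (a minimal-risk maneuver)."""
    if ready_to_drive(d):
        return "transfer control"
    return "escalate alerts" if seconds_to_hazard > 10.0 else "minimal-risk stop"
```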

There are a host of other ethical and legal questions, also: Should a car be allowed to regulate a human? Should a car not start without an approved driver biometric imprint? Should a car not start if there is alcohol on the driver's breath? If an autonomous vehicle gets in an accident and is determined to be at fault, who is responsible: the driver or the maker of the AI? Should all human drivers be able to silence or shut off autonomous mode and revert to manual controls? What about in the case of minor drivers? Should the car be controllable over the internet?
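
Rendered as code, each of those interlocks is just a pre-start policy check, and each line is a contested policy choice rather than a technical given. The checks below are hypothetical.

```python
# The interlock questions above as a hypothetical pre-start checklist.
def may_start(biometric_match: bool, breath_alcohol: float,
              driver_is_minor: bool, wants_manual_override: bool) -> bool:
    if not biometric_match:
        return False              # only approved drivers may start the car
    if breath_alcohol >= 0.08:    # limit varies by jurisdiction
        return False
    if wants_manual_override and driver_is_minor:
        return False              # minors cannot shut off autonomous mode
    return True
```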

And finally, are the protocols established by NHTSA even secure? These systems are all software, connected to sensors, and therefore manipulable and hackable in any number of ways. In the worst case, a self-driving car could be used as an untraceable instrument of homicide: hack the proximity system, the locator, or the navigation feed, and drive the car into a wall. What will we call this? Homicide by autonomous navigation cyber intrusion?
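
Part of the defense is mundane engineering: every command and sensor feed should be authenticated so injected messages are rejected. Here is a minimal illustration using an HMAC over each message; the key handling is simplified and the message format is invented.

```python
# A minimal illustration of authenticating control messages so a spoofed
# navigation command is rejected. Key provisioning is omitted for brevity.
import hashlib
import hmac

KEY = b"per-vehicle-provisioned-secret"  # hypothetical shared secret

def sign(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

def accept(message: bytes, tag: bytes) -> bool:
    """compare_digest is constant-time, resisting timing attacks on the tag."""
    return hmac.compare_digest(sign(message), tag)

cmd = b"steer:+2.0deg"
assert accept(cmd, sign(cmd))                  # authentic command accepted
assert not accept(b"steer:+90deg", sign(cmd))  # injected command rejected
```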

Last week, I was able to spend some time with John Avery, an industry executive. While these questions have been theorized about, there has been no comprehensive investigation in the wild into what the answers might be. Avery proposed establishing an objective study center where autonomous-car interactions, both with other AIs and with human behavior, can be investigated: a place where all of the proposed algorithms share the road to run test scenarios.

Christine Mason
Christine maps new markets for emerging technologies, scouts for strategic expansion opportunities, and guides internal innovation strategies for leading companies.