Do You Want the Computer of a Driverless Car Deciding Whether You Live or Die?
I was enjoying dinner with my wife, an old friend, and his wife when the conversation turned to technology, artificial intelligence, and driverless cars. Things became interesting when we began imagining the decisions a driverless car might have to make, and this is clearly something everyone should be considering.
Imagine a motor vehicle operated by a robotic artificial intelligence, in other words, a driverless car. We might hope this technology will reduce accidents, but we also know that some accidents will always happen. Do we want the computer of a driverless car to make decisions based on which vehicle will suffer more damage? Or which vehicle's occupants will receive more serious injuries? Or, if catastrophic injuries or death are likely, which vehicle's occupants to protect when one vehicle must sustain the worse impact? Essentially, should a driverless car be deciding who lives and who dies? Should the injuries that you and your family members receive be dictated by a computer's decision to put the safety of another vehicle's occupants above your own?
I am not comfortable with artificial intelligence making these decisions. I am not comfortable with the chance to save myself or my family being tossed aside because a driverless car's computer chose the safety of others as its priority, at the expense of mine.
What do you think?
Follow the discussion on my Facebook page: https://www.facebook.com/LawOfficeMichaelPollack/?fref=nf