Judging Autonomous Vehicles
Jeffrey J. Rachlinski – Cornell Law School;
Andrew J. Wistrich – U.S. District Court, Central District of California

Would you rather be run over by a self-driving car or by a car driven by a human being? Assuming a similar vehicle travelling at a similar speed, the choice should hardly matter. Most people nevertheless react negatively to new technologies that they perceive as unnatural. Indeed, research on autonomous vehicles shows that consumers insist that autonomous vehicles demonstrate an accident rate one-fifth that of human-driven vehicles before they will be comfortable having them on the road. In experimental studies in which people evaluated accident vignettes, they reacted more negatively to accidents caused by autonomous vehicles than to accidents caused by human drivers, attributed more culpability to autonomous vehicles that caused accidents, and treated such accidents as having inflicted more harm.
But what about judges? Judges will play a critical role in evaluating accidents involving autonomous vehicles. In other research, we have found that judges rely on many of the same kinds of potentially faulty decision-making strategies concerning risk that affect most people. Judges might therefore approach liability for autonomous vehicles with the same hostility as the general public.
To assess judges’ reactions to autonomous vehicles, we conducted two experiments with 933 sitting state and federal trial judges. As more fully explained in our recent paper, “Judging Autonomous Vehicles,” the judges participating in our research assessed a vignette describing an accident in which a taxi owned by a company with a fleet of both autonomous taxis and human-driven taxis struck a pedestrian crossing the street. For half of the judges, an autonomous taxi failed to detect the pedestrian because sunlight reflecting off a building fooled its sensors. For the other half, the accident resulted when sunlight reflecting off a building distracted the taxi’s human driver. The circumstances of the accidents and the extent of the injuries were otherwise identical.
In the first study, we asked judges to assess a comparative negligence scenario in which they allocated responsibility between the taxi and the pedestrian. In this study, the pedestrian was also at fault because she was texting on her cellphone while jaywalking. Although the accidents were essentially identical, judges assigned an average of 52% of the fault for the accident to the car when it was said to be an autonomous vehicle, as compared to 43% when it was said to be driven by a human. Furthermore, two-thirds of the judges evaluating the autonomous vehicle attributed at least half of the fault to the car, as compared to only half of the judges evaluating the human-driven vehicle.
In the second study, we presented a similar scenario to judges in which the pedestrian was entirely blameless, but in which the compensatory damage award was at issue. Although the materials described the injury identically in both cases, judges awarded an average of $340,000 when an autonomous vehicle caused the accident as opposed to $243,000 when a human-driven vehicle caused it.
Our results suggest that judges will disfavor autonomous vehicles. Like lay people in similar studies, they attributed more blame to the autonomous vehicle and treated the accident it caused more harshly. Hostility from consumers, regulators, and the judiciary is not apt to prevent the widespread adoption of autonomous vehicles; if a new technology ultimately proves useful and safer, people will eventually demand it. Judicial hostility towards autonomous vehicles, however, risks unduly delaying their adoption or distorting their development, to the detriment of long-term public welfare.