
Who is Liable for a Crash if a Vehicle is on Autopilot?

Posted in Car Accidents on May 12, 2022

Brendan McGowan sits in the driver’s seat of a Tesla Model 3, watching the steering wheel turn as his hands hover above it. The Tesla navigates the streets of Auburn, California, using Tesla’s Full Self-Driving (FSD) software.

As an FSD beta-test driver, McGowan is little more than a spectator behind the wheel. Tesla aims to eliminate the need for driver control altogether.

The Tesla comes to a seamless halt at a stop sign before making a wide arc of a left turn. McGowan’s brow scrunches into concern. He whispers, “Oh my gosh.”

A passenger yells out in panic. The Tesla barrels straight toward a parked Ford.

McGowan grabs the wheel, snatching the car off its collision path and barely missing the Ford. He lets out a long sigh that carries the word “Jesus!”

The whole incident was recorded. Since beta testing of Tesla’s FSD-controlled cars began in October 2020, plenty of videos have highlighted the remarkable technology behind Tesla’s autonomous cars. Others have captured close calls like this one, and far too many dash-cam videos now serve as evidence in car crash litigation.

What would have happened if McGowan had not pulled the car off its kamikaze course and the collision had occurred? Who would be at fault? Is McGowan to blame? Should Tesla be held responsible? Could the developers of the FSD software be sued? What about the company that made and supplied the navigational sensors and cameras?

As this technology evolves, corrects glitches, and improves, the question of liability shifts and expands along with it. Courts across the U.S. are working out how these cases are decided, and a consensus is forming in real time as increasingly relevant cases play out in courtrooms.

Defining the Breadth of the Robot

Google’s parent company, Alphabet, also embarked on its own autonomous vehicle program through its subsidiary Waymo. In 2016, Waymo’s self-driving vehicles took to the streets of Arizona’s East Valley.

Waymo recently reported that in 2019 and 2020, its self-driving vehicles were involved in at least 18 accidents: collisions with other vehicles, cyclists, pedestrians, and other objects. Another 29 accidents were prevented only because human safety drivers disengaged the system and took control of the vehicle.

Both Waymo and Tesla contend that the goal of these programs is to hone self-driving technology until human error is eliminated, introducing a safer, more convenient, hands-off mode of travel. As of now, the technology has not safely reached that target of total autonomy.

And still, the marketing surrounding these vehicles makes grand claims. Tesla even named its system Full Self-Driving, implying a fully autonomous, hands-off experience. Other car companies tout models with similar advanced driver-assistance systems (ADAS). None of these systems are truly autonomous.

In 2020, the AAA Foundation for Traffic Safety published a study surveying drivers who test drove a vehicle with driver assistance features. The drivers were split into two groups; both used the same ADAS, but the system was explained to each group differently. Researchers told one group the system was called AutonoDrive and described how its features made the ride more convenient for the driver. The other group’s system was called DriveAssist, and its description spelled out the system’s limitations and emphasized the driver’s responsibility.

More than 40% of the surveyed AutonoDrive drivers overestimated the system’s capabilities, compared with only 11% of the DriveAssist test drivers. These drivers also reported that, while using the ADAS, they felt a strong urge to “engage in potentially distracting or risky behaviors.”

They also placed more trust in the system’s ability to prevent a collision, even though they were given no evidence that it could actually do so.

Currently, self-driving cars only assist a driver with certain driving tasks, and drivers are still being held responsible when a failure to pay attention to the road or their surroundings results in injuries or fatalities.

Courts must apply existing laws to modern technology on a case-by-case basis, examining the facts and specific details of each case. As the technology improves to the point of true autonomy, the tides of liability may change, flowing toward software designers and car companies.

Human vs. Robot: Determining Liability 

More autonomous vehicles will inevitably lead to more accidents. As robots take more control of vehicles, the courts will need to shift how liability cases are litigated. These cases may become subject to vicarious liability, a legal doctrine used in some circumstances to hold companies responsible for their employees’ actions and the quality of the products they provide.

Vehicle manufacturers could be held liable if a car’s hardware or software fails and results in a crash. Even in cases where human drivers make a mistake or have an error in judgment, the self-driving system’s performance would be evaluated for defects.

The system’s evaluation would typically look at both its actions and reactions in specific circumstances and its overall performance. Specific systems would also be compared to other ADAS to judge performance and establish a safety standard.

Vicarious liability cases could also lead to evaluating self-driving vehicles by determining whether newer systems perform better than previous ones that had safety issues. Robots do not learn from human error, but robots do learn from other robots.

When a system fails, data on those failures can be gathered from thousands of cars that have logged thousands of distinct situations to learn from. This gives developers and car companies foreseeability: the ability to predict risks based on previous situations and outcomes.

If tech companies intend to make robot drivers safer than human drivers, robots should be able to learn from their mistakes and should not be allowed to repeat the same mistake after there has been time to correct it.

It could take courts years to reach a consensus on cases involving autonomous vehicles, but a consensus would establish how these cases are managed by courts, the auto industry, local governments, law enforcement, and insurers. Clarity on determining liability for these new problems allows claims to be settled without years of legal confusion.