The advent of driverless cars on the roads of Maryland and other states will probably not completely eradicate traffic collisions. It could, however, change the way the insurance and safety industries work to protect motorists and passengers. Some accidents, such as those caused by programming errors or software defects, may raise unique questions about who ought to be held accountable. New forms of insurance fraud could even shift some of the responsibility for accidents onto hackers who intentionally cause wrecks in order to win payouts.
Many of the dangers of self-driving vehicles remain unknown, which could make the technology's safety ramifications harder to predict. Manufacturers, however, have made bold claims: Volvo says that by the year 2020, nobody will be seriously injured or killed while riding in one of its vehicles, and Tesla claims its Autopilot system can already reduce accidents by 50 percent.
Observers also say that the sheer volume of code used to run autonomous vehicles could make it more difficult to identify errors that might result in personal injury. It may also make it harder to perform testing that faithfully replicates the many situations a vehicle's AI could face in the real world. Although transparency requirements and new legislative mechanisms have been proposed to address such problems, these solutions are likely to take time to implement effectively.
As the technology evolves, injured victims' options are likely to evolve as well. Those who get hurt in motor vehicle accidents involving autonomous cars may want to meet with an attorney to determine whether seeking compensation from the vehicle manufacturers themselves is a viable option.