Late last month an Apple engineer died after his Tesla Model X crashed into a roadside barrier. Within days Tesla published blog posts based, it said, on logs recovered from the computer inside the vehicle.
“The driver’s hands were not detected on the wheel for six seconds prior to the collision,” Tesla said, adding that he had received a number of “hands-on warnings” earlier in the journey down California's Route 101.
“The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken,” the company added.
In short: he wasn’t following the guidelines and paying attention, despite our efforts to make him.
The company has been selective with the information it has put out, omitting for example whether its system had detected the barrier. The US National Transportation Safety Board has since criticised Tesla for releasing information that forms part of its ongoing investigation.
The incident and the resulting communication from Tesla expose a significant problem with increasingly autonomous vehicles.
While such vehicles are packed with sensors and record large amounts of data – enough, in theory, to confidently attribute liability in the case of a tragedy – there are vested interests at play around the availability, sharing and accuracy of that data.
“Who is to blame when a vehicle crashes? In conventional human-driven cars, the answer is simple: The driver is responsible because they are in control. When it comes to autonomous vehicles, it isn’t so clear cut,” says Raja Jurdak, research leader of CSIRO’s Distributed Sensing Systems group.
“More sensors and more data is going to help. But thinking of the different parties that have potentially competing interests, there are some trust risks. Do we trust the data? And who has an interest in altering the data?”
What’s needed is a trusted, shared data record from which liability can be accurately determined. The solution, say Jurdak and his team from Data61 and the University of New South Wales, could be found on the blockchain.
“Emerging blockchain technology has the potential to underpin a new liability framework for autonomous vehicles as it provides trustless consensus,” Jurdak et al. write in their paper, “A Blockchain Based Liability Attribution Framework for Autonomous Vehicles”.
The idea is that for every journey made, an autonomous vehicle would enter journey data onto a blockchain. Since recording every journey would generate an enormous blockchain, the record would be erased if the journey ended without an accident.
In the event of an accident, however, there would be a blockchain holding the “comprehensive evidence” required to attribute liability.
The data on the blockchain would include “relevant interactions between an autonomous vehicle and other potential[ly] liable entities, such as the instruction to execute a [software] update and the execution of the instruction to identify a potential case of negligence; the report of a maintenance officer on the maintenance conducted in an autonomous vehicle; the data generated by an autonomous vehicle in the event of a traffic accident; and data generated by an autonomous vehicle [when] events such as hard braking, over speeding and crossing a red light are triggered,” the authors explain.
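The paper itself doesn't publish code, but the append-only event record it describes can be pictured as a chain of hash-linked blocks, each committing to a vehicle event and to the block before it. A minimal sketch in Python (the record fields and event names here are illustrative assumptions, not taken from the paper):

```python
import hashlib
import json


def make_block(prev_hash: str, record: dict) -> dict:
    """Create a block that commits to an event record and the previous block's hash."""
    payload = json.dumps(record, sort_keys=True)  # canonical serialisation
    block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev_hash": prev_hash, "record": record, "hash": block_hash}


# Illustrative events of the kinds the paper lists: an update instruction,
# its execution, and a triggered driving event.
GENESIS = "0" * 64
chain = []
prev = GENESIS
for record in [
    {"type": "software_update_instructed", "by": "manufacturer"},
    {"type": "software_update_executed", "by": "vehicle"},
    {"type": "hard_braking", "speed_kmh": 92},
]:
    block = make_block(prev, record)
    chain.append(block)
    prev = block["hash"]
```

Because each block's hash covers the previous hash, the ordering of events (instruction before execution, for example) is fixed once recorded.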
Although not covered in the paper, Jurdak told Computerworld that potentially the blockchain would be communicated from the vehicle via roadside infrastructure.
“It wouldn’t all be kept on the vehicle. There’s a lot of roadside infrastructure being rolled out these days. You could imagine the car uploading that data to some secure server when people park in the garage or are passing the roadside infrastructure,” he says.
The framework – based on a permissioned blockchain – brings together the various parties involved in the “autonomous automotive ecosystem” such as the auto manufacturer, software provider, service technician and the vehicle owner. It also proposes how the blockchain can be partitioned to limit access to different parties.
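The article doesn't detail the partitioning scheme, but a permissioned design of this kind is often sketched as a mapping from parties to the data partitions they may read. A hypothetical illustration in Python (the party and partition names are assumptions, not from the paper):

```python
# Hypothetical partition map: which parties in the "autonomous automotive
# ecosystem" may read which slice of the blockchain's data.
PARTITION_ACCESS = {
    "sensor_logs": {"manufacturer", "insurer", "regulator"},
    "maintenance_records": {"service_technician", "owner", "insurer"},
    "software_updates": {"software_provider", "manufacturer", "regulator"},
}


def can_read(party: str, partition: str) -> bool:
    """Return True if the named party is permitted to read the partition."""
    return party in PARTITION_ACCESS.get(partition, set())
```

In a real permissioned blockchain this policy would be enforced by the network's membership and channel configuration rather than a lookup table, but the principle is the same: each party sees only the partitions it needs.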
All parties would face some compromise in participating in the liability framework, Jurdak admits, although they each have much to gain too.
“The manufacturer may not always be liable. There may be instances where the driver managed to override some of the actions of the vehicle. In some cases other parties outside the vehicle may be responsible, such as pedestrians or other vehicles not obeying road rules. It could be the maintenance provider in terms of service or the software provider,” he says.
Those parties would also be protected from attempts at insurance fraud, Jurdak adds.
“Even if a highly tech savvy owner of the car managed to somehow try to change some records of the data of what happened that would be detected because of the underlying features of blockchain. In a case where a driver might want to blame the manufacturer or maintenance provider they wouldn’t be able to do it, they would be caught out,” he explains.
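The tamper-evidence Jurdak describes follows from the hash chaining itself: altering any record changes its hash and breaks every link after it. A small self-contained sketch in Python (the record contents are made up for illustration):

```python
import hashlib
import json


def block_hash(prev_hash: str, record: dict) -> str:
    """Hash a record together with the previous block's hash."""
    return hashlib.sha256(
        (prev_hash + json.dumps(record, sort_keys=True)).encode()
    ).hexdigest()


def verify(chain: list) -> bool:
    """Walk the chain and check every link and every stored hash."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or block_hash(prev, block["record"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True


# Build a two-block chain of driving events.
chain = []
prev = "0" * 64
for record in [{"event": "hard_braking"}, {"event": "collision"}]:
    h = block_hash(prev, record)
    chain.append({"prev_hash": prev, "record": record, "hash": h})
    prev = h

assert verify(chain)  # the untampered chain checks out

# A "tech savvy owner" edits the log after the fact...
chain[0]["record"]["event"] = "gentle_braking"
assert not verify(chain)  # ...and the alteration is detected
```

Note that tamper-evidence alone only shows *that* a record was altered; detecting *who* altered it relies on the permissioned network's consensus, where other parties hold copies the owner cannot rewrite.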
Public sentiment and government regulation could give the various players the necessary push towards participating, Jurdak adds.
The next steps for the project will be fleshing out the business model for the framework – who will pay to maintain the system – and testing it in Data61’s autonomous machines lab.
Jurdak hopes some kind of liability framework will be in place before autonomous vehicles become ubiquitous. However, its reliability in the real world can’t truly be tested until that happens, much like the vehicles themselves.
“Ideally we would have a reliable way of understanding what happened and assigning blame before these things get on the road. What has happened in reality, just as with many of the large disruptive technologies of the last few years, is the technology is way ahead of regulation,” Jurdak says.
“[But] if you don’t have real world experiments and trials then it’s very hard to ascertain the maturity of the technology or to be able to gather enough data to say what’s working and what’s not. It’s kind of a chicken and egg problem.”