Autonomous Vehicles: A Reflection on Risks and Responsibility
The arrival of autonomous vehicles is transforming mobility, but it also raises an essential question: who is responsible in the event of an accident? The passenger/consumer, the system, or the manufacturer?
The transition to autonomous vehicles holds enormous potential for safety, efficiency, and mobility in traffic, an environment that, although regulated, is essentially human and permeated by risks and by unpredictable, uncontrollable situations. The question of fault in accidents involving these vehicles, however, is deep and multifaceted: liability may fall on the passenger/consumer, on the machine's system, or on the manufacturer and/or developer.
Autonomous vehicles use sensors, algorithms, and artificial intelligence to drive with little or no human intervention; indeed, the degree of intervention is one of the criteria that differentiate the levels of automation proposed by SAE International, from 0 to 5. Despite the speed and dynamism of technological development, the Brazilian legal and regulatory framework is moving at a slow pace.
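For readers less familiar with the SAE scale, here is a minimal Python sketch of the six levels. The enum names and the helper function are our own shorthand for the standard's summaries, not language taken from any bill or from SAE J3016 itself:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, summarized informally."""
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # system assists with steering or speed
    PARTIAL_AUTOMATION = 2      # system steers and controls speed; driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives; human must take over when requested
    HIGH_AUTOMATION = 4         # no human fallback needed within a defined domain
    FULL_AUTOMATION = 5         # system drives everywhere, under all conditions

def requires_human_fallback(level: SAELevel) -> bool:
    """Levels 0 through 3 still depend on a human driver as fallback."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```

The legal relevance of the scale is exactly this boundary: from level 4 upward there is no human fallback to whom responsibility can be conveniently redirected.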
Brazil currently has no legal or regulatory parameter governing, for example, the testing phase: whether, how, and when tests may be carried out on public roads. Imagine a fatal accident during tests of an autonomous vehicle equipped with a safety driver and a remote operator: how should the civil and criminal liability of those involved be determined and delimited? Is the current legal system sufficient to provide satisfactory answers?
Both Bill 2338/2023, which deals with the use of artificial intelligence in Brazil, and Bill 1317/2023, which regulates the use of autonomous vehicles in Brazil, propose safety, transparency, and accountability rules, but no definitive legislation has yet been approved. The fact that they have not been approved, however, offers an opportunity to debate and improve them, especially regarding the lack of maturity with which liability problems involving the technology are handled within the scope of Bill 1317/2023.
Under Bill 2338/2023, autonomous vehicles fall quite naturally into the category of high-risk artificial intelligence systems, according to the parameters established in the text. The bill provides that systems capable of significantly affecting fundamental rights, especially the life, physical integrity, or safety of people, require a stricter governance regime.
Article 17, item VIII, of Bill 2338/2023 reinforces this logic by providing for control and transparency measures for systems that operate in sensitive contexts, are subject to relevant risks, or make critical decisions. An autonomous vehicle's system decides autonomously, i.e. without human intervention and in fractions of a second, about acceleration, braking, lane changes, and responses to emergencies. This means that a programming failure or a sensor reading error, for example, can result directly in serious and even fatal harm.
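To make this concrete, consider a deliberately simplified sketch of a single control-loop decision. Every name and threshold below is a hypothetical illustration, not any real vehicle's logic; the point is only that a sensor misreading, by itself, produces the dangerous outcome, with no human in the loop to catch it:

```python
# Illustrative assumption: a minimum safe following distance in meters.
BRAKING_DISTANCE_M = 30.0

def emergency_brake_needed(sensor_distance_m: float) -> bool:
    """Decide, within one control-loop tick, whether to brake hard."""
    return sensor_distance_m < BRAKING_DISTANCE_M

# A faulty sensor reports 45 m when the true distance is 20 m.
true_distance_m = 20.0
faulty_reading_m = 45.0

print(emergency_brake_needed(faulty_reading_m))  # False: the system does not brake
print(emergency_brake_needed(true_distance_m))   # True: braking was in fact due
```

In fractions of a second, the erroneous reading alone converts a routine situation into a collision, which is precisely why such systems attract a stricter governance regime.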
Still in this context, however, neither bill answers what defines a programming error or how to identify one, especially in the face of the so-called Black Box problem, in which it is impossible to go back and interpret the system's decision-making process. Is the Black Box problem an inherent risk? Or is it the result of a defect?
Even more delicate, it seems to us, are the well-known dilemmatic situations, or moral dilemmas: imminent and unavoidable accidents in which any decision by the system will impose injury on someone involved. In other words, there is no programming error or malfunction, but a "conscious" decision by the system to impose harm on one of the parties, even if to save another. Should the system always protect the passenger/consumer, even at the cost of a pedestrian's life? Should it always protect the community, even at the cost of the passenger/consumer's life? Who is responsible for this choice? Can the result be attributed to any human agent? To which one?
For this reason, self-driving cars are among the most emblematic examples of systems that should be treated as high risk. But what grounds the fault of those involved? Who are the parties that can be held responsible? How is each party's share of fault delimited? How can human agents be held accountable for autonomous decisions made by algorithms? How can transparency and safety be ensured without halting innovation?
The attribution of liability, whether civil or criminal, must consider, among other criteria, the level of automation, the operation of the system, the conditions of use of the vehicle, the adequacy of the infrastructure, telemetry records, and the terms of use. Although the Brazilian legal and regulatory scene is moving at a slow pace, it is certain that the manufacturer must take the measures within its reach to make the vehicle as safe as possible, adopting transparency, robust governance, clear documentation, and constant, diligent maintenance and monitoring; regulators must clearly define rules for testing, operation, insurance, and liability; and passengers (when they exist) must understand their duties and responsibilities, in addition to being sufficiently informed of the risks to which they are exposed.
These are some of the fundamental points that make it possible to reconcile innovation and safety, encouraging the adoption of autonomous vehicles in a responsible way.
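As an illustration of what the telemetry criterion could look like in practice, here is a minimal sketch of an on-board event record. All field names are our assumptions; no Brazilian bill currently prescribes a record format. A log along these lines is what would later allow investigators to reconstruct which agent, system or human, was in control at the moment of an accident:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TelemetryRecord:
    """One liability-relevant event, as it might be logged on board.

    Hypothetical structure for illustration only.
    """
    timestamp: datetime
    sae_level: int            # automation level engaged (0-5)
    system_engaged: bool      # was the automated system driving?
    takeover_requested: bool  # did the system ask the human to intervene?
    sensor_status: dict = field(default_factory=dict)  # e.g. {"lidar": "ok"}
    decision: str = ""        # e.g. "emergency_brake", "lane_change"

# Example: the system detects a degraded sensor and hands control back.
record = TelemetryRecord(
    timestamp=datetime.now(timezone.utc),
    sae_level=3,
    system_engaged=True,
    takeover_requested=True,
    sensor_status={"lidar": "degraded", "camera": "ok"},
    decision="handover_to_driver",
)
```

Whether such a record counts as proof of diligence or as evidence of a defect is exactly the kind of question the pending bills still leave open.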
For car manufacturers, autonomous vehicles represent a technological and commercial opportunity capable of transforming traffic and making it safer, since they remove the human factor, the main cause of accidents, from the equation. On the other hand, they also pose significant new legal, reputational, and financial risks.
Because automated driving algorithms make critical decisions in real time, any failure, whether of software, of sensors, or of the integration between systems, can result in serious accidents, exposing manufacturers to potential liability. Moreover, some accidents may remain unavoidable and be the result of "conscious" decisions of the system, something entirely new for manufacturers, consumers, and legal practitioners. In a scenario where the technology is still maturing, regulatory and legal risk is today one of the biggest strategic challenges for the automotive industry.
In short, autonomous cars promise to reduce accidents and revolutionize traffic, but they also demand a new legal perspective. In Brazil, there is still no clarity about who answers for failures and for the machine's autonomous decisions. The tendency is for liability to be shared until technology and legislation mature. The challenge is to balance innovation and safety, building public trust in this new model of smart mobility.