Solving the AI Accountability Gap

I’m not so sure.

Justice

Similarly, the penalties and remedies of the legal system are for the most part built to be levelled against humans, not autonomous computer programs.

Far from being an appendix to the legal process, the ability to impose effective penalties on those who break the law is absolutely crucial to the underlying justice of the system.

For the legal system to be credible, it must be able to punish wrongdoing proportionately and effectively.

Put another way, if murderers were sentenced to absurd ‘punishments’ that didn’t affect their lives in any meaningful way, the justice system would lose both its value as a deterrent and the moral credibility that rests on the principle that justice is done.

And that captures the problem posed by AI: you can’t throw an AI in jail, or impose a fine on it, or make it pay compensation – whether or not it has legal personhood.

None of the legal system’s penalties or remedies work on autonomous computer programs.

And without effective penalties or remedies, our legal system loses that underlying foundation of justice.

Compensation

Justice for victims doesn’t just include an effective punishment for the person who inflicted harm – in many cases victims seek some kind of monetary compensation for loss suffered, even if just the recouping of legal costs.

A general legal principle is that someone who suffered harm should be placed in the position they would have been in had the act not been committed.

So if an autonomous medical device, like a robotic surgery tool, were to malfunction and harm a patient, that patient ought to be able to pursue compensation in the legal system to pay for medical care.

Again, our legal system struggles to apply a general principle (in this case, victim compensation) where the harm is inflicted by an autonomous agent.

It’s obviously impossible for a court to compel an AI – a computer program – to pay thousands of dollars to a victim to cover their medical costs.

So in order for victims to have any recourse in the legal system, they have to be able to pursue a human (or, at least, a business) with the capacity to pay compensation.

Consequently, the ‘accountability gap’ adds serious difficulty to victims looking to receive compensation through the legal system.

…

These issues of causality, justice and compensation are interlinked, and collectively present a huge challenge to our legal system.

For the system to remain credible and just, there is a fundamental need to fill the AI accountability gap somehow: to attribute AI-related harm to a human or group of humans in the first instance.

With Great Power…

This necessity of tying an AI’s actions (and any consequential harm) to a human, or group of humans, presents us with issues of fairness.

The decision-making of contemporary AI systems is fiendishly difficult to understand or explain – even for the developers and coders responsible for programming it.

AI algorithms and processes are so complex that autonomous decision-making is sometimes likened to a black box.

Given that complexity, is it really fair to hold a human legally accountable?

In short: yes.

My view is that AI developers – defined as the person or group of people who directly shaped the programming of the AI – should be held legally responsible at first instance for the actions (and any harm) caused by that AI’s decision-making.

Morally, AI developers are the group most responsible for the decisions made by an AI.

Despite the black box dynamic captured above, if any party ought to have been able to foresee future harm caused by an AI, it is the group that created the AI’s decision-making capacity out of nothing.

One (not unreasonable) analogy is that of a misbehaving child who throws a tantrum in a store and destroys some products – we would reasonably expect the child’s parents to pay for the destroyed products, even though they didn’t do the damage themselves and couldn’t be said to fully understand why the child inflicted the damage.

Developers, like those parents, are in the best position to prevent the harm, even if they lack full control over their creation, and a moral responsibility flows from that dynamic.

Practically, those same developers are also by far the best placed to take active steps on risk management and security throughout the complicated process of creating the AI.

The imposition of legal liability on AI developers, as this piece proposes, creates a strong incentive for that group to strengthen security measures and adhere to a more stringent risk mitigation framework.

This kind of healthy incentive doesn’t arise to the same extent if a group like regulators or manufacturers is held responsible for AI in the first instance, as they have far less direct control over the way the AI has been developed.

AI developers have the power to make AI safer, and my proposal aligns their personal incentives with that responsibility.

In practice, I imagine this proposal taking the form of a ‘rebuttable presumption’: AI developers would have to face lawsuits brought by any victims of that particular AI.

A rebuttable presumption is not the same thing as an admission of guilt: it just means that if an AI developer believes another party (like the chief executive of their organisation) is more responsible for harm caused by the AI, they have to prove that belief to a court in order to avoid responsibility.

Take a hypothetical example (cough) of AI developers at a given social media giant, who create an algorithm designed to be as addictive as a slot machine.

My rebuttable presumption allows someone who thinks they’ve suffered harm from that algorithm to sue those developers.

But if those developers could point to strong evidence that they were directly instructed by management to make the AI as addictive as possible, the lawsuit would instead fall at the feet of those growth-addicted managers.

Whether the ultimate respondent in a case of AI harm is the developer or another party the developer can point to, the problems associated with the AI accountability gap are addressed.

The burden of establishing causation is placed on the developer (not the victim), justice can be achieved through effective penalties and remedies, and victims can seek compensation from a human party.

Develop Netflix, and Chill

There is no perfect solution to AI accountability.

One of the biggest risks with the proposal to hold developers responsible is a chilling effect on AI development.

After all, AI developers are often small actors – individuals or small companies.

Whether or not they are the most culpable when their creations cause harm, the practical nightmare of facing lawsuits every time their AI causes damage might reasonably make AI developers exceedingly wary of releasing their creations into the world (and their hedge fund investors might pause before reaching for their chequebooks).

Yet the threat of a chilling effect is not enough to outweigh the ethical and practical considerations described above.

Many developers working for tech giants will benefit from vicarious liability, meaning those giants will be forced to put their legal resources towards the defence of their developers.

We might ultimately see governments adopting ‘no-fault’ coverage policies (similar to New Zealand’s ACC system for covering medical costs associated with accidents) to pay for AI-related lawsuits.

Given how many lives might be saved with the deployment of self-driving cars, for instance, there are certainly strong incentives for the state to support the deployment of AI in this way.

Victims would lose their ability to sue individual developers, but would receive acknowledgement of harm and their medical costs covered by the government.

Alternatively, governments might reserve for themselves the ability to prosecute AI developers whose creations cause harm.

This might be a more efficient system for taking the worst offenders to court.

Victims of autonomous decisions could work through a dedicated body to bring developers to account, similar to how the police work with victims in a criminal context.

This would help keep the floodgates from opening, and stop garage-based AI coders from being drowned in civil lawsuits.

What these thought experiments capture is that the AI accountability gap can be solved – with a new presumption that the developers of AI are responsible in the first instance – without stalling the industry by bankrupting its coders.

…

Judging from this month’s leaked paper, the UK will suggest one way forward on the AI accountability gap: targeting chief executives of technology giants.

This piece has laid out a different path, holding the developers responsible at first instance.

In any case, it is growing more and more necessary to adapt our legal system so it can process cases involving autonomous agents in a way that is morally and legally fair.

Is this kind of structural course correction easy? Of course not.

But it replaces an alarming accountability gap with a public policy debate, and that surely is a step in the right direction.
