The EU Commission on Wednesday (28 September) proposed rules to make it easier for both consumers and companies to sue makers or operators of drones and robots using artificial intelligence (AI) for compensation for any damages.
As more and more AI-operated tools surround us — think of package-delivery drones, self-driving lorries, cleaning robots — the EU executive set out a bill to ease the burden of proof for consumers.
The draft AI Liability Directive complements the AI Act, upcoming legislation governing artificial intelligence as it becomes a bigger part of everyday life.
The draft rules, which still need to be approved by EU governments and the European Parliament, make it explicit that injured people can claim compensation if software or AI systems cause damage — whether to life, property, health or privacy.
“We want the same level of protection for victims of damage caused by AI as for victims of old technologies,” justice commissioner Didier Reynders told reporters in Brussels.
While victims will still need to prove that somebody’s fault caused the harm, the commission proposed two ways to make it easier for consumers to do that.
The new rules would require companies to disclose information about high-risk AI systems so that the victims could identify the liable person and the actual fault that caused the damage.
“This is to avoid victims being confronted with a black box of AI,” Reynders added.
Another tool is a “presumption of causality”, which means that victims only need to show that the manufacturer or user failed to comply with certain requirements and obligations which caused the harm.
If companies fail to provide that information, a presumption is made that they did not comply with those requirements and obligations.
Products covered by the draft rules range from software updates — which could, for instance, render smart-home products unsafe, or cases where manufacturers fail to fix cybersecurity issues — to health apps and self-driving cars.
The new draft rules do not yet allow compensation for breaches of fundamental rights — for instance, if someone failed a job interview because of discriminatory AI recruitment software. That will be dealt with under the AI Act, currently under negotiation.
Reynders dismissed concerns that the liability rules would hamper innovation of AI tools.
He said AI tools needed to be safe and trusted by consumers to really catch on.
“New technologies, such as drones, delivery services based on AI, can function only when consumers feel safe and protected when using them,” he said.
“If we want to make the most of the large potential of AI, we have to cater for its risks,” Reynders added.
Victims of unsafe products made outside the EU will be able to sue the manufacturer’s EU representative for compensation.