Who is liable when robots cause damage?
The subject of artificial intelligence (AI) fascinates many people. Media coverage and public debate are accordingly intense.
[Image: Robot head © Andriy Onufriyenko]

However, even though AI is still at an early stage of development and there are currently only a few applications in everyday life, its prospective use is already raising a number of questions: Who is liable when AI causes damage or injury to third parties? Will the use of AI lead to changes in liability law? And, last but not least, what impact will AI have on the insurance needs of the parties involved?

Legal framework today

First of all, we need to be clear that the emergence of a new technology doesn’t necessarily mean that changes to liability law are required. Liability law is one of the oldest fields of law. Thanks to its flexibility, it has survived essentially unchanged despite countless changes in technical and other conditions over the centuries. The emergence of a new technology does not create a “legal vacuum” that makes new legislation imperative.

Even liability for the autonomous decisions of third parties is nothing new to liability law. The laws of ancient Rome already regulated liability for slaves, and many civil codes, such as the German Civil Code (BGB), contain provisions on liability for assistants, children and animals. Even under current law, anyone suffering loss or injury as a result of AI therefore has many options for asserting claims for compensation against the manufacturer, owner, keeper, user, network provider, software provider or any other party involved in an AI application. Such claims may arise from tort law or from special legislation such as road traffic acts, product liability rules or, if the AI application derives discriminatory conclusions from the data it uses, anti-discrimination laws. In addition, liability can of course also result from the contractual relations between the injured party and one of the other parties involved.

The best example of the comprehensive protection of injured parties that already exists today is the liability for autonomous vehicles in most European and some Asian countries: the keeper of a vehicle is liable when its operation results in death, personal injury or property damage, and it is irrelevant whether the vehicle was driving in automated mode or being controlled by a person. But even where such comprehensive strict liability for AI applications is missing, in the case of damage or injury the problem will often have been not so much artificial intelligence as natural intelligence – or rather the lack thereof – with all the attendant consequences under liability law.

Balance between innovation and consumer protection

The question is, however, whether the results obtained under the current legal situation are in line with what is desirable in political, economic and social terms. The main challenge is to strike an appropriate balance between encouraging innovation and protecting consumers: liability that is too wide-ranging will hamper the development and spread of new technologies, and therefore Europe’s competitiveness, while putting too many obstacles in the way of asserting claims for compensation, especially following personal injury, would jeopardise the legitimate interests of consumers.

The need for legislators to act specifically on AI could therefore arise primarily from the difficulties of proof faced by a person suffering loss or injury as a result of an AI application: Why did an algorithm arrive at the result it did? What exactly was the error? Who could have prevented it or foreseen the harmful event? AI applications today often resemble a black box, where only the underlying algorithm and the result are known, but not the processes in between. Whoever bears the burden of proof – usually the claimant – therefore has little chance of success in a dispute.

Search for EU-wide solutions

As there is agreement that, for issues of this kind, a European – or broader – solution would be preferable to a national approach, discussions are being held primarily at EU level. A number of reforms that are also relevant to AI have already been implemented, for example the General Data Protection Regulation of 2016 governing the protection of personal data, or the 2019 amendments to copyright law, which also contain rules on the use of data to train algorithms. The examination of the specific liability framework, on the other hand, is still ongoing. Initial approaches can be found, for example, in the EU Parliament’s resolution on “Civil Law Rules on Robotics” of 2017 and in the “Ethics Guidelines for Trustworthy AI” of 2019. These discuss, among other things, the introduction of new strict liabilities and the extension of manufacturers’ product liability. Broadening the scope of compulsory insurance or introducing new compensation funds, instead of or in addition to insurance solutions, is also being considered.

The introduction of an “e-person” was also proposed. As AI applications become increasingly autonomous, this would in practice allow robots themselves to be held liable for any damage or injury they cause. How they would pay compensation or obtain liability insurance cover, however, was not addressed in these proposals. To take account of the complexity of the subject, besides the usual assessments of the current situation and hearings, two expert commissions were eventually set up: one to examine what adjustments to the Product Liability Directive of 1985 were necessary, the other to consider whether additional changes to liability law appeared advisable.

It is still too early to say what EU liability requirements for AI will ultimately look like. The Product Liability Directive will probably see some amendments to its wording, likely concerning the circumstances in which software is to be regarded as a product, when updates lead to the creation of a new product, and who counts as the manufacturer of an AI product or who “places it on the market”. It remains to be seen whether the scope of product liability will also be extended. Such a measure would hardly be conducive to the willingness to develop innovative AI-based products. It would make more sense, also for the sake of consumer protection, to develop standards that make clear what degree of safety a consumer can expect from an AI product and how responsibilities are allocated between the parties involved.

No one-size-fits-all regulation expected

Regarding all other expected changes to liability law in connection with new technologies, the Expert Group Report of 21 November 2019 offers initial pointers. From it, it emerges that there will be no uniform regulation for all AI applications. Rather, account should be taken of the extent to which an AI application endangers third parties without the resulting potential losses already being covered by an existing strict liability scheme. Where third parties are endangered, their protection should be improved through measures easing the burden of proof and through logging requirements. Where third parties are particularly at risk, the introduction of new strict liabilities is also recommended. To guarantee the necessary solvency of those then liable, this could include broadening the scope of compulsory insurance.

It should in any case be ensured that the use of AI applications does not leave anyone less liable than they would be if they were using human assistants. Apart from this, it is recommended that liability should primarily be allocated to whoever is best able to control AI risks and prevent damage. One positive development is that the proposal to introduce an “e-person” – which would ultimately amount to strict liability for the person funding it – was explicitly rejected.

Period of considerable legal uncertainty

From a liability insurer’s point of view, it should be noted that we face a lengthy period of considerable legal uncertainty. To keep the legal framework flexible – which is vital given the rapid development of technical possibilities – legislators must leave many details of developing the law to the courts. It will, however, take many years for a sufficiently dense body of established supreme-court case law to evolve. This not only creates a high risk of legal change but, above all, increases transaction costs, e.g. in connection with recourse actions by the initially liable party against the other parties involved.

In addition, AI applications increase the complexity of liability scenarios, because the parties involved are usually subject to many different jurisdictions. For the foreseeable future, however, even partial harmonisation of the legal framework appears possible at EU level at best, not worldwide. Furthermore, the growing documentation, monitoring and organisational requirements are likely to create new points of reference for extending D&O liability. All of this is bound to have an impact on liability risks and on the insurance needs of the different players. AI applications are still largely in their infancy, but the course for their future legal framework is being set now. The insurance industry should not miss out on the associated opportunities.

Prof. Ina Ebert, leading expert for liability and insurance law at Munich Re