I Introduction
Amidst all the buzz about artificial intelligence (AI), liability has always played a conspicuous role. Hollywood’s science fiction industry seems to have fostered the perception that autonomous robots will be things with no owner but with a personality of their own that might one day prompt them to start killing people. And even people less inspired by science fiction seem to be fixated on the idea that, for instance, autonomous cars will behave in an entirely unpredictable way and that they, and the accidents they cause, are so different from conventional cars and accidents that we are facing a ‘legal vacuum’. It is thus the impression of humans handing over control to machines that has fuelled a fear of emerging liability gaps, leaving victims largely uncompensated for harm and allowing tortfeasors to hide behind software code nobody feels responsible for. This fear of liability gaps is one of the reasons that led the European Parliament to pass, on 20 October 2020, a resolution that includes a full-fledged ‘Proposal for a Regulation of the European Parliament and of the Council on liability for the operation of Artificial Intelligence-systems’.[1] This Proposal puts further pressure on the Commission, which, on the basis of the report submitted by the Expert Group on Liability and New Technologies – New Technologies Formation (EG-NTF),[2] had published a report on the safety and liability implications of AI, the Internet of Things (IoT) and robotics[3] in February 2020 and has been working on legislative proposals for some time.
II Liability responses to emerging technologies
While the impression of a ‘legal vacuum’ and huge liability gaps created by emerging technologies may be exaggerated, emerging technologies – and notably digital technologies – certainly pose new challenges to existing liability regimes.
A Challenges posed by emerging technologies
The challenges posed by emerging digital technologies may be divided into roughly three groups, depending on what kind of emerging technology we are looking at.
1 ‘Autonomy’ and ‘opacity’ (AI)
When it comes to AI, the two striking new features that may call traditional notions of liability into question are ‘autonomy’ and ‘opacity’. The term ‘autonomy’ (whose use with regard to machines has often been criticised because of its ethical dimension and close connection with notions of free human will) refers to a certain lack of predictability as far as the reaction of the software to unseen instances is concerned. It is in particular when coding of the software has occurred wholly or partially with the help of machine learning (although the notion of AI should be broader and more technologically neutral)[4] that it is difficult to predict how the software will react to each and every situation in the future.[5]
Supervised machine learning is about inferring a function from labelled input-output data pairs. This function should allow the algorithm to correctly determine the output, such as class labels, for unseen instances. There are various reasons why the resulting algorithm may later behave in a somewhat unpredictable manner. Errors in labelling apart, a situation may be characterised by features not represented in the training data (eg vehicle software has been trained with numerous images of oncoming traffic, but there was no image of oncoming traffic against the backdrop of an extraordinarily blazing sundown). If those features happen to be absent from the validation and testing data as well, or if they are present but outweighed by other features (eg the testing data included images with a blazing sun, but all of the oncoming traffic consisted of lorries), there may be rare situations where the AI simply behaves in an unpredicted way. Such effects may be even stronger where techniques of unsupervised learning are used, which look for previously undetected patterns in data sets with no pre-existing labels. Also reinforcement learning, which is about finding the best possible path of action in a specific situation in order to maximise cumulative reward (which is connected with the degree of achievement of a pre-defined goal), uses algorithms that are by no means immune to unpleasant surprises.
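To make the point about unseen instances more concrete, the following toy example (a minimal sketch in Python using scikit-learn; the feature values and the ‘oncoming traffic’ scenario are invented for illustration and do not represent any real perception system) shows how a model that classifies its training data perfectly can still mislabel an input whose combination of features it has never encountered:

```python
# Minimal, hypothetical illustration of supervised learning and unseen instances.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Two made-up features per image: [contrast, brightness]; label 1 = "oncoming traffic".
X_train = np.array([
    [0.8, 0.3], [0.7, 0.4], [0.9, 0.2],   # oncoming traffic, normal lighting
    [0.2, 0.5], [0.1, 0.6], [0.3, 0.4],   # empty road, normal lighting
])
y_train = np.array([1, 1, 1, 0, 0, 0])

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Unseen instance: oncoming traffic against a blazing sunset, ie low contrast and
# extreme brightness -- a feature combination absent from the training data.
unseen = np.array([[0.15, 0.95]])
print(model.predict(unseen))  # prints [0], ie "no oncoming traffic", although traffic is present
```

The model is not ‘defective’ in any obvious sense; it has simply generalised from data that did not cover this situation, which is precisely the kind of unpredictability described above.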
While unpredicted behaviour in new situations nobody had ever thought about may also occur with software of a traditional kind, algorithms created with the help of machine learning cannot easily be analysed, in particular not when sophisticated methods of deep learning have been used. This ‘opacity’ of the code[6] (‘black box effect’) means that it is not easy to explain why the AI behaved in a particular manner in a given situation, and even less easy to trace that behaviour back to any feature which could be called a ‘defect’ of the code or to any shortcoming in the development process.
2 ‘Complexity’, ‘openness’ and ‘vulnerability’ (IoT)
With everything becoming connected in the so-called Internet of Things (IoT), further challenges for liability law arise. Where everything potentially affects the behaviour of everything, it may become close to impossible for a victim to prove what exactly caused the damage. For example, where a smart watering system for the garden floods the premises while the owner is away, this may be the effect of the watering system itself being unsafe, but there might also have been an issue with a humidity sensor the owner had bought separately, or with the weather data supplied by another provider.
It is difficult to draw a line between the phenomenon of ‘complexity’[7] and the phenomenon of ‘openness’.[8] The former arguably captures more the number of different components in digital ecosystems and their interdependencies, while the latter captures more the fact that components are not static but dynamic and are subject to frequent or even continuous change. It is in particular through the provision of over-the-air (OTA) updates for embedded or accessory software as well as through a variety of different data feeds and cloud-based digital services that products change their safety-relevant features after the product has been put into circulation.
Connectivity also gives rise to increased vulnerability,[9] in particular due to cyber security risks (as external attackers might access products remotely in order to cause harm) and privacy risks (as the data collected by connected devices may easily be transferred to third parties) as well as a number of related risks, such as risks of fraud.
3 ‘Distributedness’ and ‘anonymity’ (DLT)
Last but not least, blockchain and other distributed ledger technologies (DLT) pose very specific challenges to liability regimes because risks may often no longer be attributed to one individualised party or a small group of individual parties, but are instead created through the interaction of very large numbers of different parties, many of whom are anonymous and not identifiable.
B Safety responses and liability responses
Responses to such challenges may be divided into safety responses and liability responses. Sometimes, the former are called ‘ex ante responses’ and the latter ‘ex post responses’, but this terminology is ambiguous as the term ‘ex post responses’ is used with different meanings.[10] Approaches as to the right balance between safety and liability differ. A purely economic approach, which has been the prevailing approach, for example, in the US, insists that safety measures should be taken only to the extent that the overall cost of these measures is still lower than the overall cost of harm likely to be caused[11] (cf, eg the ‘Learned Hand Formula’ for ascertaining the appropriate level of care[12]). Where, however, the cost of precautionary measures would exceed the overall cost of harm caused, such measures need, or should, not be taken because simply letting harm occur and compensating victims later would serve efficiency.[13] Some would go as far as saying that this holds true even where no compensation is awarded (Kaldor-Hicks criterion[14]). Europe has always taken a different path, for various reasons, including that death, personal injury, and (other) fundamental rights infringements cannot simply be reduced to a monetary figure and that the purely economic approach often fails to take into account the real cost of accidents, eg the economic harm caused by a general lack of trust on the part of consumers and other collective harm as well as social concerns.[15]
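In its standard textbook rendering (given here for illustration, not as a quotation from the cited source), the Learned Hand test compares the burden of precautions B with the expected harm, ie the probability P of a loss multiplied by its gravity L:

```latex
% Standard rendering of the Learned Hand formula (illustrative):
% B = burden (cost) of the precaution, P = probability of the loss, L = gravity of the loss.
\[
  B < P \cdot L \;\Longrightarrow\; \text{omitting the precaution is negligent,}
  \qquad
  B \geq P \cdot L \;\Longrightarrow\; \text{the precaution need not be taken.}
\]
```

On the purely economic reading described above, the second branch expresses the idea that it is more efficient to let the harm occur and, at most, compensate it afterwards.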
These considerations mean that liability cannot, or can only to a minimal extent, replace safety requirements. This does not mean that safety responses and liability responses are entirely independent from each other.[16] Quite on the contrary, it is highly advisable to create strong links between safety on the one hand and liability on the other. This means inter alia that, where a safety risk materialises, it should be precisely the person that was responsible for avoiding that risk by way of precautionary measures who is also held liable for the resulting harm. There may also be a need for further links, such as ‘safe harbour’ arrangements where companies can prove they complied with all applicable safety standards, or alleviations of the burden of proof for victims where it is clear that a company did not comply with a standard.
C Classification of liability regimes
So far, a number of very different liability responses have been discussed with a view to their ability to successfully grapple with the challenges posed by emerging technologies.
1 Electronic personhood?
One of the most conspicuous proposals has been the proposal to recognise electronic personhood, ie to recognise that certain highly sophisticated robots and software agents may themselves be the addressees of legal duties and obligations as well as the holders of legal rights. The debate has been greatly fuelled by the European Parliament adopting, in 2017, a Resolution on Civil Law Rules on Robotics with recommendations to the Commission,[17] according to which, in the long run, legislators should award legal personality to some very advanced AI systems. It has also been fuelled by reports about Saudi Arabia awarding citizenship to a robot called Sophia and about companies worldwide electing an AI as a board member. In the liability context, the idea is roughly that, if it is increasingly difficult to trace harm triggered by AI back to any kind of human behaviour, it would only be natural to make the AI itself liable.
The proposal has met with a great deal of resistance.[18] Much of the resistance had its roots in ethical considerations, as any attempt to put machines on an equal footing with human beings and afford them the same or similar rights is apt to blur the fundamental difference between human beings and things.[19] There was also the very pragmatic reason that, obviously, making AI itself liable does not make sense unless the AI has the financial means to pay compensation, which means the AI would have to be equipped with funds or with equivalent insurance; however, if this were the case, it would be much simpler to hold the developers, producers, operators or users of AI liable as they could take out insurance as well. Looking at the matter more closely, it seems that the only effect of recognising electronic personhood might be that we help the human beings that are responsible for the AI to escape that responsibility and hide behind an additional shield. This is why electronic personhood is hardly a solution to any kind of liability problem.[20]
2 Fault liability
Fault liability, which has been the cornerstone of extra-contractual liability in more or less all European jurisdictions[21], is poorly equipped to respond to the challenges posed by emerging digital technologies. Quite on the contrary, it is the heavy reliance of many jurisdictions on fault liability that is at the bottom of many of the problems that have been identified. Both autonomy and opacity make it difficult to trace harm back to any kind of intent or negligence on the part of a human actor. For very similar reasons, fault liability is hardly an appropriate response to the phenomena of complexity, openness and vulnerability of digital ecosystems, and the same holds true for distributedness and anonymity. However, fault liability remains an important element of liability also in the digital age.[22]
Interestingly, the EP Proposal for a Regulation on AI liability[23] includes not only a strict liability regime for ‘high-risk’ applications but also a harmonised regime of rather strict fault liability for all other AI systems. Article 8 provides for fault-based liability for ‘any harm or damage that was caused by a physical or virtual activity, device or process driven by the AI-system’, and fault is presumed, ie it is for the operator to show that the harm or damage was caused without his or her fault. In doing so, the operator may rely only on one of the following grounds: (a) the AI-system was activated without his or her knowledge while all reasonable and necessary measures to avoid such activation outside of the operator’s control were taken, or (b) due diligence was observed by performing all the following actions: selecting a suitable AI-system for the right task and skills, putting the AI-system duly into operation, monitoring the activities and maintaining the operational reliability by regularly installing all available updates. This is problematic for a number of reasons. First of all, liability under art 8 is unreasonably strict as it seems that the operator must, in order to escape liability, demonstrate due diligence in all aspects mentioned, even if it is clear that lack of an update cannot have caused the damage. Possibly this result can be avoided by way of common sense interpretation, but it is still the case that, in the absence of any restriction to professional operators, even consumers would become liable for any kind of AI device, from a smart lawnmower to a smart kitchen stove. This would mean burdening consumers with obligations to ensure that updates are properly installed, irrespective of whether or not, in the light of their digital skills and the way the product was marketed, installing the updates could be expected of them, and possibly confronting them with liability risks they would hardly ever have had to bear under national legal systems.
It is to be hoped that this provision will never make it into EU legislation. It seems that some players supported it because they want to pre-empt stricter liability at the national level, while other players supported it in the mistaken belief that this would benefit consumers.
3 Non-compliance liability
Liability may also be triggered by the infringement of particular laws or particular standards whose purpose includes the prevention of harm of the type at hand. We find this type of liability regime both at EU level and at national level. An example of non-compliance liability at EU level is art 82 of the GDPR,[24] which attaches liability to any infringement of the requirements set out by the GDPR. At the national level, there may be both general clauses attaching liability to the infringement of protective statutory provisions[25] and specific liability regimes attaching liability to non-compliance with very particular standards.
As is illustrated by the example of art 82 of the GDPR, non-compliance liability is, in principle, an appropriate means of grappling with the problems raised by emerging technologies. However, non-compliance liability is always of an accessory nature, ie there needs to be a basic regime setting out in some detail the duties and obligations to be met in order to be considered compliant. Thus, non-compliance liability is an appropriate response to emerging digital technologies only if the legislator provides for quite detailed regulation as far as the use of those technologies, such as AI, the IoT and DLT, is concerned. It should also be noted that, in a number of national jurisdictions, efforts are being made to impose non-compliance liability only in cases where the potential tortfeasor was at fault.
4 Defect liability
A number of different liability regimes in jurisdictions in Europe may be described as types of ‘defect liability’, although this is not a term of art. In the extra-contractual realm, the most important and most conspicuous form of defect liability is product liability, which has been harmonised by the Product Liability Directive (PLD).[26] Product liability does not require fault, but it still requires a particular objective shortcoming in the sphere of the producer as the addressee of liability, ie that the product put into circulation was defective at the time when it left the producer’s sphere.[68] The development risk defence, which Member States were free to implement or not, moves product liability into the vicinity of fault liability, though.[69]
Product liability is only the most conspicuous form of defect liability and the one where the term ‘defect’ is in fact used. However, when looking more closely at liability regimes in national jurisdictions,[70] it becomes apparent that there is a panoply of different forms of liability that are all based on the unsafe or otherwise objectionable state of a particular object within the liable person’s sphere of control. Many of these forms of liability are somewhat at the borderline between fault liability and defect liability, as they are based on a presumption of fault, which the liable person is free to rebut under particular circumstances. Even some forms of vicarious liability under national law may be qualified, at a closer look, as forms of defect or malperformance liability, eg, vicarious liability may be based on the generally ‘unfit’ nature of the relevant auxiliary in terms of personality or skills,[27] or on the fact that the human auxiliary failed to meet a particular objective standard of care.
Defect liability remains very important also in the context of emerging digital technologies.[71] It is of the essence to make defect liability fully effective also in digital environments, which may imply, inter alia, that vicarious liability is fully extended to situations where sophisticated machines are used in lieu of human auxiliaries, ie, parties may not escape liability by outsourcing a particular task to a machine rather than to a human auxiliary.[28] However, it is also clear that emerging digital technologies, notably AI, make it increasingly difficult to identify a defect due to the autonomy of software and software-driven devices as well as to the opacity of the code. Complexity, openness and vulnerability of digital ecosystems may, in addition, make it difficult to identify the source of the defect, ie who is responsible for the defect.
5 Strict liability
While it is not uncommon to call any kind of no-fault liability ‘strict liability’, the term should arguably be reserved for forms of liability that do not require any kind of non-compliance, defect or malperformance but are based more or less exclusively on causation. At a closer look, some further requirements beyond causation may have to be met, such as that the risk that ultimately materialised was within the range of risks covered by the relevant liability regime, and there may possibly be defences, such as a force majeure defence.[29]
Strict liability is an appropriate response to situations where significant and/or frequent harm may occur despite the absence of any fault, defect, mal-performance or non-compliance. It may also be an appropriate response where such elements would be so difficult for the victim to prove that requiring such proof would lead to under-compensation or inefficiency. The further extension of strict liability may therefore be justified for AI applications because the ‘autonomy’ and ‘opacity’ of AI may give rise to exactly the kind of difficulties strict liability is designed to overcome.[30]
Strict liability does not help the victim where causation as such is an issue, ie where it is unclear whether the harm was caused by the product or activity in question or by something else, or where the identity of the tortfeasor is unknown. However, strict liability may nevertheless be an appropriate response to features such as ‘complexity’, ‘openness’ and ‘vulnerability’ that come with the IoT. The reason is that, while it may be difficult to identify which component of a complex and connected digital ecosystem has caused the harm by being defective, it is usually very simple to say which component ultimately caused the damage in a more physical sense. For instance, where it is unclear whether the flooding of the premises was due to a defect of the watering system itself, a humidity sensor, or a data feed, it is absolutely clear that the water came from the pipes. Thus, if the legislator introduced strict liability for smart watering systems, this could mean that whoever is the addressee of this strict liability would have to compensate victims also for harm originating outside their own sphere (subject to redress).
Likewise, assuming that enforceable rules were in place that would force DLT systems to have an identified responsible economic operator within the EU, strict liability of that responsible economic operator for the manifestation of risks inherent in the DLT system might be a solution for problems associated with the ‘distributedness’ and ‘anonymity’ that may come with DLT.
The cornerstone of the EP Proposal for a Regulation on AI liability[31] is a strict liability regime for ‘high-risk’ applications, which are to be exhaustively listed in an Annex. Given the rapid technological developments and the required technical expertise, the idea is that the Commission should review that Annex without undue delay, but at least every six months, and if necessary, amend it through a delegated act.[32] For the applications subject to strict liability, mandatory insurance is being proposed.[33]
III A risk-based approach to strict liability
It follows from what has been said so far that strict liability may indeed be an appropriate response for many problems associated with emerging digital technologies, not just with AI. However, the question arises whether this is true for all safety risks or only for particular types of safety risks.
A Classification of safety risks
‘Safety risks’ inherent in a product comprise all risks of harm that may be caused to people or to assets and values different from the relevant product itself. For example, where an autonomous vehicle hits a pedestrian, this is clearly a safety risk. By contrast, functionality risks comprise risks of the software or other product not performing properly, ie the user not getting ‘good value for money’. Thus, if the autonomous vehicle stops running, this is a functionality risk. Functionality risks are typically a matter of contract law and will not be dealt with in this context.
1 Classification according to the type of harm caused
Safety risks can be classified into different categories, depending on the type of harm that is or might be caused.
a Physical risks
Traditionally, death, personal injury, and damage to property have played a special role within liability frameworks.[72] These special risks can be described as ‘physical’ risks. Physical risks continue to play their very special role also in the digital era, but the concept must be understood more broadly and include not only death, personal injury, and damage to property in the traditional sense, but also damage to data and to the functioning of other algorithmic systems. Where, for example, the malfunctioning of software causes the erasure of important customer data stored by the victim in some cloud space, this should have the same legal effect as the destruction of a hard disk drive or of paper files with customer data (which is not to say that all data should automatically be treated in exactly the same way as tangible property in the tort liability context).[34] Likewise, where tax management software causes the victim’s customer management software to collapse, this must be considered a physical risk, irrespective of whether the customer management software was run on the victim’s hard disk drive or somewhere in the cloud within an SaaS scheme. While this is unfortunately still disputed under national tort law,[35] any attempt to draw a line between data stored on a physical medium owned by the victim and data stored otherwise seems to be completely outdated and fails to recognise the functional equivalence of different forms of storage.
b Pure economic risks
Pure economic risks[36] are economic risks that are not just the result of the realisation of physical risks. For example, where medical software causes a surgery to fail, resulting in personal injury and consequently in hospitalisation and loss of earnings, the costs of hospitalisation and the loss of earnings are economic harm that simply results from the personal injury; this is not a ‘pure’ economic risk. Where, on the other hand, a harmful recommendation is given by AI to consumers, resulting in these consumers buying overpriced products, the financial loss caused is not in any way connected with the materialisation of a physical risk, which is why the risk of causing such financial loss qualifies as a pure economic risk. Traditionally, the threshold for the law to provide compensation for pure economic loss (as the result of the materialisation of pure economic risks) is very high. Pure economic loss is not covered by the PLD, and national tort law systems are usually rather reluctant to grant such compensation.[37]
c Social risks
Social risks (often also called ‘fundamental rights risks’) include discrimination, exploitation, manipulation, humiliation, oppression and similar undesired effects that are – at least primarily – non-economic (non-pecuniary, non-material)[73] in nature but that are not just the result of the materialisation of a physical risk either (as the latter would be dealt with under traditional regimes of compensation for pain and suffering, etc). Such risks have traditionally been dealt with primarily by special legal regimes, such as data protection law,[74] antidiscrimination law[75] or, more recently, law against hate speech on the internet and similar legal regimes.[38] There is also a growing body of more traditional tort law that deals specifically with the infringement of personality rights.[39] While the fundamental rights aspect of social risks is in the foreground, it should not be overlooked that these risks can be linked to economic risks either for the affected individual or for society as a whole (eg HR software that favours male applicants creates a social risk by discriminating against female applicants, but this also leads to adverse economic effects for the affected women).
Adverse psychological effects can be either physical risks[40], where the effect is a diagnosed illness according to WHO criteria (such as depression), or social risks, where the effect is not a diagnosed illness, but, for example, just stress or anxiety. It is not always easy to draw a line between the two.[41]
2 Direct and intermediated safety risks
Another important differentiation is that between direct and intermediated risks. Risks are intermediated when the harm is caused by the free decision of a third party, which was, however, in some way instigated or facilitated by the technology. Risks that are not intermediated are considered as direct risks. For instance, where a medical recommender system suggests a particular diagnosis and treatment to the doctor in charge, it is ultimately the doctor who takes the decision as to which is the right diagnosis and treatment, and it is the doctor who, if this diagnosis is wrong and the treatment causes harm to the patient, has directly caused the harm. However, it is also clear that the recommender system has created some sort of risk by suggesting the wrong diagnosis and treatment to the doctor.
3 Individual and collective (systemic) safety risks
Risks are individual when the potential harm almost exclusively affects one or several individuals (eg the victim of an accident, and perhaps her family) as contrasted with collective risks that primarily affect society or economy as a whole (eg manipulation of voting behaviour) and are more than just the sum of individual risks. Some of these risks may be described as ‘systemic’ risks because they may seriously affect bigger systems, such as the integrity of networks. Collective risks are difficult to classify into physical, pure economic and social risks, and many of these risks have elements of each category; eg, software used for trading securities may cause a stock exchange and maybe the entire economy to break down, which affects servers (and thus property, indicating a physical risk), leads to huge financial losses (a pure economic risk) and possibly to a public loss of trust in the trading system (a social risk).
Individual risks that affect a large number of people may also become collective risks, eg the manipulation of a large number of consumers may have effects on our economy as a whole. The aggregate effect of many physical risks may also amount to a collective risk. For instance, where chatbot software spreads misinformation concerning the COVID-19 crisis on the internet, causing one million individuals in a given country to fall ill, this may amount to more than just one million times the harm caused to one individual, as it may mean a collapse of the medical system and possibly of the economy at large. Collective risks call for special responses on the liability side, such as strong collective redress mechanisms (eg by consumers), administrative and criminal sanctions, etc.[76]
4 Typical, general and atypical safety risks
Finally, there is also the differentiation between typical and atypical risks. A typical risk is a risk that is characteristic of the specific intended function, operation or use of a software or other product, eg a smart watering system floods the premises.
A general risk is a risk that is not typical in this sense but that is still characteristic of a broader range of products, such as all connected products. For instance, cyber security risks or privacy risks would be general risks associated with the IoT (eg due to a security gap someone hacks the smart watering system, thereby gaining access to the whole smart home framework, deactivating the alarm system and committing a burglary). While they are not characteristic of the specific intended function, operation or use of a software or other product (after all, the same effect might have been achieved if the burglars had hacked a smart water kettle or fridge), they are characteristic of the fact that products are connected.
An atypical risk is a risk that is not at all characteristic of the specific intended function, operation or use of the software or other product, eg a person cuts their finger due to a sharp edge on the watering system’s handle. While flooding the premises is characteristic of anything that involves water, and in particular the spreading of water, a person could cut their finger with thousands of different things that have handles, ranging from a bag to a vacuum cleaner.
B Strict liability – for which types of risks?
The following safety risk matrix provides illustrations for different categories of risks, depending on whether the risks are primarily of a physical, (purely) economic or social nature, whether they are typical or atypical of the relevant product, and whether they are direct or intermediated.[42] The question arises for which of these safety risks posed by emerging technologies strict liability is an appropriate response.

1 Restriction to physical risks in the broader sense
Strict liability should, in any case, cover physical risks in the broader sense, ie not only including death, personal injury and property damage in the traditional sense but also including damage to data and networks as well as psychological harm that amounts to a recognised state of illness.[43] The manifestation of such risks can be ascertained objectively and should be prevented in any case, as we do not readily tolerate any technology causing uncompensated harm to human health and property.
Where, however, pure economic risks or social risks are concerned, strict liability is hardly appropriate unless further conditions are added, such as non-compliance with mandatory legal standards or some defect or malperformance, which would, however, mean that the regime is no longer strict liability in the proper sense. On 20 October 2020, the European Parliament nevertheless passed a report with a Proposal for a Regulation,[44] whose art 2(1) reads: ‘This Regulation applies ... where ... an AI-system has caused harm or damage to the life, health, physical integrity of a natural person, to the property of a natural or legal person or has caused significant immaterial harm resulting in a verifiable economic loss.’, and art 3(i) provides for a corresponding definition of ‘harm or damage’. While life, health, physical integrity and property are not surprising, the inclusion of ‘significant immaterial harm resulting in a verifiable economic loss’ is. If immaterial harm (or the economic consequences resulting from it, such as loss of earnings due to stress and anxiety that do not qualify as a recognised illness) is compensated through a strict liability regime whose only threshold is causation,[45] the situations where compensation is due are potentially endless and difficult to cover by way of insurance. This is so because there is no general duty not to cause significant immaterial harm of any kind to others, unless it is caused by qualified non-compliant conduct (such as by infringing the law or by intentionally acting in a way that is incompatible with public policy). For example, where AI used for human resources management (assuming such AI were qualified as a ‘high-risk’ application) leads to a recommendation not to employ a particular candidate, and that candidate therefore suffers economic loss by not receiving the job offer, full compensation under the Proposal for a Regulation would be due even if the recommendation was absolutely well-founded and if there was no discrimination or other objectionable element involved. While some passages of the report seem to choose somewhat more cautious formulations, basically calling upon the Commission to conduct further research,[46] Recital 16 explains very firmly that ‘significant immaterial harm’ should be understood as meaning harm as a result of which the affected person suffers considerable detriment, an objective and demonstrable impairment of his or her personal interests and an economic loss calculated having regard, for example, to annual average figures of past revenues and other relevant circumstances. It is to be hoped that this proposal will not make it into any final EU legislation.
2 How to deal with intermediated risks
If only physical risks in the broader sense should be covered, this may still mean very different things. In particular, the question arises whether intermediated physical risks should lead to strict liability. Intermediated risks may be typical risks (such as the risk of a wrong recommendation by a medical expert system, which then leads to a wrong diagnosis and to damage to health)[77], or general risks (such as the risk of a device being hacked, which then leads to a burglary and property damage).
The situation with cyber security risks seems to be rather straightforward. If such risks were not included simply because they materialise only where a third party hacks into the system and causes harm, the victim would normally go uncompensated as the hacker will often never be identified. So harm resulting from this type of intermediated risk should certainly be compensated.
Things are more difficult with recommender systems as it is usually an identified human actor who takes the full responsibility for the ultimate decision.[78] All sorts of factors may influence human decisions, and we need to delineate relevant and irrelevant intermediated risks. For instance, a doctor’s husband may have caused the doctor to make the wrong decisions by breaking up the relationship and inflicting emotional stress on the doctor, but it is clear that the husband cannot be liable for the harm thus caused to the patient (and national tort law would avoid this result, using different lines of argumentation, including that this effect was too remote and that there was no specific duty on the part of the husband to protect the health of his wife’s patients). While this may be a rather clear-cut case, it is less clear whether the provider of an online medical database, which contains faulty information on the symptoms of a particular disease, thus prompting the doctor to make the wrong decision, can become liable for the harm thus caused to an individual patient. There is, at the moment, a similar case pending before the CJEU, and the Court will hopefully clarify in a preliminary ruling whether this type of scenario may lead to liability under the PLD.[47]
Arguably, a line needs to be drawn between functionality and mere content. Where a medical recommender system suggests a particular diagnosis and treatment, this suggestion is generated by the system and its specific functionality. This can be compared to the functionality of a traditional medical device, such as a thermometer – if a thermometer falsely indicates that the patient’s body temperature is normal, while in reality the body temperature is 41 degrees Celsius, there would not be the slightest doubt that this potentially falls under the PLD, and the same should hold true for strict liability. Where, however, a medical journal publishes scientific articles online, and one of the articles includes wrong information, the functional equivalent would be a printed book, and this would merely be a question of the content displayed. Subject to what the CJEU will rule in the end, the latter should not be part of strict liability regimes.
Having said this, intermediated risks cannot just be dealt with in exactly the same manner as direct risks. For instance, where the producer is strictly liable, it must be able to rely on defences, eg there should not be liability in the case of recommender systems where it was entirely unreasonable for the person that made the decision to rely on the recommendation and such a use of the recommender system was not within the range of possible forms of use which the producer had to take into account.
3 Excluding atypical risks
Things are somewhat different with entirely atypical risks. In the context of other types of liability, the question of whether a risk is typical, general or atypical is absorbed by other elements, such as reasonable foreseeability,[48] probability[49] or scope of the rule.[50] Where the elements of fault liability are fulfilled, there is no reason why the atypical nature of the risk should give rise to an exclusion from liability, eg where somebody negligently produced the handle of a robot in such a way that it was foreseeable that people would cut their fingers, there should be liability. The same holds true for defect liability, including product liability under the PLD, as it is the lack of safety the public at large is reasonably entitled to expect that gives rise to liability of the producer.[51] However, atypical risks should be excluded from strict liability in the proper sense, eg, assuming that the EU legislator qualified AI-driven watering systems as ‘high-risk’ AI-systems and therefore attached strict liability to damage caused by AI-driven watering systems, this should definitely not include damage caused by a sharp edge on the handle.
4 Risk-response matrix
At the end of the day, and referring to the safety risk matrix introduced above (before 1), this leads to the following risk-response matrix. It illustrates that, compared with the overall range of risks posed by AI and other emerging technologies, it is only a relatively narrow group of risks that should be addressed by way of strict liability. All other risks may be extremely important, in particular the social risks posed by AI, but they must be dealt with under other regimes of liability, such as non-compliance liability.

C Strict liability – for which level of risk?
In the context of new technologies, a risk-based approach is usually considered to be appropriate. Calling for a risk-based approach is basically making a proportionality argument, claiming that liability should only be imposed to the extent that this is justified by the risk posed.[52] Where the risk posed differs significantly across the potential scope of legislation, this is either an argument for taking a sectoral approach right away, or for otherwise taking more targeted action, such as by differentiating between different regimes of liability within the scope of an instrument.[53]
1 A risk-based approach to regulation
There are various different techniques for achieving a more risk-based approach, such as exclusions from scope (eg low value transactions below a particular threshold amount, vehicles below a particular maximum speed) or setting up specific conditions for specific measures within an instrument. Where a very broad range of different risk classes needs to be tackled, separate risk classification may be an option, which is a technique used by the Medical Device Regulation[54] and suggested for all sorts of algorithmic systems by the German Data Ethics Commission.[55]
A similar technique that is currently being discussed in the context of AI liability and recommended, inter alia, by the European Parliament in its Proposal for a Regulation is a combination of a general legal instrument that provides, in a rather abstract manner, provisions for ‘high-risk’ applications and for other applications. What counts as ‘high-risk’ follows from an enumerative list in an Annex, which may be updated at regular intervals by delegated acts or in similar ways.[56] The list of ‘high-risk’ applications could either refer to sectors (such as energy or transport) or to types of applications (such as HR software or personal pricing software), or to both.[57] The need to reduce the scope of liability regimes, which are generally justified according to general principles of the law of non-contractual liability, may depend, eg, on the following factors: (a) the gravity of the risk created by the product placed on the market by a particular party; (b) the likelihood that such a party becomes liable despite having taken optimal safety precautions; and (c) considerations of clarity, certainty and practicality of the law.
Considering these factors, it becomes clear that there is never a reason to reduce the scope of fault liability, because where the relevant party is at fault, the value of (b) is automatically zero, and (c) militates against differentiations anyway, so fault liability must apply in all cases. With forms of defect liability that require proof of both a defect and causation (such as product liability), ie where it is clear the product placed on the market is defective and has caused the damage, the value of (b) is likewise minimal. However, there is still a certain likelihood that the relevant party becomes liable despite optimal safety precautions, even though this likelihood is very low (but it may justify certain restrictions). Where, however, proof of causation is not required, the likelihood that the relevant party will become liable even though, for example, a different risk has in fact materialised, is quite high. This risk is particularly high for strict liability, ie where the victim does not even have to prove that there has been a defect. This is why the scope of strict liability needs to be very narrow and restricted to cases where the value of (a) is particularly high and further considerations under (c), such as to foster trust in a new technology, militate in favour of its introduction.
2 Definition of ‘high-risk’ AI applications
If strict liability needs to be restricted to cases of ‘high-risk’ AI applications, the question arises as to how these ‘high-risk’ applications should be defined. Article 3(c) of the EP Proposal for a Regulation on AI liability defines as ‘high risk’ a ‘significant potential ... to cause harm or damage ... in a manner that is random and goes beyond what can reasonably be expected.’[58] According to the Proposal, the significance of the potential depends on the interplay between (a) the severity of possible harm or damage, (b) the degree of autonomy of decision-making, (c) the likelihood that the risk materialises and (d) the manner and the context in which the AI-system is being used.
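Purely by way of illustration (the Proposal itself prescribes no formula, and the scales, weights and threshold below are entirely hypothetical), the interplay of these four factors could be pictured as a simple scoring function:

```python
# Hypothetical sketch only: art 3(c) names four factors but no formula; the 0..1
# scales, weights and threshold here are invented purely for illustration.
from dataclasses import dataclass

@dataclass
class AIApplication:
    severity: float     # (a) severity of possible harm or damage, scaled 0..1
    autonomy: float     # (b) degree of autonomous decision-making, scaled 0..1
    likelihood: float   # (c) likelihood that the risk materialises, scaled 0..1
    context: float      # (d) manner and context of use (eg public spaces -> higher), 0..1

def is_high_risk(app: AIApplication, threshold: float = 0.5) -> bool:
    # Multiplicative interplay: if any factor is negligible, the overall
    # significance stays low, reflecting that the factors operate together.
    significance = (app.severity * app.likelihood
                    * (0.5 + 0.5 * app.autonomy)
                    * (0.5 + 0.5 * app.context))
    return significance >= threshold

# eg a heavy autonomous delivery robot operating in busy public spaces
print(is_high_risk(AIApplication(severity=0.9, autonomy=0.9, likelihood=0.8, context=0.9)))  # True
```

Whether any such quantification is workable in practice is, of course, a separate question; the point is merely that ‘high risk’ is a function of several interacting parameters rather than of severity alone.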
For safety purposes and essential requirements to be met by AI systems (eg in the new regulatory framework for AI), any risk-classification of AI should consider social risks and possibly pure economic risks, including risks of a collective nature. However, as has been explained in some detail under B.1, strict liability should be restricted to physical risks in the broader sense.
AI systems that pose ‘high risks’ in a physical sense are probably very similar to the devices that have already been strong candidates for strict liability under many national legal systems before the emergence of AI. Applications such as road traffic vehicles or bigger drones should remain subject to specific legislation where AI is involved. This is because victims should be treated alike if they are exposed to and ultimately harmed by similar dangers.[59] For the time being, candidates for strict liability are primarily objects of a certain minimum weight, moved at a certain minimum speed, that move outside confined boundaries and may therefore expose a larger number of persons to risk.[60] This could, for example, include AI-driven delivery robots, large cleaning robots or big lawnmowers operating in public spaces.
IV Addressees of strict liability
Having defined the risks for which strict liability for emerging technologies might seem appropriate, the next step is the identification of the appropriate addressee of liability. The answer to the question of who should be the addressees of liability might indirectly also determine the overall shape which any future legislation in the field will take: if liability is on the producer within the meaning of the PLD, much is to be said in favour of a solution that integrates the relevant provisions in the PLD (despite the fact that the PLD otherwise provides for a regime of defect liability). If liability is on other parties, this could speak in favour of introducing an entirely new legislative instrument.
A The possible addressees
There are different possibilities as to who should bear the burden of strict liability: either liability could be on the producer (or other party responsible for product safety) or on the operator of the device; if the latter, different parties might potentially qualify as operator. Other parties, such as distributors, providers of online marketplaces, or users that are not operators may possibly come into the equation under very specific circumstances, while it is plainly unrealistic (not least because of the requirements of insurance) to hold them strictly liable also under normal circumstances.
1 Producer or other party responsible for product safety
If the producer or any other party that is responsible for product safety and post-market surveillance in lieu of the producer under product safety legislation (such as the Market Surveillance Regulation[61]) were the party that assumes strict liability for emerging technologies such as AI, this would ensure that safety and liability are treated in a parallel manner. The liable party would be exactly the party that has to make sure the technology is safe when put into circulation and remains safe throughout its life-cycle (and, if the technology becomes unsafe, has to take appropriate action). This seems to be a very plausible solution, in particular as the producer is the party with the highest degree of control concerning the development of AI, concerning any software updates or further machine learning, and concerning the interaction of the product with other components of digital ecosystems.
The downside of this solution is, however, that existing strict liability regimes in the national jurisdictions are almost exclusively about strict liability of the owner or other long-term deployer of a device.[62] This holds true, for example, for strict liability for motor vehicles, which has been introduced by the vast majority of Member States. The same holds true for strict liability for aircraft (including larger drones) and many other dangerous facilities, such as nuclear plants. If strict liability specifically for AI-driven devices were to be introduced, channelling this liability to the producer would mean a split regime.
2 Frontend operator
Considering the desire to achieve coherence with existing strict liability regimes under national law, it would be only logical to introduce strict liability for the owner or other long-term deployer of a device, ie the party that decides about the concrete use of the device, bears the economic risk of its operation and derives the economic benefits from it. Following work conducted by the author of this paper in the context of the EG-NTF, which finally made it into the EG-NTF report,[63] this party has also come to be called the ‘frontend operator’.[64] Making this party liable would be equally logical, given that it is the frontend operator that usually decides when, where, how often, and under what circumstances a device is used. This in turn greatly influences the degree to which others are exposed to risk (eg, the owner of an autonomous vehicle who uses that vehicle every day to and from the workplace causes a risk that is many times higher than the risk posed by another owner of the same type of vehicle who only uses the vehicle once a week for shopping).
While a lot is to be said in favour of holding frontend operators strictly liable for AI to the extent that they are already strictly liable under national law for functionally equivalent devices, it is questionable whether the same should hold true for types of devices that have never previously been included in strict liability regimes under national law. In particular, if plans to have a highly dynamic system should materialise, under which the European Commission or another body would decide within short intervals (eg every six months) whether to add new types of devices to the list of ‘high-risk’ applications or to strike applications from the list,[65] it would be very difficult to make frontend operators strictly liable as these parties would have to constantly monitor the development of legislation and adapt their insurance coverage accordingly. Private parties can hardly be expected to do so, or can only be expected to do so on very rare occasions when an entirely new technology enters the market. Thus, if the decision were made to hold the frontend operator liable, this should normally not affect consumers, but only professional frontend operators.
3 Backend operator
In the course of the discussions about strict operator liability, the author of this paper had, within the work for the EG-NTF, also introduced the term ‘backend operator’, which has likewise made it both into the report of the EG-NTF[66] and into the EP Proposal for a Regulation on AI liability[67]. The latter defines ‘backend operator’ as ‘any natural or legal person who, on a continuous basis, defines the features of the technology and provides data and an essential backend support service and therefore also exercises a degree of control over the risk connected with the operation and functioning of the AI-system’. The EG-NTF stresses that a more neutral and flexible concept of ‘operator’ should be preferred, which refers to the person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from such operation. ‘Control’ is explained to be a variable concept, meaning, in the case of a frontend operator, activating the technology and determining the output or result (such as entering the destination of a vehicle or defining the next tasks of a robot). The more sophisticated and more autonomous a system, the more control shifts to the backend operator, who defines and influences the algorithms, for example by continuous updates.
The beauty of a solution that relies on liability of the backend operator is that it can, in most Member States, largely be implemented by way of a simple extension of existing schemes of strict liability of the frontend operator. However, as these schemes stand today in many Member States, they include a range of defences, exceptions and exclusions that may not be appropriate for emerging digital technologies, which is why harmonisation would be highly desirable.
B Towards a differentiated system
As there appears to be a tie in the ‘match’ between the proponents of frontend operator liability and the proponents of backend operator liability or producer liability, other considerations, in particular considerations of legal certainty, coherence of the law, and overall practicality may come to the fore.
1 ‘Traditional candidates’ for strict liability
Considerations of legal certainty, coherence of the law, and overall practicality may call for a differentiated approach. In particular, we have to take into account that there exists a range of devices with a functional equivalent in the pre-AI age for which there is already strict liability under the majority of national legal systems.
a Avoiding disruption and inconsistencies
With regard to devices of a type for which the (frontend) operator is already now strictly liable under the majority of legal systems (eg motor vehicles, larger drones), strict liability should be the same irrespective of whether or not a device contains AI. The main reason for this is that the effect for the victim is by and large the same, regardless of whether an accident was caused by an AI component or another component of the vehicle. Having different liability regimes depending on whether the vehicle was below or above a particular level of automation does not seem truly convincing, in particular not if this might result in different levels of compensation for the victim.
This is not to say that a vehicle’s level of automation may not have any effect at all on liability. Of course, considering that with a rising level of automation control over the risks posed by the vehicle shifts from the frontend operator to the backend operator, the legislator may decide that the burden of liability and/or insurance shifts from the frontend to the backend operator when the level of automation exceeds a particular threshold.
The EG-NTF also decided that, where there is both a frontend and a backend operator, strict liability should fall on the one who has more control over the risks posed by the operation. Ideally, in order to avoid uncertainty, the legislator should define which operator is liable under which circumstances; eg, the legislator could decide that, for autonomous vehicles with a level of automation of 4 or 5, it is the provider that runs the system and enters the vehicle in the national registry that is liable. This provider would therefore also take out insurance and could pass on the premiums through the fees paid for its services. Where several providers fulfil the function of backend operators, one of them would have to be designated as the responsible operator. If in doubt, and in particular for devices for which there is no official registry, the producer or designated representative, who is also in charge of product safety obligations, should be considered the backend operator.
b The need for harmonisation (not restricted to AI)
If, for ‘traditional candidates’ for strict liability, an important consideration is to avoid disruption and to have a coherent regime of liability across the board, it is likewise obvious that, ideally, the legislator should not restrict itself to harmonising strict liability for AI-equivalent devices but for all devices of a particular kind. This would mean, in particular, that the EU legislator would introduce harmonised strict liability for all sorts of motor vehicles (possibly differentiating, when it comes to the scope, between AI-driven vehicles and other vehicles because the threshold of weight and speed may be higher for the latter) as well as aircraft (including drones, again possibly differentiating between AI-driven and other drones with a higher threshold in terms of size and weight for the latter).
2 ‘New candidates’ for strict liability
With regard to devices without a functional equivalent in the pre-AI age, or where the risks posed by a functional equivalent were hardly significant enough to make the device a candidate for strict liability, the situation is different. Coherence with existing strict liability regimes in national legal systems is not much of an issue. Given that it is the AI-specific risks of such a new product that recommend its inclusion in a regime of strict liability, and as the AI-specific risks are predominantly controlled by the backend operator or producer, it should be the backend operator or producer who is strictly liable for damage caused. This is underlined by the argument that the highly dynamic regulatory model foreseen by current Proposals, which suggest that a list of ‘high-risk’ applications is to be updated at regular intervals, is only practicable where backend operators or producers are the addressees of liability.
Under no circumstances must frontend operators who are consumers be confronted with such strict liability, nor, for that matter, with any other regime of enhanced fault liability that comes close to strict liability, as has been suggested by art 8 of the EP Proposal for a Regulation on AI liability, which is misguided in several respects.
C Strict liability for AI – is it worth the effort?
Looking more closely at what has been said so far, the question arises whether introducing strict liability for AI and other emerging technologies is really worth the effort. A small tweak to the PLD, introducing an element of strict liability into what is otherwise defect liability, might fulfil the same purpose, maybe even better, while causing much less disruption and legislative effort. For example, a rule might be introduced into the PLD according to which all a victim who has been hit by an AI-driven device needs to establish is that the damage is of a type that might have been caused by the AI (eg because the cleaning robot made a move towards the victim, as opposed to the victim stumbling over the powered-off robot).
It may still be strategically wise to have strict AI liability (for backend operators or, at most, professional frontend operators), but the main benefit would be its strong ‘symbolic’ value and the fact that it is likely both to enhance public trust in the mass roll-out of AI and to put an end to uncertainty and discussions at national level, which are detrimental to the technology and might result in market fragmentation.
V Summary
Strict liability within the meaning of this paper is liability that arises independently of fault or of a defect, malperformance or non-compliance with the law, be it on the part of the tortfeasor or the tortfeasor’s auxiliary. It is an appropriate response to situations where significant and/or frequent harm may occur despite the absence of any fault, defect, malperformance or non-compliance. It may also be an appropriate response where such elements would be so difficult for the victim to prove that requiring such proof would lead to under-compensation or inefficiency.
The further extension of strict liability may be justified for AI applications because the ‘autonomy’ and ‘opacity’ of AI may give rise to exactly the kind of difficulties strict liability is designed to overcome. However, other features of modern digital ecosystems, such as the ‘complexity’, ‘openness’ and ‘vulnerability’ that come with the IoT, and the ‘distributedness’ and ‘anonymity’ that may come with DLT, may likewise require lowering the threshold for victims to receive compensation. Strict liability should therefore not be discussed exclusively within the context of AI, but in a much broader context.
AI may pose different types of risks, in particular physical risks (often called ‘safety risks’) and social risks (often called ‘fundamental rights risks’). While the social risks posed by AI may be the most important and the ones primarily deserving legislative attention, strict liability for AI should be limited to the manifestation of physical risks. These are death, personal injury or property damage, including damage to data and digital environments as well as psychological harm that amounts to a recognised state of illness. Strict liability for immaterial harm and its economic consequences, as has recently been put forward by the European Parliament in the context of AI liability, is problematic as the range of situations where compensation is due is potentially endless.
Strict liability for AI should include intermediated typical risks (eg medical recommender systems) and intermediated general risks (eg cybersecurity), subject to certain defences, but not fully atypical risks.
Concerning the controversial question of who should be the addressee of strict AI liability, much is to be said in favour of a solution that allocates liability to the party that has the highest degree of control over the risk created. With regard to the ‘autonomy’ and ‘opacity’ that give rise to AI-specific risks, this is the party that has developed the AI (producer), or the party that defines, on a continuous basis, the features of the product by providing updates and other digital services (backend operator). However, the owner or other long-term deployer (frontend operator) remains in control of more traditional risks, ie when and where the AI is deployed and how many persons are exposed. The latter are considerations that have led national legal systems to introduce strict liability for owners or long-term deployers, in particular for objects of a particular weight, or moving at a particular speed, in public spaces (eg motor vehicles, bigger drones). In the light of this apparent tie between liability of the frontend operator and that of the producer/backend operator, considerations of legal certainty, coherence of the law, and overall practicality come to the fore.
With regard to devices of a type for which the (frontend) operator is already now strictly liable under the majority of legal systems (eg motor vehicles, larger drones), strict liability should be the same irrespective of whether or not a device contains AI; however, the legislator may decide that the burden of liability and/or insurance shifts from the frontend to the backend operator once a particular level of automation is exceeded. The EU legislator should determine the cornerstones of such strict liability irrespective of whether or not a device is AI-driven, harmonising strict liability for particularly risky applications across the board.
With regard to devices without a functional equivalent in the pre-AI age, or where the risks posed by a functional equivalent were hardly significant enough to make the device a candidate for strict liability, the dynamic regulatory model foreseen by current Proposals (suggesting that a list of ‘high-risk’ applications is to be updated at regular intervals) is only practicable where producers/backend operators are the addressees of liability. Under no circumstances must frontend operators who are consumers be addressees of such strict liability, nor of otherwise enhanced liability (as is suggested by art 8 of the EP Proposal).
Taken together, this raises the question of whether introducing specific strict liability for AI applications (and, ideally, IoT applications) is the right approach, or whether a small tweak to the PLD (introducing an element of strict liability into what is otherwise defect liability) would not fulfil the same purpose, maybe even better, while causing much less disruption and legislative effort. It may still be strategically wise to have strict AI liability, but the main benefit would be its strong ‘symbolic’ value and the fact that it is likely both to enhance public trust in the mass roll-out of AI and to put an end to discussions at national level, which are detrimental to the technology and might result in market fragmentation.
© 2020 Christiane Wendehorst, published by Walter de Gruyter GmbH, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.