Abstract
Recent years have seen significant legal developments, notably due to the widespread use of artificial intelligence (AI) and reforms in domestic legal systems. In Belgium, for instance, extra-contractual liability law underwent a fundamental reform with the entry into force of Book 6 of the new Civil Code on 1 January 2025. The new regime codifies existing case law, while introducing clearer concepts aimed at improving legal certainty. This article examines whether the specific provisions on fault-based liability take into account the reality of AI, comparing the new provisions with the former rules under the old Civil Code. Whereas the new provisions of Book 6 contain some elements that may prove useful in an AI context, considerable uncertainty remains regarding certain notions. The analysis highlights some of the elements that (Belgian) scholars, courts and policymakers need to consider with regard to fault-based liability and AI. By doing so, it puts forward a research agenda and offers ways forward to ensure that the new provisions on fault-based liability will survive the challenges related to AI.
I Introduction
Recent years have been characterised by several important (legal) evolutions. One can think of the (disruptive) impact on different legal frameworks caused by the widespread use of artificial intelligence (AI).[1] Nearly all legal domains are affected by the consequences and characteristics of AI (eg its complexity, opacity, autonomy and data-drivenness[2]). Countries have also faced changes to their domestic legal orders. An example is extra-contractual liability law in Belgium. Belgian extra-contractual liability law had remained largely unchanged since the creation of the Code civil in 1804. Extra-contractual liability was mainly covered by arts 1382–1386bis of the old Civil Code (OCC).[3] As a consequence, Belgian extra-contractual liability law was largely based on judicial decisions, which were shaped by various societal developments.[4] Practitioners had to rely on an extensive body of scholarship and case law to understand the meaning of essential concepts (eg ‘fault’, ‘damage’ or ‘causation’).[5] However, on 1 January 2025, a new extra-contractual liability regime entered into force. It is incorporated in Book 6 of the new Civil Code (CC).[6] Book 6 CC codifies existing principles stemming from case law, while also including well-defined concepts. By doing so, it aims to bring more structure to persisting unclarities and to enhance legal certainty.[7] Yet, it remains to be seen whether Book 6 will eventually bring any improvement in these respects, as many provisions leave room for conflicting interpretations.[8]
In this article, we will assess whether and to what extent the specific provisions on fault-based liability in the CC take into account the reality of AI. Scholars have previously analysed, among other things, the application of the former arts 1382–1383 OCC – dealing with fault liability – in the context of AI-related damage, and thereby identified some challenges.[9] This raises the question whether the new rules on fault-based liability remedy some of these challenges and, as a consequence, may be ‘better’ equipped to deal with the reality of AI. In other words, the analysis will allow us to evaluate whether these fault-based liability rules are future-proof.[10] We will first provide the reader with some background on Book 6 CC (sec II). We will then focus on the application of the three constitutive elements of fault-based liability in an AI context, namely fault, damage and the causal link between these two elements (sec III). Based on this analysis, we will conclude with a brief assessment and highlight some points of attention that require further clarification (sec IV). Our analysis is based on desktop research and compares the provisions on fault-based liability in the OCC and the CC in light of the challenges posed by AI. Our evaluation relies on case law, relevant Belgian scholarship and the preparatory works of Book 6. When relevant, we will also refer to contributions on extra-contractual liability for AI-related damage going beyond Belgian scholarship, especially as much has been happening at the EU level.[11] Our analysis goes beyond the state of the art in two ways. On the one hand, it is one of the first international academic publications focusing on some of the new provisions in Book 6.[12] On the other hand, it is the first in-depth publication assessing whether and how the new provisions in the CC dealing with fault-based liability fit with(in) the AI reality.
II Book 6 CC in a nutshell
Book 6 CC consists of seven major parts covering the preliminary provisions (ch 1), the facts generating liability (ch 2), causation (ch 3), damage (ch 4), the consequences of liability (ch 5), order or prohibition (ch 6) and the special liability regimes (ch 7). Considering the limited scope of this article, we will only discuss some major novelties.
One important change relates to the application of extra-contractual liability in a contractual relationship. Already in 1973, the Court of Cassation held that it is in principle not possible for a contracting party to bring a claim for extra-contractual liability against another contracting party.[13] Article 6.3, § 1, first paragraph, CC now reverses this premise. It is now, in principle, possible to bring an extra-contractual liability claim against a contracting party, unless the law or the contract provides otherwise. The drafters of Book 6 primarily justify the revision of the doctrine of concurrent liability regimes on legal-technical grounds. The traditional prohibition of concurrence is often based on the idea that parties are presumed to have intended to exclude extra-contractual liability when entering into a contract.[14] However, such a presumption is at odds with art 1.12 CC,[15] which explicitly provides that a waiver of rights cannot be presumed. Article 6.3, § 1, second paragraph, CC further stipulates that if the injured party claims compensation based on extra-contractual liability for damage caused by the non-performance of a contractual obligation by their co-contractor, that co-contractor may invoke the defences arising from their contract with the injured party, the legislation on specific contracts, and the special limitation periods applicable to the contract. This does not apply to claims for compensation for damage resulting from an infringement of physical/psychological integrity or from a fault committed with the intention to cause harm.[16]
Article 6.3, § 2, CC stipulates that, unless otherwise provided by law or contract, the legal provisions on extra-contractual liability also apply between the injured party and the auxiliaries (auxiliaires) of its contracting parties. This provision constitutes the end of the so-called quasi-immunity of auxiliaries.[17] In the past, it was, in principle, not possible to hold the auxiliary of one’s contracting party directly liable. This has now changed. However, if the injured party claims compensation based on extra-contractual liability for damage caused by the non-performance of a contractual obligation by an auxiliary of its contracting party, the latter may invoke the same defences as its principal may invoke with regard to the obligations in the performance of which the auxiliary is involved. The auxiliary may also invoke the defences which they themselves may assert against their contracting party in this regard.[18]
Whereas the first chapter of Book 6 is rather general, chapter two covers facts leading to liability. Three grounds of extra-contractual liability are listed: liability for one’s own acts (arts 6.5–6.11), for the acts of another (arts 6.12–6.15) and for things and animals (arts 6.16–6.17). Liability for one’s own acts – fault-based liability – and especially its application in an AI context will be discussed in detail below. The following paragraphs will, therefore, briefly focus on the two other grounds.
Liability for the act of another is divided into four regimes. The first regime concerns the liability of persons with parental authority for damage caused by their minor children through fault or another liability-triggering event. As opposed to the situation prior to Book 6, this liability is now strict if the damage is caused by a child younger than 16 (art 6.12, first paragraph, CC). Parents can in those cases no longer escape liability by showing that they raised the child properly, exercised adequate supervision or were not subject to a duty of supervision.[19] The situation is different for minors aged 16 or older. Parents generally have less influence over the behaviour of such ‘mature’ minors.[20] For this reason, the Belgian legislator decided to give holders of parental authority an opportunity to escape liability if their child is 16 or older. They are not liable if they prove that the damage is not due to any fault on their part (art 6.12, second paragraph, CC).
The other three regimes for liability for the acts of another concern the liability of persons responsible for the supervision of others (art 6.13), the liability of the principal (art 6.14) and the liability of legal persons for their governing bodies and the members thereof (art 6.15). Article 6.13 in particular is interesting as the first paragraph introduces a new ground of liability. It stipulates that the person who, by virtue of a statutory or regulatory provision, a judicial or administrative decision or a contract, is entrusted with the general and continuous organisation and supervision of the lifestyle of other persons is liable for the damage these persons cause to third parties through their fault or another act giving rise to liability while under their supervision. These persons are not liable if they prove that the damage is not due to a fault in their supervision. Since the introduction of the Code civil in 1804, society has changed significantly. For instance, children are no longer solely under the supervision of their parents or teachers. Other persons now also exercise supervision, such as foster guardians (tuteurs officieux), foster caregivers (accueillants familiaux) or institutions to whom the minor is entrusted by judicial order of the juvenile court or by administrative decision. There are also various situations in which adults are placed under the supervision of another person or an institution (eg a person with a mental illness residing in a psychiatric institution). It is, however, always required that the supervision be exercised in a continuous way. This means that merely short-term supervision is insufficient. Hence, sports clubs, youth organisations, childminders, babysitters, grandparents or the guardian of a protected person do not fall under this legal provision. In certain cases, these individuals or entities may exercise supervision over another person, but they only do so with regard to specific aspects and for a limited period of time.[21]
Book 6 also introduces provisions dealing with strict liability of the custodian of a defective thing (art 6.16 CC) and strict liability of the custodian of an animal (art 6.17 CC). The broad application of art 6.16 CC rendered a provision specifically targeting liability of owners for collapsing buildings (cf former art 1386 OCC) redundant.[22] Some changes occurred with regard to the liability of custodians for damage caused by defective things. The custodian of a thing is the individual who has a non-subordinate power of direction and control over the thing. The owner is presumed to be the custodian, unless they prove that custody lies with another. The notion of defectiveness is of course crucial, not least in an AI context. Previously, a defect was defined as an abnormal characteristic or state of an object that makes it capable of causing harm in certain circumstances. The abnormality of a characteristic meant that the object deviates from its normal model.[23] This definition gave rise to interpretation problems.[24] Under Book 6 CC, a thing is now considered defective when, due to one of its characteristics, it does not provide the safety that one is entitled to expect under the given circumstances. This corresponds to the consumer expectation test as relied upon in the context of product liability. The legislator chose to define ‘defectiveness’ of things in the same manner to ensure a uniform interpretation. The drafters of Book 6 also concluded that case law on product liability suggests that the definition of a defect rarely gave rise to problems.[25]
As mentioned above, chapter 5 of Book 6 discusses the consequences of liability. Article 6.30 CC confirms the basic principle that the person who is liable for damage is required to provide full compensation (restitutio in integrum), taking into account the actual situation of the injured party (remedy in concreto).[26] Article 6.31 CC further contains the objectives and methods of compensation, for both pecuniary and non-pecuniary damage. Compensation for pecuniary damage is intended to place the injured party in the position they would have been in if the event giving rise to liability had not occurred. As non-pecuniary damage is, by definition, not quantifiable in money, the legislator considered it more appropriate to refer to fair and appropriate compensation.[27] Compensation for non-pecuniary damage aims to grant the injured party a fair and appropriate reparation for such harm. Compensation takes place either through restoration in kind or through the payment of damages. These forms of compensation may be awarded simultaneously if necessary to ensure full compensation. When the liable party has intentionally and for the purpose of making a profit infringed upon a personal right of the injured party or has harmed their honour or reputation, the court may award the injured party additional compensation equal to all or part of the net profit realised by the liable party (art 6.31). In line with the case law of the Court of Cassation,[28] art 6.32 CC provides that the extent of the damage needs to be determined on the date that most closely approximates the time of effective compensation.[29]
The final chapter of Book 6 is devoted to ‘special’ liability regimes. For the time being, only the Product Liability Act[30] is included. Other special liability regimes could also be added in the future (eg strict liability for fires and explosions in establishments accessible to the public).[31] It remains unclear why these provisions were not already incorporated in Book 6 from the outset. More importantly, a new (revised) Product Liability Directive (PLD) entered into force on 9 December 2024.[32] Member States must transpose it into their national laws and implement the changes by December 2026.[33] The revision aims to ensure that the rules are future-proof and adapted for cases involving any type of product, ranging from traditional products to the newest technologies such as AI.[34] The revised PLD contains several modifications. For instance, the concept of product is broadened and includes digital manufacturing files, raw materials and software.[35] The revised PLD does not, however, apply to free and open-source software developed or supplied outside the course of commercial activities.[36] The revised PLD also incorporates new circumstances to assess the product’s defectiveness (eg the effect on the product of any ability to continue to learn or acquire new features after it is placed on the market or put into service) and introduces provisions regarding presumptions of defectiveness and causation.[37] It also aligns the economic operator’s defences with the AI reality (eg the ‘later-defect’ defence[38]). The national transposition of this Directive will require an update of this chapter of Book 6 CC in Belgium.
III The constitutive elements of fault-based liability
While the previous part touched upon some provisions of Book 6, we will now focus on the articles dealing with fault-based liability and their application in an AI context. Article 6.5 CC stipulates that everyone is liable for the damage they cause to another through their fault. We will first assess the concept of fault (sec A). Afterwards, we will analyse how the new provisions deal with damage (sec B) and with the causal link between fault and damage (sec C).
A The notion of fault
Article 6.6, § 1, CC defines a fault as the violation of a legal rule that imposes or prohibits certain behaviour or of the general duty of care that applies in social conduct. As opposed to the situation before the adoption of Book 6, the so-called subjective component of a fault no longer seems required, which, at first sight, constitutes a major change (sec 1). The objective component of a fault in turn refers to the wrongful behaviour itself and is similar to the situation under the former arts 1382–1383 OCC, encompassing the violation of the general standard of care (sec 2) and of a legal rule (sec 3).[39]
1 The subjective element (?)
The subjective component under former art 1382 OCC required that the fault could be attributed to the free will of the person who committed it (‘imputability’), and that this person generally possessed the capacity to control and to assess the consequences of their conduct (‘culpability’).[40] Article 6.6 CC does not specify that the violation of the behavioural norm must also have been committed knowingly and willingly, as has often been required by the Court of Cassation.[41]
The fact that the subjective element no longer seems required under art 6.6 CC to establish a fault may bring some benefits in the context of AI, for instance with regard to imposing liability upon the AI system itself (cf legal personality and AI[42]). Some have previously argued that ‘[i]n order to be liable for their own fault, AI systems would also need to obtain a capacity for moral discernment. This, however, appears to be intrinsically human and difficult to reconcile with the technical constitution of AI’.[43] As a subjective element is no longer required to assess a potential fault, the only thing that needs to be assessed is whether the AI system has committed a wrongful act in itself. Although several challenges of course remain when addressing this question and, more generally, regarding the granting of legal personality to AI – making its implementation unlikely – at least the subjective element no longer constitutes a (potential) hurdle in Belgium. A further illustration of why the absence of a subjective element in the assessment of fault may be useful relates to the implementation of a potential supranational AI liability regime into Belgian law, such as the earlier proposed AI Liability Directive (AILD).[44] Research showed that the AILD proposal was problematic for several reasons.[45] For instance, it was unclear whether the notion of fault as referred to in the AILD proposal required a subjective element to be present and/or allowed national law to require one. The minimum harmonisation provision of art 1.4 did not answer this question.[46] A provision such as the one included in the proposed AILD would thus result in a potential misfit with the notion of fault as it was previously understood under the former arts 1382–1383 OCC, which included a subjective element. The fact that a subjective element is no longer required under the current CC may more easily enable the implementation of (certain provisions of) supranational AI-related liability frameworks.
At the same time, however, a more nuanced approach regarding the alleged absence of a subjective element is needed. According to some scholars, the subjective requirement – even when not explicitly mentioned in the CC – will continue to apply. Kruithof, for instance, argues that the transition from the (former) view, in which fault is defined as an attributable wrongful act – with an objective component, namely that the act is unlawful, and a subjective component, namely the blameworthiness of the person – to a view in which fault is the wrongful act itself, for which there is liability unless the act cannot be attributed to the person (especially in cases of force majeure or certain types of mistake, cf arts 6.7 and 6.8 CC), comes down to the same thing.[47]
The grounds for exemption of fault under the former art 1382 OCC have indeed been incorporated into the list of grounds for exemption of fault-based liability, for example if the defendant is in a situation of force majeure (art 6.7) or acts on the basis of a legal order or an order from the government (art 6.8, second paragraph, 4°). The application of force majeure in particular remains interesting where damage is related to AI. Article 6.7 stipulates that there is force majeure when it is impossible to comply with the applicable rule of conduct. The person who is unable to comply with the applicable rule of conduct is not liable under art 6.5 unless this impossibility is due to their own fault. When assessing this impossibility, consideration is given to the unforeseeable or unavoidable nature of the event that prevents compliance with the rule. However, these are only two of the criteria that a court may consider when determining the existence of impossibility. The legislator in this regard refers to the previously reformed law of obligations (cf Book 5 CC)[48] in which it was already established that unforeseeability and unavoidability are no longer independent requirements for force majeure.[49] They are merely indications that the impossibility cannot be attributed to the liable party. In the explanatory notes to Book 5 CC, it is expressly stated that these criteria apply neither cumulatively nor alternatively, so that failure to fulfil them does not prevent the conclusion that the impossibility to comply with a behavioural duty is not attributable.[50]
The fact that both requirements are not cumulative – and are not explicitly mentioned as such – implies that reliance on force majeure may become more successful in an AI context. Considering the opacity of AI systems – and hence the potential unforeseeability of the damage – it is conceivable that defendants may be more successful in relying on this defence. Take the example of damage involving autonomous vehicles. A dangerous movement of an autonomous vehicle may be unforeseeable for the driver who only supervises the vehicle, especially when that movement also constitutes a violation of the Highway Code. Compliance with traffic rules will be included in the encoding and programming of the operating system used in autonomous vehicles (cf Lessig’s ‘code is law’[51]). The autonomous vehicle ignoring traffic rules could, therefore, constitute an event that is external to the will of the driver and which the latter cannot foresee, similar to a sudden loss of consciousness or a heart attack. As such, it is conceivable that courts will accept a defence of force majeure if an autonomous vehicle commits a violation of the Highway Code.[52] Another example relates to the situation of providers of (high-risk) AI systems. A provider is defined in the AI Act as ‘a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge’.[53] If the operation of such an AI system results in damage, the provider could claim that this was unforeseeable (eg because of the black-box nature of the system), potentially resulting in force majeure.
2 The objective element: violation of the general standard of care
Article 6.6, § 1, CC defines a fault as the violation of a legal rule that imposes or prohibits certain behaviour or of the general duty of care (norme générale de prudence) that applies in social conduct. Article 6.6, § 2, first paragraph, CC stipulates that the general standard of care requires behaviour that corresponds to that of a cautious and reasonable person placed in the same circumstances. This provision should be interpreted as a standard serving as the yardstick for judging whether or not conduct is negligent rather than as imposing a pre-existing obligation to behave as a fictitious person would have done.[54] It has already been argued that specific rules of conduct will probably not be able to precisely regulate all possible human conduct surrounding the wide diversity of potential AI systems. A general and open category of negligence will thus remain necessary to achieve the comprehensive scope and ‘catch all’ function of liability for fault.[55]
Article 6.6, §§ 1 and 2, CC does not differ much from the situation before the adoption of Book 6.[56] The conclusions of previous research regarding this ground of liability and its application to AI-related damage thus remain largely unaltered.[57] To assess whether behaviour corresponds to that of a cautious and reasonable person placed in the same circumstances, external factors (eg time or place) can be considered.[58] Previous research showed that these ‘external circumstances’ will likely become the most important and challenging aspects of an evaluation of the standard of care when the defendant is an AI system’s producer or ‘backend operator’.[59] Backend operators are the persons who continuously define the features of an AI system and who provide essential and ongoing backend support to its functioning within the specific usage contexts decided by individual frontend operators (eg the user of an autonomous vehicle).[60] The significance of backend operations and the specific external conditions under which AI systems function stems from their factual complexity and interconnected nature. These systems comprise algorithmic code, datasets, network connectivity for monitoring, updates and cybersecurity, as well as links to other computer systems or physical devices, many of which may be supplied by third parties. AI may be embedded in hardware (eg robotic applications), integrated into broader IoT systems (eg smart building management) or exist purely as digital systems (eg advisory chatbots). Even digital-only systems are not necessarily less complex or less connected, as they still operate within network infrastructures, interact with users’ digital and physical environments and often rely on third-party data and services.[61]
Consequently, the highly complex, interconnected and opaque context in which AI systems operate not only complicates the task of establishing causation but must also be carefully considered when assessing potential breaches of the general duty of care. There is thus a tension between the inherently human standard of the general duty of care and the complex, technical and opaque nature of AI systems, which are also capable of autonomous action and development. These systems create a challenging yet unavoidable technological framework within which the actions of backend operators – and their compliance with the general standard of care – must be assessed and judged.[62] Although this challenge has already been reported and scholars have proposed solutions that can be relied upon to make that assessment,[63] neither the new provisions of the CC nor its preparatory works make any explicit reference to it.
As opposed to external circumstances, a court cannot in principle consider a defendant’s internal circumstances. These are varying personal/internal traits such as character, emotionality, intelligence and upbringing. However, criteria such as the defendant’s profession, experience and education may be taken into account.[64] This reflects the fact that case law increasingly considers the (non-)professional capacity of the defendant, their factual knowledge, their age or the presence of physical or mental impairments in either the defendant or the injured party.[65] These latter circumstances are considered fairly objective and can be important factors in determining the user’s/deployer’s negligence in cases where AI systems cause damage.[66] Some scholars have already concluded that the application of this ground for users/deployers of AI systems may not be as problematic as the situation for backend operators discussed in the previous paragraph.[67] Several hypotheses can be made in this regard.[68]
Think, for instance, of a situation in which the choice to use an AI system under certain external circumstances constitutes a breach of the general duty of care – for example, using an autonomous motor vehicle in extreme weather conditions for which the vehicle may not be sufficiently trained.[69] In this regard, it is also worth referring to a recent case in the United States in which lawyers were fined after submitting fake citations generated by ChatGPT in a court filing. Judge Castel held that ‘[t]echnological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings’. The involved lawyers ‘abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question’. Arguably, lawyers who file papers ‘without taking the necessary care in their preparation’ (eg because they blindly rely on AI) could fall within an ‘abuse of the judicial system’.[70] If there is no potential breach of the general duty of care regarding the chosen context of use, one can also examine to what extent the user still has an effective opportunity for control or intervention in the specific circumstances in which the damage occurred. Think of a user of an autonomous vehicle who acts negligently by not intervening (in time) to avoid an accident or, conversely, who acts negligently by intervening when it is not necessary.[71]
More interestingly, art 6.6 CC now explicitly lists some non-exhaustive and non-cumulative criteria that can be used to assess whether someone did not act as a reasonable and prudent person placed in the same circumstances. These criteria serve as guidelines granting courts broad discretion. Most of these criteria were already used in practice but are now codified. The preparatory works accompanying Book 6 explicitly mention that these criteria will allow courts to resolve the complex problems that may result from increasing technological developments and mass consumption.[72] In light of the assessment in the previous paragraph, however, this starting point needs to be nuanced with regard to backend operators, as it remains challenging to prove any negligence on their part given the complex context in which they operate. Moreover, courts and claimants may find it hard to establish whether sufficient precautions were taken, may lack information to correctly assess the level of care taken or may not have full details of certain measures of care along the AI value chain.[73] That being said, the following parts will briefly focus on some of the criteria that are important in assessing potential liability in an AI context: reasonable foreseeability of the damage (sec a), the proportionality assessment of costs/efforts and the prevention of damage (sec b), the state of technology and scientific knowledge (sec c), and standards of good craftsmanship and professional practices (sec d).
a Reasonable foreseeability of the damage
A violation of the general duty of care requires that it was reasonably foreseeable for the defendant that their conduct could result in some kind of damage.[74] This criterion assesses whether the person responsible for the harmful act could have foreseen the possibility of damage at the time of the act. It is not necessary for them to have foreseen the exact damage, the specific victims or the precise extent of the harm.[75] The requirement of reasonable foreseeability may be difficult to reconcile with the highly autonomous and opaque (non-foreseeable) nature of many AI systems.[76] This conclusion is in line with the analysis in the previous section, according to which a defendant may more easily invoke force majeure when damage is caused by AI, precisely because damage caused by AI systems may be unforeseeable. Moreover, the mere fact that damage eventually occurs does not mean that a person is automatically held liable for the act leading to it. The preparatory works of the CC stipulate that a person only commits a fault by causing damage if they could have foreseen that damage would occur and if they failed to take the necessary measures to prevent that damage.[77]
In this regard, art 9 of the AI Act will be important as it deals with risk management for high-risk AI systems. The risk-management system refers to a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It includes, among others, (a) the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose; and (b) the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse.[78] As such, when a defendant has acted after developing, supplying or operating an AI system as a normal, foreseeing and prudent person would have done in those circumstances, for example by continuously taking precautionary measures, they have complied with the general duty of care.[79] However, the question arises whether the defendant may be expected to do more than what is required under the AI Act to prevent potential damage. We will examine this question in more detail in the following section.
At the same time, a different perspective seems possible as well. It has been argued that the requirement of reasonable foreseeability may already be considered fulfilled when a person develops, supplies or operates an autonomous system of which they should reasonably know that it has the capability to later develop behaviour which can cause damage (such as a chatbot, considering recent incidents[80]). The autonomous and opaque nature of AI may imply that it can by definition not be excluded that damage may ultimately occur.[81] This conclusion not only illustrates the potentially conflicting interpretations that follow from applying the new provisions of the CC to AI-related damage but also the important role that courts will eventually play in concretising these concepts in an AI context and deciding on their precise application.
b Proportionality assessment of costs/efforts and prevention of damage
The second criterion refers to the costs and efforts that should have been incurred to prevent the damage from occurring. An explicit analysis of these costs in light of the benefits they could provide is not very common in Belgian law, but it seems useful to take them into account when deciding whether or not a fault has been committed. If it turns out that the costs that would have been incurred to prevent the damage were excessively high in light of the frequency and magnitude of the foreseeable risk, one can conclude that there is no fault, without prejudice to the existence of strict liability.[82]
In assessing the measures necessary to avoid the damage, not only their financial cost is relevant, but also the efforts and constraints they entail, as well as their potential social advantages and disadvantages. The law thus refers to a proportionality between the preventive measures and the possible disadvantages for others, but by no means aligns with a purely financial analysis of the standard of care.[83] According to Kruithof, a court can only assess whether a sufficient level of precaution was taken by weighing the advantages and disadvantages that certain precautionary measures entail.[84] There would be no other way to judge whether someone has taken reasonable precautions. Precautionary measures are, by their very nature, aimed at limiting a risk. In this context, reasonableness can mean nothing other than a balance between the advantages and disadvantages that such measures entail.[85] In other (more economic) words, the second criterion implies that, when determining fault, the court must take into account the situation that is most efficient from the perspective of minimising social costs. This is the case when someone adopts a level of precaution at which the marginal costs of precaution equal the marginal benefits resulting from the reduction in the risk of accidents. The social cost is then minimised, resulting in an efficient solution. According to Deffains, a fault occurs when someone takes less than these optimal precautionary measures.[86]
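This cost-benefit logic can be captured in a stylised model of the efficient level of precaution. The following sketch is our own illustration of the economic reasoning described above; the notation (x, c, p, L) is hypothetical and does not appear in Book 6 or its preparatory works.

```latex
% Stylised model of the efficient level of precaution (illustration only).
% x    : level of precaution taken by the potential injurer
% c(x) : cost of precaution, increasing in x          (c'(x) > 0)
% p(x) : probability of an accident, decreasing in x  (p'(x) < 0)
% L    : magnitude of the harm if the accident occurs
\[
\min_{x}\; SC(x) = c(x) + p(x)\,L
\]
% First-order condition: the efficient level x* equates the marginal cost
% of precaution with the marginal reduction in expected harm:
\[
c'(x^{*}) = -\,p'(x^{*})\,L
\]
% On this economic reading, taking less precaution than x* amounts to a
% fault, whereas taking x* (or more) does not.
```

On this reading, Deffains’s ‘optimal precautionary measures’ correspond to x*: the point at which spending one more euro on precaution no longer saves at least one euro in expected accident costs.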
This criterion can be useful in an AI context. The question arises as to which costs and efforts providers of AI systems should have incurred to prevent the damage from occurring, and hence not act negligently. Providers of high-risk AI systems, for instance, will have to make sure that their systems comply with the requirements in arts 8–15 AI Act (eg on human oversight, data governance/quality, etc). They also have to put in place a quality management system (art 17 AI Act), keep the required documents (art 18 AI Act), maintain the logs automatically generated by their high-risk AI systems (art 19 AI Act) and take the necessary corrective actions (art 20 AI Act). Taking at least these measures corresponds to the costs and efforts that may be expected from a reasonable and prudent provider to avoid the occurrence of the damage. The question arises whether providers of high-risk AI systems or general-purpose AI models (cf art 55 AI Act) may be expected to do more than what is required under the AI Act to prevent potential damage. In Belgium, compliance with standards that prescribe certain behaviour does not per se relieve a person from the general standard of care that applies to everyone.[87] A contractor allowed to perform certain demolition works under a specific permit, for instance, was not relieved from the obligation to take additional measures beyond the conditions listed in the permit in order to act with due care.[88] The Court of Cassation also ruled that railway companies, when constructing unguarded level crossings, must not only comply with their regulatory obligations, but also with their duty of care.[89]
However, the fact that more than mere compliance with the provisions of the AI Act may be required to prevent damage can be challenging. What always matters is whether a normal and prudent person would have incurred additional costs and made further efforts to avoid the occurrence of the damage caused by or related to AI. Yet, the AI Act already imposes quite a regulatory burden on companies (cf the obligations for providers of high-risk AI systems), especially in combination with the other EU digital law frameworks such as the Data Governance Act and the Data Act.[90] Let us imagine a company using an AI-driven medical diagnostic tool. The tool analyses medical images to detect diseases such as cancer. To minimise the risk of misdiagnosis, the company must ensure the AI system is thoroughly trained and tested. However, it could be very expensive to incorporate every possible dataset or use the most advanced safeguards (eg multiple layers of verification from different AI models or human oversight at every step). In this case, if the AI system overlooks a diagnosis, resulting in damage, the behaviour of the company may not necessarily be considered careless or negligent if the company complied with the provisions of the AI Act (and other sectoral frameworks) as well as with industry standards for training, testing and monitoring the system. If adopting more advanced, costlier safeguards (eg additional layers of AI models or having a human review every diagnosis) would have significantly raised costs without a proportional increase in the likelihood of preventing harm, the company may not be considered to have committed a fault.
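Applying the proportionality logic of the previous section to this example, a deliberately simplified calculation can illustrate the point. All figures below are hypothetical and chosen by us purely for illustration.

```latex
% Hypothetical figures for the AI diagnostic tool example (illustration only).
% B        : annual cost of the extra safeguard (human review of every scan)
% \Delta p : annual reduction in the probability of a missed diagnosis
% L        : expected harm if a diagnosis is missed
\[
B = \text{EUR } 2{,}000{,}000, \qquad
\Delta p = 0.0005, \qquad
L = \text{EUR } 500{,}000
\]
\[
\Delta p \cdot L = 0.0005 \times 500{,}000 = \text{EUR } 250 \;\ll\; B
\]
% The expected annual benefit of the safeguard (EUR 250) is dwarfed by its
% annual cost (EUR 2,000,000); on the efficiency reading of the second
% criterion, omitting this particular safeguard would not in itself
% constitute a fault.
```

With different figures – say, an inexpensive software check that markedly reduces the error rate – the same calculation would point towards fault if the safeguard were omitted.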
c State of technology and scientific knowledge
Similar to the development risk under product liability,[91] a third criterion refers to the state of technology and scientific knowledge that was available at the time of the event causing the damage. A reasonable (professional) actor involved in the AI value chain knows their profession, is aware of developments in the field and makes proper use of the available technical tools.[92] The assessment of the state of knowledge must be made in abstracto and thus not be based on the specific capabilities of the person who caused the damage (cf subjective element).[93] Several elements can be considered when making the assessment. The scientific knowledge, for instance, must be reasonably accessible. The degree of certainty of the available knowledge as well as the controversies regarding the risks associated with a harmful activity may also be taken into account to determine whether or not a fault exists, ie whether another technology should have been used.[94]
This third criterion refers to the best available techniques that do not entail unreasonable costs. This indirectly refers to the commonly used expression ‘best available technology’. However, the requirement to use the best available technology must be weighed against the costs associated with it. A professional cannot be considered to have committed a fault when the investments associated with the use of the best available technology are disproportionate to the benefits this would yield.[95] For example, the absence of a specific medical device does not in itself constitute a fault if such equipment was not yet commonly available in hospitals at the time of the treatment.[96] Still in a medical context, a hospital was held liable for the burns caused to a young patient because it had provided defective equipment – specifically, an unsuitable and outdated heating pad that could no longer be used in all positions and lacked a proper thermostat.[97]
The criterion of ‘best available technology’ is of course dependent upon time and context. The assessment of this criterion must be based on the scientific and technological knowledge available at that moment, not on later developments. This can be challenging in an AI context considering the pace at which the technology evolves (eg in the healthcare sector[98]). The AI Act refers to the state of the art on several occasions. High-risk AI systems, for instance, need to comply with the applicable requirements, taking into account their intended purpose as well as the generally acknowledged state of the art on AI and AI-related technologies.[99] Providers of general-purpose AI models with systemic risks also need to perform model evaluations in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks.[100] Article 25.4 of the AI Act, dealing with the responsibilities along the AI value chain, will become important in assessing this criterion. Under that provision, a ‘provider of a high-risk AI system and the third party that supplies an AI system, tools, services, components, or processes that are used or integrated in a high-risk AI system shall, by written agreement, specify the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider of the high-risk AI system to fully comply with the obligations set out in this Regulation’ (emphasis added). The AI Office may develop and recommend voluntary model terms for those contracts. The model clauses will thus help to determine and further fine-tune obligations in line with the state of the art and, by doing so, also play a role when assessing fault-based liability.[101]
In light of the autonomy and self-learning capacities of AI, the question arises whether the fact that the defendant – eg an operator or provider of a high-risk AI system – retains control over the ‘working’ of an AI system plays a role in the assessment of negligent behaviour. This question is relevant as Belgian scholars have already held that a lack of maintenance of an installation may constitute a fault.[102] The so-called development risk defence – as included in the revised Product Liability Directive – may offer guidance here. An economic operator will not be liable for damage caused by a defective product if that economic operator proves that the objective state of scientific and technical knowledge at the time the product was placed on the market or put into service or ‘during the period in which the product was within the manufacturer’s control was not such that the defectiveness could be discovered’ (emphasis added).[103] The manufacturer’s ‘control’ is defined in art 4(5) of the revised PLD and refers to two situations. On the one hand, it means that the manufacturer of a product performs or, with regard to actions of a third party, authorises or consents to (i) the integration, inter-connection or supply of a component, including software updates or upgrades, or (ii) the modification of the product, including substantial modifications. On the other hand, it can refer to the fact that the manufacturer of a product has the ‘ability to supply software updates or upgrades, themselves or via a third party’. In other words, a person may still commit a fault even after the AI system has been put on the market if they retain some form of control afterwards, for instance to provide updates/upgrades.
d Standards of good craftsmanship and professional practices
The fourth criterion is closely related to the previous one and refers to good craftsmanship and sound professional practices to determine the code of conduct applicable to a professional. The preparatory works of Book 6 provide more information and guidance. Good craftsmanship and sound professional practices include the technical standards and customs that apply within a specific professional sector, as well as ethical provisions. This means that a code of conduct can be derived from these requirements, which every careful and reasonable professional must observe in their relations with clients and third parties.[104]
These references to standards and ethics are particularly relevant in the context of AI. One can think of the Ethics Guidelines for Trustworthy AI issued by the High-Level Expert Group on AI (AI HLEG) in 2019.[105] Trustworthy AI systems must comply with the law, act in accordance with ethical values and be robust. The non-binding guidelines contain seven ethical requirements that AI systems need to meet in order to be trustworthy (eg human oversight, transparency and accountability). How to comply with these Ethics Guidelines is not always clear and/or straightforward. An Assessment List for Trustworthy AI (ALTAI) was therefore added to the Ethics Guidelines. This list contains a number of questions to assess compliance with a given ethical requirement.[106] Although these ethical requirements are soft law, they are useful when it comes to regulating AI, especially as they effectively acquire binding force through the lens of assessing careful behaviour under this fourth criterion. Likewise, standards will play an important role in concretising the AI Act.[107] Although they are contested (eg due to the lack of democratic oversight over their adoption) and often portrayed as being non-binding,[108] this can be nuanced in light of the provisions of Book 6. Because standards are explicitly referred to when assessing whether someone acted negligently, they (indirectly) acquire a binding nature. Finally, the recent General-Purpose AI (GPAI) Code of Practice issued by the AI Office – though voluntary – may have an indirect binding effect as it will be relied upon to assess the behaviour of providers of GPAI models.[109]
3 The objective element: violation of legal rules prescribing or prohibiting specific behaviour
In addition to the lack of compliance with the general standard of care, a second type of wrong is the violation of a specific legal rule of conduct. In accordance with the case law of the Court of Cassation,[110] this primarily refers to the violation of an international legal norm with direct effect or a domestic legal norm that prescribes a specific behaviour (an act or an omission). When a legal rule imposes a specific behaviour, its violation constitutes a fault. In such a case, there is no need to rely on the criterion of the careful and reasonable person, nor to question the appropriateness or effectiveness of the legal rule in question.[111] Under Belgian law, violating a legal rule requiring a specific behaviour without a ground of exemption is automatically considered to be wrongful. This is the so-called ‘unitary theory’, which equates illegality and (an objective) fault.[112] It is not required that the person was aware that their conduct was illegal. Ignorance of the law only exempts an individual from liability if it qualifies as an insurmountable error (art 6.8, 1 CC), which requires that any reasonable and prudent person would not have realised the illegality in those circumstances.[113]
The unity of a fault and the violation of a legal rule is thus confirmed in the CC, but with an important caveat. The applicable legal rule must first be analysed to determine the exact command/order it entails and to establish its content and scope. Only the person who was obliged to comply with a norm that imposes specific behaviour can be held responsible for its violation. On the other hand, some legal rules do not prescribe a specific behaviour but merely refer, in one form or another, to the duty of care that applies in social conduct. In such cases, the criteria set out in the previous part (eg reasonable foreseeability, etc) must be relied upon. Thus, not every violation of the law automatically constitutes a fault. For the same reason, the view that the obligation to comply with the law constitutes an obligation of result is rejected. There is always a duty to comply with a binding norm. However, the mandatory nature of an obligation imposed by a legally binding norm should not be confused with the precision of the command/order it contains.[114]
This exception is relevant in light of the AI Act, as it will first have to be established whether a specific provision of the AI Act contains an order/command that does (not) require an additional assessment of a duty of care. The AI Act imposes several obligations on different actors along the AI value chain. Article 26 of the AI Act, for instance, stipulates that deployers of high-risk AI systems need to take ‘appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems’ (emphasis added). To assess whether appropriate measures were taken, it will have to be evaluated which measures a reasonable and prudent deployer placed in the same circumstances would have taken. In other words, this provision refers to a standard serving as the yardstick for judging whether conduct is negligent or not. More generally, and also following from the analysis of the other criteria, there is still some legal uncertainty regarding the question which ‘level of care’ providers of high-risk AI systems are expected to observe. It is in this regard worth considering the drafting of guidelines on the duties of care of the different actors during the AI lifecycle/supply chain, for instance via an update of the Principles of European Tort Law in a digital environment. Inspiration may in this regard be found in the Principles for AI in Contracting: a draft set of proposed legal principles to guide the application of existing law and the development of new law in relation to automated contracting. They may serve as a source of inspiration for legislators and courts, as well as for those involved in preparing industry standards, codes of conduct or terms and conditions of use, and for contracting parties themselves.[115]
B The notion of damage
Damage was previously defined as a sufficiently certain injury to a legitimate personal interest.[116] This was seen as a rather broad concept of damage. Some have therefore argued in the past that there was no urgent need to amend or adopt new rules in Belgium, for example regarding damage to data. As a consequence, ‘[w]hen an AI system such as an automated curator of a government database or an advanced fintech platform would delete, contaminate, modify, encrypt or leak data to which one or more persons have a factual interest, these persons may claim compensation for such damage without being required to demonstrate a property or personal right to the aforementioned data’.[117] George and others conclude in this regard that the origin of the damage may sometimes result from a failure in the programming of the algorithms, sometimes from the learning process itself, or even from inadequate use.[118]
Pursuant to art 6.24 CC, damage now consists of the economic and non-economic consequences of the infringement of a legally protected personal interest. Damage is thus no longer the mere infringement of an interest itself, but rather the consequence thereof. This means that damage is now of a dual nature.[119] On the one hand, damage always has an infringed interest as its basis. Without an infringed interest, damage cannot exist. However, infringed interests can exist without necessarily resulting in damage.[120] On the other hand, damage covers the consequences of the infringement of a personal interest. With this new definition of damage, the legislator introduces considerable uncertainty. It is unclear how ‘interest’ and ‘consequences’ should be understood. Although the explanatory memorandum gives various examples of interests, including physical or psychological integrity, health or an owned asset,[121] a precise definition of both concepts is not provided.
Yet, having clarity on the concept and scope of recoverable damage in national law is important. Various forms of damage will only be eligible for compensation through national liability law. The revised PLD, for instance, restricts recoverable damage to: (i) death or personal injury, including medically recognised damage to psychological health; (ii) damage to, or destruction of, any property except the defective product itself, a product damaged by a defective component that is integrated into, or inter-connected with, that product by the manufacturer of that product or within that manufacturer’s control, or property used exclusively for professional purposes; and (iii) the destruction or corruption of data that are not used for professional purposes.[122] As such, it does not allow compensation for, among other things, infringements of fundamental rights, ‘for example if someone failed a job interview because of AI recruitment software resulting in discrimination’.[123] The AI Act aims to prevent such infringements from occurring, eg through requirements on data governance and quality (art 10). Where such damage ultimately occurs, the European Commission indicates that ‘people can turn to national liability rules for compensation, and the AI Liability Directive could assist people in such claims’.[124] That is why, at least according to some, the proposed AILD was useful, as it also potentially covered fundamental rights and primary financial losses.[125] The AILD proposal, however, has since been withdrawn. Victims will thus need to turn to national liability rules to recover these and other types of damage not covered by the revised PLD. This shows the need for clear rules on the concept of damage in national laws.
An interesting question in the context of AI relates to the infringement of a subjective right. According to many Belgian scholars, the infringement of a subjective right constitutes a fault.[126] There is, however, no consensus on the way in which the infringement of a right constitutes a fault. According to Bocken, such an infringement is a specific form of violation of a particular behavioural norm and thus a fault. Subjective rights grant their holders certain entitlements that others have to respect. Others may not prevent the right holder from exercising those rights normally. To do so would go against the distribution of competences established by objective law. Whoever infringes another’s right therefore violates a norm prohibiting something and, as a consequence, acts unlawfully.[127] Various scholars endorsed this perspective.[128] The drafters of Book 6 shared this view as well and, therefore, saw no need to explicitly include the infringement of a subjective right as a separate category of fault.[129]
At the same time, however, the drafters wrote that the infringement of an intellectual property right or of a personality right are examples of infringed interests at the basis of damage.[130] That means that the legislator did not qualify these infringements as faults. This creates uncertainty regarding the compensability of infringed rights under Belgian liability law. The ambiguous status of infringed rights also has significant practical implications. If the infringement of a right indeed constitutes a fault, a court must, in principle, still determine whether an interest was infringed as a result of that fault; the existence of damage remains entirely open. If, on the other hand, the infringement of a right is qualified as a form of damage, it is at the basis of economic or non-economic consequences, and a court only needs to determine the consequences of the infringed interest, which must of course still be causally linked to a fault. Let us take the example of the right to one’s voice, which is a personality right.[131] Suppose someone creates a deepfake video in which one’s voice is imitated by using AI. If one claims compensation for this, it is important to know exactly what must be proven. Must damage be demonstrated as a result of the infringement of this right? Or must a fault be proven that caused the misuse of the voice? It seems most logical to us, in line with the dominant doctrine, to consider this infringement of a right as a fault. A court will then have to determine what specific damage the injured party has suffered as a result of the misuse of their voice. The same question arises with regard to the qualification of other potential infringements of fundamental rights resulting from the use or development of AI.[132]
C The notion of causal link
A third element of a fault-based liability claim is the causal link between the fault and the damage. Article 6.18 CC contains the basic rule for establishing causality between the fault and the incurred damage. While the starting point is still the conditio sine qua non test and the equivalence of conditions (sec 1), some exceptions have been introduced (sec 2).
1 The basic rule: conditio sine qua non and equivalence of conditions
Article 6.18, § 1, first paragraph CC stipulates that a fact leading to liability is considered the cause of the damage if it is a necessary condition for that damage. A fact is a necessary condition for the damage if, in the specific circumstances existing at the time of the incident, the damage would not have occurred in the way it did without that fact. This is the conditio sine qua non or ‘but-for’ test. It is the legal version of the counterfactual analysis, one of the methods used in logic and the philosophy of science to identify factual causal links between individual events. Causation is established when the loss incurred by the plaintiff would not have occurred in the way it did without the defendant’s wrongful act.[133] Once it has been established that the fact leading to liability is a necessary condition for the damage, the inquiry into causality under Belgian law is, in principle, concluded (cf factual causality). Unlike most other countries, Belgium does not, in principle, apply a second evaluation of the causal link based on legal criteria (cf legal causality). All causes are considered equal and lead to liability in equal measure. This is known as the theory of the equivalence of conditions.[134] The court is thus required to accept the causal link between a fact and the incurred loss once it has been established that the invoked fact was necessary, in the given circumstances, for the loss to occur. This theory can be far-reaching in an AI context. An example is a driver of an autonomous vehicle who does not accept a software update, after which the vehicle causes an accident resulting in the release of toxic substances into a river.[135]
When applying the conditio sine qua non test, a court needs to hypothetically reconstruct the event leading to the loss while leaving out the defendant’s wrongful act. If the loss remains the same, the defendant’s wrongful act was not a necessary condition for it to occur. This is referred to as the théorie de l’alternative légitime.[136] In the case of fault-based liability, it must thus be determined what the defendant should have done in order to act lawfully. In other words, the court must identify a lawful alternative to the defendant’s fault. If the damage would still have occurred in that hypothetical situation, there is no causal link. When applying this theory to AI-related damage, a conceptual problem may arise. Suppose that an AI system causes damage, for instance a chatbot that starts hallucinating and tells the user that they have killed their partner.[137] This can clearly affect someone’s reputation and thereby cause damage. Yet, in an AI context it may be challenging, on the one hand, to identify the specific fault of the specific defendant, due to the very complex, interconnected and opaque factual context in which particular AI systems operate[138] as well as the involvement of many actors in the AI value chain (cf the ‘many hands’ problem). AI systems present a challenging but unavoidable technological lens through which the conduct of these persons must be assessed and evaluated.[139] On the other hand, it also remains unclear whether the damage following a hallucination would not have occurred anyway, even if the defendant had not committed a fault. AI may possess capabilities to act and develop itself autonomously, and may thus cause damage without any initial fault of the defendant.[140] Obviously, the fact that a chatbot starts hallucinating is a necessary condition for the (reputational) damage to occur. Yet, under the theory of the lawful alternative, it may be impossible to establish with certainty that the hallucination, and the consequential damage, would not also have occurred in the absence of the defendant’s fault; in that case, no causal link between the defendant’s behaviour and the damage can be established.
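The counterfactual structure of this reasoning can be made explicit in a short sketch. The following Python fragment is purely illustrative and not a legal tool: the toy ‘world model’, in which a hallucination may arise autonomously regardless of any fault, is our own assumption, chosen only to show why AI autonomy can frustrate the théorie de l’alternative légitime.

```python
# Purely illustrative sketch of the conditio sine qua non ('but-for') test,
# framed as a counterfactual comparison. The 'world model' below is a
# hypothetical assumption, not Belgian law or any real system.

def damage_occurs(defendant_at_fault: bool, hallucinates_autonomously: bool) -> bool:
    # Toy world model: reputational damage follows from a hallucination, which
    # may be triggered by the defendant's fault OR arise from the system itself.
    return defendant_at_fault or hallucinates_autonomously

def but_for_causation(defendant_at_fault: bool, hallucinates_autonomously: bool) -> bool:
    actual = damage_occurs(defendant_at_fault, hallucinates_autonomously)
    # Lawful alternative: identical circumstances, but the fault is removed.
    counterfactual = damage_occurs(False, hallucinates_autonomously)
    # Causation only if the damage would NOT have occurred without the fault.
    return actual and not counterfactual

# If the system would have hallucinated even absent a fault, the fault is not
# a necessary condition and no causal link is established:
print(but_for_causation(True, True))   # False: damage occurs either way
print(but_for_causation(True, False))  # True: the fault was necessary
```

In the first call, the lawful alternative leaves the damage intact, which is precisely the evidentiary impasse sketched above for hallucinating chatbots.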
2 Exceptions to the basic rule
The provisions on causality in the CC, however, introduce two important exceptions to the principles discussed in the previous paragraphs: an exception to the conditio sine qua non test (ie a correction to factual causality) and an exception to the theory of the equivalence of conditions (ie a correction to legal causality).
As to the first exception, art 6.18, § 1, second paragraph, CC stipulates that if an act leading to liability is not a necessary condition for the damage solely because one or more other concurrent acts, individually or together, are sufficient conditions for the damage, it is nevertheless still considered a cause. More important in the context of AI-related damage is the second exception. To correct the sometimes unjust results following from the equivalence theory,[141] art 6.18, § 2 CC stipulates that there is no liability if the factual causal link between the fact leading to liability and the suffered damage is so remote that it would be manifestly unreasonable to attribute the damage to the person being held liable. In making this assessment, particular consideration is given to two criteria: the unlikely nature of the damage in light of the normal consequences of the act leading to liability, and the fact that this act did not meaningfully contribute to the occurrence of the damage. These criteria are not exhaustive; a court can also consider other elements to determine whether it would be manifestly unreasonable to attribute the damage to the defendant.[142] The term ‘manifestly’ indicates that this is a marginal test, whereby the court can reach this conclusion only in exceptional cases.[143]
Unfortunately, the legislator did not specify how the two indicated criteria should be interpreted. This entails a risk of significant interpretation issues and legal uncertainty.[144] During the hearings of the Justice Committee of the Belgian Federal Parliament, reference was made to the Poncin judgment as an example of the first criterion.[145] In that case, a man suffered a fatal heart attack after witnessing a traffic accident in which his car – but not he himself – was involved. This damage can be considered unlikely in light of the normal consequences of a traffic accident in which one is not physically involved. An example of the second criterion, namely the fact that an act did not meaningfully contribute to the occurrence of the damage, is the situation in which the owner of a car left the keys in the ignition, the car was subsequently stolen, and the thief then caused an accident with it. Based on the criterion of meaningful contribution, only the thief, and not the owner, will be held liable for the subsequent accident, as only the thief’s fault contributed to the damage in a meaningful way.[146]
These exceptions are important in the context of new technologies such as AI.[147] As mentioned above, there is a risk that individuals will be held liable in seemingly unjust ways (eg the driver of the autonomous vehicle failing to accept a software update). It is, therefore, desirable that the court, based on the new provisions in the CC, can sever this causal link. The ultimate question remains in which situations damage is so remote that it would be manifestly unreasonable to impose liability. Although this will ultimately be a court-based decision, one can think of different situations of AI-related damage so remote that imposing liability would be manifestly unreasonable. For instance, imagine a junior analyst at a financial firm who inserted slightly incorrect economic indicators into a machine-learning model used for high-frequency trading. Months later, after numerous updates to both the data and the model by other teams (eg developers, computer scientists, etc), the algorithm executes a series of automated transactions that contribute to a sudden market disruption, causing major (financial) losses to third parties. In this scenario, it may be manifestly unreasonable to hold the junior analyst liable, as their contribution was not meaningful to the actual occurrence of the damage, nor was such damage a likely consequence of their original act.[148]
Yet, these adjustments to the theory of causality do not change the fact that it can remain difficult for victims to pinpoint the source of a problem or harm in an AI context. It is already difficult to establish that a hardware defect caused damage to a person; it is even more difficult to establish that the underlying cause was a flawed algorithm. It becomes more complicated still if the algorithm suspected of causing harm was developed or modified by the AI system itself, using machine learning and/or deep learning techniques, while being fuelled by external data that it has collected since the start of its operation.[149] In this regard, the CC contains two provisions that are relevant in an AI context. Whereas art 6.22 CC deals with uncertainty regarding the causal character of the fault, art 6.23 covers situations of uncertainty about the identity of the liable party and alternative causes.[150]
An example can illustrate the situation referred to in art 6.22 CC. Imagine a healthcare provider using an AI-driven diagnostic tool to prioritise patients for emergency treatment. The tool was initially trained on a large dataset but continues to modify its algorithm through machine learning, using external patient data it collects during its operation. One day, a critically ill patient is deprioritised and receives delayed treatment, resulting in serious harm. An investigation reveals that the AI system autonomously adjusted its decision-making logic based on a distorted data pattern – perhaps reflecting demographic bias or an anomaly in recently acquired hospital data. No (human) person directly implemented the faulty logic: the developer did not foresee this outcome, the hospital followed standard protocols, and the source of the skewed data cannot be precisely identified. In such a case, it remains uncertain whether the damage was actually caused by the hospital’s lack of oversight, the developer’s failure to impose stricter safeguards or the presence of flawed input data. The mechanism of proportional liability included in art 6.22, first paragraph, CC could help to overcome this hurdle. When it is uncertain whether the fault of the person being addressed is a necessary condition for the damage – because the damage could also have occurred if this person had acted lawfully instead of committing a fault – the injured party is entitled to partial compensation, in proportion to the likelihood that the fault (eg the hospital’s failure to monitor AI outputs or the developer’s insufficient testing) caused the damage. For the application of this proportional liability, the degree of certainty need not reach a specific minimum threshold, but the likelihood must not be merely hypothetical or based solely on assumptions.[151] This mechanism thus helps overcome the evidentiary challenges in complex AI-related harm cases. The solution is not actually new in Belgian law: the same result could previously have been achieved through the application of the doctrine of ‘loss of a chance’.[152]
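In arithmetical terms, the mechanism of art 6.22 CC awards the victim a fraction of the damage equal to the estimated likelihood that the fault was a necessary condition for it. The following sketch uses invented figures; the function name and the probability estimate are hypothetical illustrations, not a codified method of computation.

```python
# Illustrative only: partial compensation under art 6.22, first paragraph, CC.
# The probability estimate is a matter of judicial appreciation; the figures
# below are invented for the sake of the example.

def partial_compensation(total_damage: float, p_fault_caused_damage: float) -> float:
    # Art 6.22 presupposes genuine causal uncertainty: no fixed minimum
    # threshold applies, but the likelihood may not be merely hypothetical
    # (near 0) or certain (1), in which case the ordinary all-or-nothing
    # compensation rules apply instead.
    if not 0.0 < p_fault_caused_damage < 1.0:
        raise ValueError("outside the scope of proportional liability")
    return total_damage * p_fault_caused_damage

# Hypothetical example: EUR 100,000 of damage and an estimated 40% likelihood
# that the hospital's lack of oversight caused it.
print(partial_compensation(100_000, 0.40))  # 40000.0
```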
As many parties are generally involved in the supply chain of AI systems (cf the ‘many hands’ problem), the allocation of liability between all of them may become an important issue as well: it may be unclear who or what exactly caused the damage. Such a situation is covered by art 6.23 CC. If multiple similar acts, for which different persons are liable, have exposed the injured party to the risk of the damage that eventually occurred, but it cannot be established which of these acts actually caused the damage, each of these persons is liable in proportion to the likelihood that the act for which they are responsible caused the damage. However, anyone who proves that the act for which they are responsible was not a cause of the damage is not liable. This provision thus applies to cases in which multiple persons or actors, through their individual actions, could have caused the damage, but it cannot be proven who exactly caused it. Consider a scenario in which a self-driving car is involved in an accident because its AI system fails to correctly identify a pedestrian (leaving aside potential insurance coverage). The development and functioning of that AI system may involve multiple actors across the supply chain: the developer of the core algorithm, the supplier of the sensors (eg LIDAR or cameras), the manufacturer of the vehicle, the company responsible for training the AI on image data, and the end-user or owner of the vehicle, who may have failed to install a crucial software update. Following the accident, it is clear that the damage was caused by a malfunction in the AI system, but it cannot be established with certainty which component or party was the actual cause of the error. The error may have resulted from a flaw in the algorithm, inaccurate sensor input, biased or incomplete training data, or a failure to update the software. In such a case, art 6.23 CC provides a useful solution: each party can be held liable in proportion to the likelihood that their act caused the damage. Only those who can prove that their conduct did not contribute to the occurrence of the harm will be exempt from liability.
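A comparable sketch can illustrate the allocation logic of art 6.23 CC for the self-driving car scenario. The probabilities are invented, and the renormalisation of the remaining likelihoods once an actor proves exoneration is one plausible reading of the provision, not settled law.

```python
# Illustrative only: proportional allocation under art 6.23 CC among actors
# whose similar acts exposed the victim to the risk of the damage. The figures
# and the renormalisation step are assumptions made for the sake of the example.

def allocate_art_6_23(total_damage: float,
                      likelihoods: dict[str, float],
                      exonerated: set[str]) -> dict[str, float]:
    # Whoever proves their act was not a cause drops out of the allocation.
    remaining = {k: v for k, v in likelihoods.items() if k not in exonerated}
    scale = sum(remaining.values())
    # Each remaining actor bears a share proportional to the likelihood that
    # the act for which they are responsible caused the damage.
    return {k: round(total_damage * v / scale, 2) for k, v in remaining.items()}

likelihoods = {
    "algorithm developer": 0.40,
    "sensor supplier": 0.30,
    "training-data provider": 0.20,
    "vehicle owner (missed update)": 0.10,
}
# Suppose the sensor supplier proves its sensors functioned correctly:
print(allocate_art_6_23(500_000, likelihoods, exonerated={"sensor supplier"}))
# {'algorithm developer': 285714.29, 'training-data provider': 142857.14,
#  'vehicle owner (missed update)': 71428.57}
```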
IV Concluding remarks
This article discussed some provisions of the CC dealing with fault-based liability and examined their application in an AI context. The main research question was whether the new rules on fault-based liability remedy some of the old challenges and, as a consequence, may be ‘better’ equipped to deal with the reality of AI. Based on our research, we identified several ways in which Book 6 of the CC indeed represents a step forward in addressing damage caused by AI. However, we also identified several challenges that will need to be tackled in the coming years, and we have offered some points of attention to set the agenda.
Foremost, it is striking that the preparatory works of Book 6 refer to AI only once, namely when mentioning the revision of the PLD.[153] Of course, legislation should ideally be technology-neutral.[154] At the same time, more attention to the impact of AI would have been welcome in the preparatory works, considering its major disruptive effect, the fact that so much is already happening at the EU level on this topic (eg the AI Act, the revised PLD, etc), and especially the fact that the drafters of Book 6 themselves mentioned the complex problems that may result from increasing technological developments. Continuous scrutiny will also be needed for the implementation of the revised PLD into Belgian law and the remaining unclarities (eg the notion of ‘control’).
It also remains unclear what role the subjective element of a fault will ultimately play in the new Belgian liability law. Although it has formally been removed, some argue that it implicitly remains a constitutive element of fault; Kruithof, among others, makes this point. In any case, the – at least formal – removal of this subjective element appears to be a significant step forward, as previous research already highlighted the difficulties posed by this element when applying fault-based liability to AI systems or when implementing supranational liability regimes into Belgian law. That is why any potential supranational AI liability initiative should consider, or at least clarify, the subjective element of a fault, in particular for instances of force majeure. Article 6.7 CC provides for an exclusion of liability in cases of force majeure. To invoke force majeure, it must have been impossible for a person to comply with a rule of conduct; it is no longer required that the event be both unforeseeable and unavoidable. This is relevant for cases involving AI-related damage. Considering the opacity of AI systems – and hence the potential unforeseeability of the damage – defendants may be more successful when invoking this defence. At the same time, our analysis also showed that uncertainty remains concerning the reasonable foreseeability of the damage and the resulting potential liability. We therefore call for more clarity and research as to whether an unpredictable output of an AI system would amount to force majeure and, if so, under which circumstances.
In this article, we discussed, or at least referred to, several frameworks that are relevant to AI liability. Some of these are binding, such as the AI Act or the revised PLD. Many other frameworks are considered soft law and thus, in principle, not binding. Yet, our research calls for a more nuanced view when it comes to relying on soft law to assess fault-based liability. We illustrated this with several examples, such as standards, ethical frameworks and codes that become binding in a more indirect way when they are relied upon to assess a party’s potential negligence (eg standards of good craftsmanship and professional practices). More generally, the concept of care will remain important, not only to assess whether an actor in the AI supply/lifecycle chain acted negligently but also to assess whether certain provisions of the AI Act have been violated, as they may require an additional assessment. In this regard, we emphasised the importance of reconsidering the Principles of European Tort Law and of adapting them to a digital environment.
We also examined the concepts of damage and causation as included in the CC. Book 6 is currently unclear as to whether it conceptualises the infringement of a right as a fault or as damage. As AI systems may pose a threat to many fundamental rights, legal certainty requires a clear understanding of how a right infringement by an AI system should be classified. This determines whether a court must still ascertain that the victim suffered damage as a result of the infringement, or whether it must instead establish a fault from which the infringement itself, qualified as damage, resulted. While the starting point for establishing causation is still the conditio sine qua non test and the equivalence of conditions, the CC contains some exceptions to this rule that may be relevant in an AI context, notably where the damage is so remote that it would be manifestly unreasonable to attribute it to the person being held liable. This exception addresses the risk that individuals may be held liable in seemingly unjust ways. At the same time, several unclarities remain that require additional clarification, such as the notion of manifest unreasonableness, as well as when damage is considered too remote in an AI context. From a more conceptual point of view, determining a lawful alternative pursuant to the théorie de l’alternative légitime may create some hurdles in an AI context, considering AI’s autonomous capabilities. It is unfortunate that the legislator noted in the explanatory memorandum that the théorie de l’alternative légitime remains the guiding principle for establishing causality, but did not clarify in the law itself how this theory should be applied, for instance in the context of self-learning systems. Even with adjustments to the theory of causality, it can remain difficult for victims in an AI context to identify the precise source of damage. Articles 6.22 and 6.23 CC provide some remedies, for instance when there is uncertainty regarding the causal character of the fault or about the identity of the liable party. This is certainly relevant in an AI context.
The main conclusion of this article is thus rather a mixed one. Whereas the new provisions of Book 6 contain some elements that will prove useful for AI (eg the removal of the subjective element of fault, an overview of criteria to assess potential negligence and some adjustments with regard to causality), many unclarities on certain notions remain. Our article highlighted some of the elements that (Belgian) scholars, courts and policymakers need to consider with regard to fault-based liability and AI. By doing so, it puts forward a research agenda and offers ways forward to ensure that the new provisions of the CC on fault-based liability will survive the challenges related to AI.[155]