
A Comparative Study of Automated Quantification in Digital Insurance

  • Marta Infantino

    Marta Infantino (PhD, Palermo University; LL.M. New York University) is Associate Professor of Comparative Law at the University of Trieste. She is Associate Member of the International Academy of Comparative Law and has held visiting professorships in prominent universities in Canada, Colombia, France and Germany. Her research themes include comparative tort law, comparative contract law, digital vulnerability and social quantification, particularly through indicators. Full cv at https://www.units.it/data/curricula/12009.pdf.

Published/Copyright: March 29, 2024

Abstract

Insurance companies have always been at the forefront of developments in the processing of large volumes of data. This paper investigates in a comparative perspective the implications of the increasing reliance by insurers on automated quantification, examining developments of insurance law and technology in continental Europe, the common law (particularly the United States), and mainland China. The paper sheds light on the challenges brought by automated quantification in digital insurance, reviews the regulatory options that may address such challenges and inquires into the regulatory approaches pursued in different regions of the world. The comparative analysis of the strategies pursued will show that, when thinking about regulatory options for digital insurance, it is important to keep in mind that the shift to automated quantification, although global, raises different risks and opportunities depending on the contexts and the legal frameworks in which it takes place. The variance of contexts and legal frameworks explains why the impact of automated quantification in insurance is for the time being strong in the common law world, present but less intrusive in China, and proceeding at an even slower pace in continental Europe.

1 Introduction

Ever since the birth of modern insurance, insurers have been developing complex methodologies for assessing risks and injuries, and for modelling the world into numbers. Not by chance have insurance companies, in the last two centuries, been at the forefront of developments in the processing of large volumes of data and in the mathematical calculation of probability (Bouk 2015; Clark et al. 2010; Daston 1998, 15–33, 162–181; Porter 1995, 89–113). Now, with the slow but steady digitalization of insurance markets, and with the new scenarios opened up by the spread of connected devices and Artificial Intelligence (AI), many claim that a new era for insurance is being brought to life – an era in which the ability of insurance companies to track, count and govern human behaviour is enormously augmented.

These claims resonate most strongly in common law jurisdictions. One can for instance read that, in Australia, supermarket chains have customer loyalty programs that collect information not only on spending habits, but also on members’ health and fitness through smartphones and smartwatches, and that are “associated with a major private health-insurance company that offers benefits to insured clients who regularly upload health and fitness data onto their platform” (Lupton 2016, 123). In England, it is noted that “[a]lgorithms are now available to insurers to identify which prospective insureds would be a good bet and which should be avoided” (McGurk 2018, 27). Still in England, another commentator observes that “[t]he real-time data obtained from individuals with high health-related risks (not induced by their own lifestyle choices) would mean that they will face high and potentially unaffordable premiums which would no doubt limit their access to basic medical service provision, leading to a further deterioration of their condition” (Soyer 2022, 186). In the United States, a best-selling author wrote more than five years ago that “already insurers are using data to divide us into smaller tribes, to offer us different products and services at varying prices” (O’Neil 2016, 164), and predicted that “[a]s insurance companies learn more about us, they’ll be able to pinpoint those who appear to be the riskiest customers and then either drive their rates to the stratosphere or, where legal, deny them coverage” (O’Neil 2016, 171). As another authoritative scholar commented, “[a]s certainty replaces uncertainty, premiums that once reflected the necessary unknowns of everyday life can now rise and fall from millisecond to millisecond […] Rates based on actual behavior are a big advantage in being able to price appropriately. This kind of certainty means that insurance contracts designed to mitigate risk now give way to machine processes that respond ‘almost immediately’ to nuanced infractions of prescribed behavioral parameters and thus substantially decrease risk or eliminate it entirely” (Zuboff 2019, 213). A number of successful real-world experiments with 24 × 7 behavioural insurance confirm this trend. Suffice it to think of the commercial success of the US-based company Lemonade Inc., which provides constantly connected insurees with personalised property and casualty insurance (McFall, Meyers, and Van Hoyweghen 2020, 3–4; Talesh and Cunningham 2021, 978–980), and of the even more astonishing exploits of Vitality, a branch of the South African financial services group Discovery Limited, which nowadays sells its self-tracking products for health and life insurance in the US, the UK, Australia, and some Asian countries (Jeanningros and McFall 2020, 6–12).

Against this background, the aim of this paper is twofold. On the one hand, I would like to shed light on the challenges brought by automated quantification in insurance, as well as on the possible legal strategies that may be deployed to address these challenges. On the other hand, I will try to demonstrate that both the impact of automated quantification in insurance and the need for its legal regulation are unevenly distributed across legal traditions; the paper will in particular take into consideration continental Europe, the common law (especially the US), and mainland China. To this purpose, I will rely on insurance studies as much as on the literature on social quantification and on comparative law. Such a methodological background explains why, rather than looking at current trends through the magnifying glass of ‘digitization’, ‘digitalization’, and ‘algorithmification’, I would rather employ the related, yet broader lens of ‘automated quantification’. Related, because digitization, digitalization and algorithmification all imply the conversion of atoms into bits and of qualitative information into quantitative information. Broader, because the process of transforming qualities into quantities – the art of counting – is a technique as old as human civilizations, whose development and progressive automation have followed (and arguably contributed to) the transition from ancient to contemporary societies. Accordingly, research on the effects of quantification on societies predates digital studies, and offers enduring insights on current trends and paradigm changes. The domain of insurance proves to be an optimal field for testing such a methodology, inasmuch as insurance is all about numbers, is present everywhere, and is everywhere going increasingly digital.

The paper will start precisely with a reminder of the variability of the relationship between numerification, insurance, and the law. As Section 2 will show, since the birth of insurance, there have always been limits on what insurers could count and how, and these limits have always been determined not only by technological infrastructures, but also by changing perceptions of what was considered from time to time to be legitimate. Keeping these caveats in mind, Section 3 will present some of the features that make social quantification a powerful governance tool, while Section 4 will delve into the major implications of the rise of automated quantification in contemporary insurance practice. This will lead us to explore, in Section 5, the regulatory options that are in principle available for controlling the side effects of reliance on automated quantification by insurance market actors. Section 6 will then argue that, when thinking about helpful regulatory options, it is important not to forget that automated quantification in digital insurance raises different risks and opportunities depending on the contexts and legal frameworks in which the shift to automated quantification takes place. The variance of contexts and legal frameworks helps explain why the impact of automated quantification in insurance is currently strong in the common law world, less intrusive in China, and proceeding at an even slower pace in continental Europe. On the basis of the above findings, Section 7 will offer some conclusions about the usefulness of ‘automated quantification’ as a lens through which to view the impact of digital technologies on contemporary society, and about the need to enrich current debates on digitalization with a heightened attention to the contexts and frameworks in which technological shifts occur.

2 Numbers, Insurance, and the Law: A Variable Relationship

As anticipated, some brief observations about the relationship between quantification, insurance, and the law are in order to set the basis for the following discussion.

One may be tempted to think that the relation between numbers and insurance in the legal perspective is a stable and objective one, for insurance companies base their daily work on the findings and insights of actuarial science. Nothing, however, could be farther from the truth.

Even non-historians would remember that, in continental Europe, the evolution from proto-forms of insurance to modern insurance contracts in the Middle Ages was for a long time slowed down by the strong suspicion that insurance contracts implied a transgression of the Church-driven prohibition of usury (van Niekerk 1998, 5–6). Until the mid-nineteenth century, life insurance in particular remained condemned throughout the European continent as an incitement to fraud and murder, and as an impious conflation of the sacred sphere of human life with profane operations of the marketplace (on early developments in continental Europe, Clark 1999, 8–10, 13–32; on the illegality of life insurance in France until the nineteenth century, Thiveaud 1989). Yet, these doubts never prevented European slave owners from protecting their investment in valuable property by insuring slaves for transport (Berry 2017, 114–119; Clark 2010, 52–74; McFall and Moor 2018, 199–202; Savitt 1977). As clearly stated by the renowned French jurist Pothier in the ‘Treatise on Insurance Law’, written in the second half of the eighteenth century, slaves were at that time considered “des choses qui sont dans le commerce, et qui sont susceptibles d’estimation” [“things that can be sold and that can be estimated”] (Pothier 1810, 35–36). Seen from a historical perspective, insurance thus clearly becomes a social construction, a policy-imbued tool, bound up with worldviews and subservient to the beliefs, hopes and fears of the societies using it (Baker 2001; Bussani and Infantino 2015, 102–103; Ewald 2019).

The cultural dependency of insurance is evident in contemporary times too. Even today, the areas and the extent to which quantification of human-related features is legitimate vary between times and places. What can be legally quantified in a given time and place may turn out to be outlawed at a different time in a different place. These changes occur exactly because social quantification is imbued with political and cultural values that cannot be reduced to a measurement or to a scientific formula. Consider, for instance, the following examples concerning the possible reliance by insurance companies on ethnic, genetic and gender-related data.

The use of data about people’s ethnicity in insurance is uneven. Reliance on such data is poorly documented in China and quite fragmented in Europe, where some countries collect statistics on ethnicity, while others do not (Merry 2016, 14; Tin 2014). By contrast, in the US ethnicity-based data have long been used by insurance companies to segment their internal market (Bouk 2015, 31–54, 183–208). Despite the ban on race-based insurance policies by the 1964 Civil Rights Act, life insurance in the US is still a two-tiered system disadvantaging non-White people, since the lower socioeconomic status and lower income associated with non-White subscribers all too often imply higher mortality rates and lower indemnity amounts (Wiggins 2020). Although statistical data apparently justify differentiated treatment, this practice is now increasingly perceived as violating the equal protection clause of the US Constitution (Chamallas and Wriggins 2010, 155–182).

The interplay between values and insurance companies’ postures towards social quantification is also made clear by the use of genetic and gender data. Genetic information may be very useful in health and life insurance, helping insurers determine individual risk. Reliance on genetic data is therefore possible in a number of countries, including China, Australia and India (Joly et al. 2020). However, the fear of genetic discrimination has led many other countries to ban or restrict the use of genetic data by insurance companies: compare, for instance, the French loi Kouchner of 2002 (Code de la santé publique, article L1141-1, as amended; see also Béguinot 2014), the US Genetic Information Nondiscrimination Act (GINA) of 2008, and the German Gendiagnostikgesetz of 2009 (Gendiagnostikgesetz § 18; see also Armbrüster and Obal 2014).

As to gender, actuarial science suggests it is perfectly reasonable for insurance companies in motor insurance to take into account the connections between gender and traffic accident rates. Sex is indeed one of the many variables that insurance companies working in motor insurance regularly take into account when proposing insurance products. Yet, in the European Union, the practice has been outlawed since 2011, when the Court of Justice of the European Union, in its decision Association Belge des Consommateurs Test-Achats and Others (C-236/09, ECLI:EU:C:2011:100), held that proposing different insurance premiums for women and men amounts to prohibited sex discrimination.

Many other examples could be added. But the above ones suffice to shed light on the cultural dependency of the relationship between quantification, insurance, and the law. By determining what counts and how much, insurance works as a policy tool that contributes to shaping, and is at the same time shaped by, the cultural context in which it is used. It is now time to examine in more detail the implications of social quantification, before looking at the manner in which these implications change when insurance gets automated.

3 The Power of Social Quantification

The reasons why social quantification matters (and matters even more when it is automated) are many. This section will outline some of the main features that make the techniques for quantifying the social world a powerful governance tool, while the next section will investigate what happens to these very same features when quantification by insurance companies gets automated.

Perhaps the most outstanding characteristic of social quantification, especially vis-à-vis scientific measures, is its reactive and reflexive quality. When one measures an object, the measurement changes the object only to a minimal, and perfectly determinable, extent. By contrast, measuring society changes it in complex and unpredictable ways. This effect of social measurements is undisputed in a number of disciplines – from sociology to psychology to economics – and goes by different names. It is sometimes called the ‘Hawthorne effect’, a term coined in the 1950s by the sociologist Henry A. Landsberger, who, re-examining experiments conducted decades earlier at a factory called the ‘Hawthorne Works’, realised that the productivity of the workers increased whenever they knew they were being monitored (Landsberger 1958). Other times it is called ‘Campbell’s law’, from the name of Donald T. Campbell, a social psychologist who in 1976 wrote that “[t]he more any quantitative social indicator (or even some qualitative indicator) is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor” (Campbell 1976, 49). Still other times social reactivity to measurements is described as ‘Goodhart’s law’: Charles Goodhart was an economist who in 1981 noted that “[a]ny observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes” (Goodhart 1981, 116). Whatever the label, all these social scientists have observed that, whenever a social measurement is regularly reiterated, people tend to adapt their behaviour to (what they perceive are) the expectations of those who measure them. Repeated social measurements identify quantifiable targets and monitor progress; by doing so, they easily re-orient the agendas, priorities and lines of action of those who are measured, and stimulate changes in their behaviour. This mode of intervention also often ends up validating specific visions about the world, about what is important and what is not: only what is counted matters, while everything that is not counted, simply, does not count. Equally well known is that social quantification spurs rank-seeking behaviour and gaming strategies; once measurements are in place, it is easy for people to understand and take advantage of the possibility of cheating the system (Bussani, Cassese, Infantino 2023a, 324–327).

Another distinctive feature of social measures is that they have the tendency to entrench the past and project it into the future, giving rise to ‘self-fulfilling prophecies’ by which reactions to a social measure confirm the expectations or predictions of the measure itself, thus increasing ex post its original validity (Espeland and Sauder 2007). Moreover, being expressed in quantities, social numbers can easily travel far away from the place in which they are produced, and end up being used in distant contexts and by people who have little to no knowledge of their original meaning. The problem is that, the greater the distance between the place of production and the place of use, and the further the expertise of the final users of social numbers from the field in which numbers were originally collected, the more likely it is that numbers are misinterpreted (Merry 2016, 27–35). A very common form of misinterpretation occurs when social quantification is used by experts who are not familiar with the context from which data originate to examine correlations and to infer causal patterns; this way of working is well known to lead to frequent violations of the old adage ‘correlation is not causation’ (Merry 2016, 183–184; McGrogan 2016, 627, 632–633; Matthews 2000). But this is not all. The faith in numbers as carriers of objective truths also implies that, after a social measurement is put in place and is somehow successful, it becomes very hard to dismantle it and to challenge its results. The irrefutability of quantitative findings is another well-known effect of social commensuration: the simplicity and apparent scientific-ness of numbers make quantitative statements much harder to contest than qualitative judgments. This is also because contesting quantitative statements requires access to information – such as the variables and parameters used for the measurements, the data relied on, the methodology for treating and aggregating such data – that is often very complex and completely undisclosed (Borges Fortes 2023; Broome, Homolar, and Kranke 2018; Espeland and Sauder 2007, 16–22; Jerven 2013; Merry 2016, 20).

To be clear, all the above features are also strengths. The performativity, the ability to silently nudge people, the apparent scientific-ness, and the irrefutability of social quantification make it a powerful ‘technology of distance’: a useful tool to manage communities of strangers in which other techniques for guiding and controlling behaviour (such as those based on intimate knowledge and personal trust) are better replaced by objective and standardised methods of social governance (apart from the masterful studies of Desrosières 2000; Porter 1995, see Broome and Quirk 2015; Couldry and Mejias 2019, 122–151; Rodríguez de las Heras Ballell 2023).

4 Automating Quantification in Digital Insurance

What happens when quantification practices get automated? As said above, in Section 1, insurance provides an optimal field for investigating how such a question should be answered. In the insurance sector, the enhanced availability of granular information and the growing sophistication in collecting and processing big data, also through AI, are fostering the increasing automation and mass customisation of insurance products and services (IAIS 2020, 4). Consumer insurance, in particular, provides us with plenty of examples and observations that are relevant for our inquiry.

As noted in the previous section, exercises in social quantification usually trigger human reactivity. In insurance, many have noted that the ability of insurers to monitor people 24 × 7 gives rise not only to intrusive forms of techno-surveillance, but also to the deployment of hyper-nudging techniques, i.e., the development of subtle systems of incentives that gently guide people toward making decisions that they would have not made otherwise (Hildebrandt 2015, 2018; Ulbricht and Yeung 2022; Yeung 2016; Yeung and Lodge 2019). An illustration may help. Let us imagine that a health insurance company wants to treat customers differently on the basis of their actual amount of exercise and their food consumption habits, as tracked by connected devices and wearables. Once the insurees realise the reward mechanism applied by the insurer, they will likely choose to engage in the behaviour that is being rewarded, for instance exercising more often and eating healthier, using connected sports equipment rather than conventional equipment, and modifying their food habits in light of the benefits associated with their tracked choices. All this may sound good, but it is not necessarily so. The choice of pursuing the algorithmic reward is always made at the cost of other choices, some of which may be equally legitimate and good for wellbeing. Exercising is good for health, but so is reading, relaxing, and caring for others. Eating healthy food is good for health, but so is occasionally fasting and socialising over dinner. Especially when the pursuit of the reward becomes disconnected from its ideal purpose of creating incentives for a healthier lifestyle, unintended effects may occur that may deter efficient behaviour. Exercising is good, but exercising too much or without preparation increases the risk of physical injury. Eating healthy food is good, but strictly sticking to a healthy diet is no guarantee of physical and mental well-being. Further, as is typical of social quantification, unintended effects may include cheating. Once customers realise that their insurers keep track of information such as their heart rate and food intake, they may find creative ways to manipulate the system, for instance by putting their Fitbit on their dog’s collar or by paying cash when purchasing unhealthy food products. Of course, algorithms may continuously evolve to detect and reduce manipulations, but no technological solution can prevent people from adapting and reacting to technology itself (Latzer 2022; Morozov 2013).
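To make the mechanism concrete, the following is a minimal sketch, in Python, of how a behaviour-based reward of the kind just described could be computed from tracked data. It is purely illustrative: the field names, thresholds and weights are invented for this example and do not reflect any actual insurer’s scoring model; and it is precisely inputs of this kind that the gaming strategies mentioned above would target.

from dataclasses import dataclass

@dataclass
class WeeklyTrackedData:
    active_minutes: int        # minutes of exercise logged by a wearable (hypothetical field)
    healthy_meal_share: float  # share of tracked purchases flagged as 'healthy', between 0 and 1

def discounted_premium(data: WeeklyTrackedData, base_premium: float) -> float:
    """Return the weekly premium after applying a (hypothetical) behaviour-based discount."""
    score = 0.0
    if data.active_minutes >= 150:     # illustrative activity threshold
        score += 0.5
    score += 0.5 * data.healthy_meal_share
    discount_rate = 0.10 * score       # at most 10 % off in this toy model
    return base_premium * (1 - discount_rate)

# Example: a customer logging 200 active minutes and 80 % 'healthy' purchases
print(discounted_premium(WeeklyTrackedData(200, 0.8), base_premium=25.0))  # prints approximately 22.75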

The case of fitness apps additionally shows how social measurements can create self-fulfilling prophecies. In a metric society, people committed to self-surveillance use fitness tracking software to generate data about their body and health that insurance companies then collect and use to generate metrics. Who are the people most likely to use these apps? Arguably, young and adult people with average or good health and with enough resources and time to buy smart devices and to care about their physical well-being. Such apps are on average not used by older, unhealthy and poorer people. In a world in which insurance companies have access to the repositories of health data generated by these apps, people who exercise regularly are likely to set the standard for good health, leaving those who do not exercise to be charged higher prices and slowly pushed to the margins of the insurance market (Mau 2019; McFall and Moor 2018, 198, 206; Neff and Nafus 2016, 146; Sax 2021). More generally, it has been noted by many that, in a constantly connected world, insurance companies will likely require customers to provide them with full access to their devices and digital selves, and only rich people will be able to afford the luxury of a non-omniscient insurance (Cevolini and Esposito 2020, 5; O’Neil 2016, 5). With time, these fully informed insurance companies will reasonably offer insurance only to the customers presenting less risk, and refuse to insure or offer astronomical premiums to marginalised and riskier groups. This may replicate and reinforce the divides currently existing in society, leading to the discrimination and financial exclusion of those in greater need. It may well end insurance as we know it, since the growing capacity of insurance companies to drill down to the level of individual behaviour undermines the traditional work of classifying aggregated risks and of practicing risk pooling upon which traditional insurance is based (Cevolini and Esposito 2020; EIOPA 2021, 11–13; IAIS 2020, 11–12, 17, 19–20, 23, 27; Mau 2019, 69–74, 151–153; Prainsack and Van Hoyweghen 2020, 130–131).

Even if automated quantification does not end insurance as we know it, it will certainly change the way insurance works. Insurance companies have historically relied on human-made categorisations built on tested causal links between standardised rating factors and the probability of people suffering certain losses. By contrast, big data analytics (BDA) and AI-driven techniques suggest machine-driven, stereotypical correlations between factors and accident proneness that may be all but proven (in general, Burk 2021, 1165; with specific regard to insurance, EIOPA 2019, 2, 6, 34; McGurk 2018, 54–55; McFall and Moor 2018, 201–206; Prince and Schwarcz 2020, 1316). When the use of facial analysis in life insurance suggests that a given hairstyle is correlated with a longer and healthier life, can the variable ‘hairstyle’ be considered in determining the price of insurance (Prince and Schwarcz 2020, 1316)? When big data analytics in car insurance shows that owners of orange cars are less accident-prone than owners of cars of a different colour, can ‘colour’ enter the criteria for insurability, even if there is no proven causality between car colour and accidents (Cevolini and Esposito 2020, 7–8; EIOPA 2021, 7, 34–35)? At the opposite end of the spectrum, emerging AI-driven systems are increasingly able to spot connections with factors that are associated with prohibited grounds of discrimination. The ability of AI to discriminate ‘by proxy’ implies the automated discovery of predictive characteristics that apparently have no relationship with a protected category (such as race and gender), and yet are correlated to it (Drechsler and Benito Sánchez 2018, 3, 6–7, 11, 12–14; Marelli, Lievevrouw, Van Hoyweghen 2020, 455–456; McGurk 2018, 54–55; Prince and Schwarcz 2020, 1275–1276; Soyer 2022, 178–180). To illustrate, let us take the European rule that prohibits the use of gender as a variable for determining the price of behaviour-based motor insurance policies (above, Section 2). Even if the ‘gender’ component is eliminated from the data collected by car insurance companies and the data processing algorithms are mandated not to take into account gender-related results, insurance algorithms may still spot and use gender as a relevant variable, detecting it from neutral information within the dataset – such as geo-locational data or driving habits – and identifying proxies that, although not framed in terms of gender, actually stand for the prohibited variable (EIOPA 2021, 28–29; McFall and Moor 2018, 207–208; Verbelen, Antonio, Claeskens 2018, 1295).
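The logic of proxy discrimination can be shown with a toy simulation. The sketch below (in Python) uses entirely invented, synthetic data – the variable names and coefficients are assumptions made for illustration, not a description of any real insurer’s model: even though the protected attribute is never given to the pricing model, pricing on a correlated ‘neutral’ variable reproduces the prohibited differentiation.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic data: 'gender' is the protected attribute; 'night_driving' is an apparently
# neutral telematics variable that happens to correlate with gender; 'claims' is the
# simulated claim frequency. All relationships are invented for the illustration.
gender = rng.integers(0, 2, n)
night_driving = 0.2 + 0.3 * gender + rng.normal(0, 0.1, n)
claims = 0.05 + 0.10 * gender + rng.normal(0, 0.02, n)

# Price on the proxy alone, without ever seeing gender (ordinary least squares).
X = np.column_stack([np.ones(n), night_driving])
beta, *_ = np.linalg.lstsq(X, claims, rcond=None)
predicted_premium = X @ beta

# The resulting premiums still differ systematically between the two groups.
print("mean premium, group 0:", round(float(predicted_premium[gender == 0].mean()), 4))
print("mean premium, group 1:", round(float(predicted_premium[gender == 1].mean()), 4))
print("correlation premium/gender:", round(float(np.corrcoef(predicted_premium, gender)[0, 1]), 3))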

As mentioned in Section 3, questioning this way of working tends to be particularly hard. In contemporary digital economies, people often lack the time, the resources, and the willingness to lodge complaints. Automated quantification adds further layers of difficulty to the ordinary hurdles faced by consumers and data subjects in the enforcement of their rights. For people to react to wrongs, it is fundamental that they be able to perceive that they have suffered an injury. Yet, in a world in which connected users are constantly interacting with their own personal digital screens and autonomous chatbots, they have little chance to compare each other’s experiences and realise that something went wrong with their own (Marelli, Lievevrouw, and Van Hoyweghen 2020, 458; see also Spencer 2020, 998; Willis 2020, 153). “That’s the thing about being targeted by an algorithm: you get a sense of a pattern in the digital noise, an electronic eye turned toward you, but you can’t put your finger on exactly what’s amiss” (Eubanks 2018, 5).

Let us take the example of automated discrimination by a pricing algorithm in an insurance contract. To win a discrimination claim, digital users would have to gather the evidence necessary to demonstrate that they have been victims of differentiated treatment vis-à-vis other categories for illegitimate reasons. To do that, they would need to obtain explanations about the algorithmic process they have been subject to, or to reverse engineer the automated final decision; they would also need to collect evidence about the treatment and decisions concerning other categories of people. But, even before that, digital users would need to suspect that something was wrong with what was being offered to them. What characterises automated discrimination, making it more subtle, intangible and difficult to detect than traditional forms of discrimination, is that it relies on BDA-based ‘collectives’ (e.g., the group of consumers using a certain version of a given browser or pausing frequently when typing on their devices) that people do not associate with, and do not even perceive as attributable to them. The abstract nature of digital data, the endless possibilities of combination of different data types, and the opacity of the categories used for customisation make it nearly impossible for digital users to recognise themselves as part of a group that is being discriminated against relative to other groups (Marelli, Lievevrouw, and Van Hoyweghen 2020, 458; McFall and Moor 2018, 208; Prainsack and Van Hoyweghen 2020, 141–142). As many have noted, the trend towards automated quantification in insurance provides many benefits to consumers but also leaves them in conditions of automated hardship (Cappiello 2020, 9; Lynskey, Micklitz, Rott 2021, 94; Południak-Gierz and Tereszkiewicz 2023).

5 What Has Law Got to Do with It?

As the illustrations in Section 2 showed, the law has say and sway over what insurance companies can quantify, how, and for what purposes. What is then the posture of the law vis-à-vis the increasing reliance on automated quantification in the insurance sector? This section argues that there are several ways of intervening in the matter (for an overview, see Borges Fortes, Baquero, Restrepo Amariles 2023; Infantino and Bussani 2023). The jurisdictions under review have adopted all of them, often in combination with one another, although clearly each region has a distinctive pattern in its regulatory imprint. A legal system may, for instance, outright ban certain forms of social quantification. Or social quantification may be allowed provided that some ex-ante requirements are met or that a few rights and remedies are granted to interested parties. Or a legal system may refrain from intervening in business practices, implicitly delegating the task to other, softer forms of (self-) regulation. In what follows, we will see some illustrations of the postures just mentioned, and will investigate how the different regions herein surveyed – continental Europe, the United States, and China – approach each of them.

At one extreme, it is certainly possible to ban some forms of social quantification. Section 2 for instance mentioned that the use of gender in insurance pricing is forbidden in the EU, while the US and many European countries prohibit the use of genetic information for insurance purposes. Another, more general example arises from the EU General Data Protection Regulation (GDPR) of 2016, under which personal data cannot be processed by insurance companies without a legal basis, such as the data subject’s consent (article 6 GDPR). The same rule is provided in China by the EU-influenced Personal Information Protection Law (PIPL) of 2021 and the Civil Code of 2021 (article 13 PIPL and article 1035 Civil Code). No similar law exists in the US. Differently from other common law jurisdictions that have enacted privacy regulations (such as the Australian Privacy Act 1988 and the Indian Digital Personal Data Protection Act 2023), there is no general regulation of consumer privacy in the US at the federal level. Some US states, such as California, have adopted legislation that recalls the GDPR: see the 2018 California Consumer Privacy Act (CCPA), as amended by the 2020 California Privacy Rights Act and the 2023 California Delete Act. Yet, even under the CCPA, there is no ban on unauthorised data processing, inasmuch as consumers’ consent is not required for data collection and sale. Bans are not foreign to some US state laws, though. Again, in California, personalisation of prices on the basis of people’s willingness to pay is excluded in property and casualty insurance by a notice issued by the state Insurance Commissioner (State of California, Department of Insurance 2015). In life insurance, the New York Department of Financial Services in 2019 prohibited the use of rating factors based on statistical correlations with no demonstrable causal link between the classification and increased mortality, when the rating guideline has a disparate impact on protected classes (New York Department of Financial Services 2019). However, bans are problematic from many points of view. They imply a restriction of insurance companies’ freedom to do business, as governmental power limits the range of choices available to them. Whenever bans are not uniform across all territories, they oblige companies to abide by different rules depending on the place where business is conducted (or to comply everywhere with the strictest rule so as to avoid market fragmentation, which is exactly what happened when the EU adopted the GDPR) (Bradford 2020). Moreover, ensuring respect for bans is very costly inasmuch as it requires the establishment of an authority entrusted with investigatory functions and endowed with enough resources to fulfil its mission.

Rather than bans, regulators may prefer to set up a normative framework requiring companies to abide by some ex-ante obligations, which are supposed to guarantee that everything is properly done. This is for instance the main approach underlying the GDPR. The GDPR requires that data controllers and processors, including insurance companies, put in place a number of measures and control mechanisms – such as the establishment of adequate safeguards, the appointment of a data protection officer, the drafting of a data protection impact assessment – that are thought to be conducive to a stronger protection of privacy (articles 25–35 GDPR). A similar approach is adopted in China by the Chinese PIPL and the Civil Code (articles 51–59 PIPL and article 1035 Civil Code). Some ex-ante duties are imposed on businesses also by the CCPA (section 1798.100 CCPA on the duty to implement reasonable security procedures). The same idea – it should be noted – now underlies the soon-to-be-approved EU Artificial Intelligence (AI) Act, the final text of which is currently being negotiated. Besides banning a very limited range of AI uses under article 5 and letting the great majority of AI uses continue under the gentle push of soft law (article 69(1) AI Act), the AI Act proposal for the most part deals with the so-called high-risk uses of AI, as defined under articles 6 and 7 and Annex III. For instance, “AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance” are presumed to be high-risk (AI Act, Annex III, no 5, lit d, last version of the AI Act proposal). Providers and/or users of high-risk AI are required to comply with a number of ex-ante obligations. They have to put in place risk management systems, to use relevant datasets, to draw up technical documentation, to provide for human oversight, and to undergo a conformity assessment (AI Act, articles 9–16), under the clear assumption that compliance with these norms ensures safety. As in the case of bans, however, the establishment of ex-ante obligations increases the cost of doing business (particularly where these obligations are not uniform across jurisdictions) and requires the presence of persons or authorities endowed with the power and the resources necessary to monitor compliance.

Another useful strategy to deal with automated quantification, which can be used alone or in combination with ex-ante obligations, is to impose ex-post obligations, i.e., to provide for a remedy when something goes wrong. Once again, good examples of this approach are offered by the European GDPR and the Chinese PIPL. Both acts provide data subjects with the right to access, rectify, erase, restrict the processing of, and retrieve their own data (articles 15–21 GDPR; articles 44–50 PIPL), with the right to object to fully automated decisions (article 22 GDPR; article 24 PIPL), and with the right to claim compensation in case of violation of these provisions (articles 81–82 GDPR; articles 68–70 PIPL). As far as the GDPR is concerned, however, it should be noted that several states have made use of the possibility, set out by article 22(2), lit (b) GDPR, of carving out exceptions to the data subjects’ right to object to automated decision-making for the benefit of insurance companies. For instance, § 37 of the German Bundesdatenschutzgesetz (BDSG) now provides that the rights mentioned by article 22 GDPR do not apply to automated decisions made in the context of insurance services whenever requests by clients are accepted or whenever the decision concerns the payment of medical expenses. Articles 41(1a) and 41(1b) of the 2019 Polish act implementing the GDPR empower insurance companies to use systems of automated decision-making in individual cases for assessing risks and to determine the amount of loss and compensation as well as other amounts payable to parties entitled under insurance contracts. In California, the CCPA provides consumers with the right to delete, correct and access their personal information, the right to know to whom the information is sold, and the right to opt out of sale or sharing of personal information (sections 1798.105, 1798.106, 1798.110, 1798.115, 1798.120 CCPA); the Act also foresees that, in case of unauthorised disclosure of consumers’ account data, companies may face statutory damages of between $100 and $750 per consumer per incident, or actual damages, whichever is greater (section 1798.150 CCPA). Under all three regulations, the power to react to violations is also entrusted to public authorities (cf article 83 GDPR; article 66 PIPL; section 1798.199.40 CCPA). This approach is more market-friendly than bans and ex-ante obligations inasmuch as it does not target all businesses but rather punishes only those who misbehave on a case-by-case basis. Yet, this approach has some drawbacks as well. For such a model to work, it is important that enforcement actions are pursued. Yet, as said above, in Section 4, people are often not in a position to fight for their rights, also because they may easily remain unaware of their infringement. Not by chance, most of the litigation so far promoted in Europe and in the US against the use of automated decision-making has been brought through class actions and by non-governmental organisations rather than by individuals (as to class actions in the US, cf TransUnion LLC v. Ramirez, 594 U.S., 141 S. Ct. 2190 (US 2021); K.W. v. Armstrong, 180 F. Supp. 3d 703 (D. Idaho 2016); as to actions brought by NGOs, cf, in the US, ACLU v. Clearview AI, Inc., 2020 CH 04353 (Cir. Ct. Cook Cty., Ill.), settlement agreement of May 9, 2022; Leaders of a Beautiful Struggle v. Baltimore Police Department (November 5, 2020, 2 F.4th 330 (4th Cir. 2021)); in Europe, Court of Justice of the European Union [CJEU], OQ v Land Hessen, C-634/21, 7 December 2023; Court of Justice of the European Union [CJEU], Meta v BVV, 28 April 2022, C-319/20, ECLI:EU:C:2022:322; Conseil d’État, 10ème chambre, 26 April 2022, n° 442364, ECLI:FR:CECHS:2022:442364.20220426; Juzgado Central de lo Contencioso Administrativo, número 8, 30 December 2021, n. 143, https://www.consejodetransparencia.es/dam/jcr:80688e50-c994-4850-8197-4f19dc46a6ad/R128_S143-2021_CIVIO.pdf (accessed January 18, 2024); Conseil d’État, 10ème-9ème chambres réunies, 4 November 2020, n° 432656, ECLI:FR:CECHR:2020:432656.20201104).

Still another option is to do nothing, letting industries and interested parties set non-binding principles and standards for the market. This is, for instance, the approach adopted by the US as far as AI governance in general is concerned. In the US, the Executive Order no. 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, adopted by the US President in October 2023, only directs federal bodies and agencies to help develop guidelines and best practices for the safe use of AI. In China too, general rules on AI are missing, but the Cyberspace Administration of China has been very active in enacting sector-specific rules that regulate, for instance, the management of deep synthesis data and technologies (Cyberspace Administration of China 2022) and generative artificial intelligence services (Cyberspace Administration of China 2023; on both these measures, see Franks, Lee, and Xu 2024). In the EU, the proposed AI Act will ban a few practices and impose some ex-ante obligations on providers/users of high-risk AI. Interestingly, the AI Act will establish some form of public supervision of the AI market by specialised agencies but will provide little or no private avenues for reacting to non-compliance (Ebers et al. 2021, 598–600). Moreover, even under the AI Act, providers/users of AI which is not high risk (which covers the vast majority of AI uses) will be subject to no obligation, since they are only invited to adhere to technical codes of conduct (article 69(1) AI Act). In recent years, many industry-wide and public interest organisations have adopted declarations of principles and technical standards applying to AI, some of which relate specifically to the insurance sector (cf, in general, IEEE 2018; ISO 2023; High-Level Expert Group on Artificial Intelligence 2019; OECD 2019; on insurance, see EIOPA 2021). Minor differences aside, these texts all rely on the same basic ideas: AI producers/users should ensure the robustness of the datasets used, guarantee the transparency of the decision-making process and the explanation of the results, avoid discrimination, and provide for some degree of human involvement or oversight. Reliance on soft law clearly has many benefits, inasmuch as it is flexible and sensitive to business needs. Yet, the main problems with this approach are the absence of prospects of enforceability (unless a court decides at some point to make soft law enforceable) and its lack of precision. Let us take, for instance, the ever-present requirement of transparency of AI decisions. Algorithmic transparency does not mean opening up the internal workings of algorithmic processing, because this would clash with the proprietary regimes of corporate secrecy often applicable to AI, and would not help people who are illiterate in computer science (Brkan and Bonnet 2020, 38–46; Infantino and Wang 2019, 318; Selbst and Barocas 2018, 1093–1094; with specific regard to insurance, IAIS 2020, 11; Marelli, Lievevrouw, and Van Hoyweghen 2020, 454). Transparency is rather understood as requiring an explanation of algorithmic decisions: people must receive clear and comprehensible information about the basic logic underlying the algorithmic processing, and the main reasons explaining the automated outcomes (in general, Brkan and Bonnet 2020, 33–38; High-Level Expert Group on Artificial Intelligence 2019, 18; with regard to insurance, Drechsler and Benito Sánchez 2018, 4–5; Marelli, Lievevrouw, and Van Hoyweghen 2020, 454).
The most common way suggested to give reasons for automated decisions is to provide addressees with counterfactual explanations, that is, with a few examples of adjacent but hypothetical datapoints that would have determined a different result. However, the ideal counterfactual does not exist: counterfactuals are always many. Disclosing all counterfactuals is of course not possible, since this would lead to information overload. Some counterfactuals have to be preferred over others. But who is going to select which counterfactuals should be disclosed to provide people with meaningful explanations? On the basis of which criteria should the selection be made? For the time being, there is no consensus on how to answer these questions (De Vries 2021; Selbst 2020; Wachter, Mittelstadt, and Russell 2018).
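A toy example may make the point more tangible. The sketch below (in Python) uses an invented linear scoring model – the weights, threshold and feature names are assumptions made purely for illustration: even in this simplest of settings, several hypothetical datapoints close to the applicant’s data would flip the decision, and someone must choose which of them to disclose as ‘the’ explanation.

import numpy as np

# Toy pricing rule: risk score = w . x + b; applicants with a score >= 0 pay a surcharge.
w = np.array([0.8, 0.5])   # hypothetical weights for (scaled annual mileage, past claims)
b = -1.0
x = np.array([1.0, 0.9])   # the applicant's features; score = 0.25, so a surcharge applies

def score(point):
    return float(w @ point + b)

# Two equally valid counterfactuals, each changing a single feature just enough
# to bring the score below the threshold (with a small margin).
cf_mileage = x.copy()
cf_mileage[0] -= (score(x) + 0.01) / w[0]   # drive a little less: actionable
cf_claims = x.copy()
cf_claims[1] -= (score(x) + 0.01) / w[1]    # have fewer past claims: not something one can change

for label, cf in [("reduce mileage", cf_mileage), ("reduce past claims", cf_claims)]:
    print(label, "->", np.round(cf, 3), "new score:", round(score(cf), 3))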

6 Contexts Matter

The above overview of possible regulatory postures shows that all the legal systems under review are now working to respond to the challenges arising out of the increased reliance on automated quantification, in general and with particular regard to the insurance sector. What should now be emphasised is that the struggle to face the challenges raised by contemporary technological shifts is universal, but the economic, legal and cultural contexts in which insurance and its regulations develop are different. Contrary to what is often taken for granted in technology-related scholarly discourses, i.e., that the impact of technology on society and on the law is largely the same everywhere, context matters. Insurance practices are shaped by the structure of the insurance market, the varieties of digital capitalism, and the thickness of regulation on insurance and technology implemented in each tradition. Moreover, insurance practices are also shaped by legal institutions that are not per se focused on insurance, and yet contribute to determining how insurance works.

We need to start with a reminder. In the last two centuries, insurance companies have developed solid methodologies for customer grouping and segmentation. Paradigms for social quantification that are based on widely shared and historically well-rooted methodologies are subject to continuous refinement, and cannot be revolutionised overnight (Barry and Charpentier 2020, 8–9; Cappiello 2020, 3; Cevolini and Esposito 2020, 5; Eling and Lehmann 2018, 370–371; Jeanningros and McFall 2020, 7–10; McFall, Meyers, and Van Hoyweghen 2020, 4; McFall and Moor 2018, 198). Socio-technical inertia explains why insurance is more resilient than other sectors (e.g., finance) to the new hype of Big Data analytics and AI. This, however, holds true especially for incumbent companies in long-established and heavily-regulated markets, such as the European one; it holds less true for newcomers, such as unconventional insurance providers, and for markets that are either relatively new (like the Chinese one) or based on soft regulations (like the US one) (Eling and Lehmann 2018, 370–371).

The areas under examination indeed embrace different approaches to regulation and supervision of insurance. In the US, the federal government has a modest footprint in insurance regulation and supervision, which is mostly left in the hands of US states (Boehning 2023; Liskow 2023, 9–25; Mulhern, Manske, Mancuso 2023). China’s insurance regulatory and supervisory system has developed rapidly in recent years, especially after the enactment of the Insurance Law of 2009, the adoption of many judicial interpretations by the Supreme People’s Court and the issuance of regulations and guidelines by the China Banking and Insurance Regulatory Commission. Yet, the overall framework remains quite uneven, with some areas receiving a lot of attention and others being left uncovered (Chen et al. 2013; Chen, Yan, and Liu 2023; Yang 2023). In Europe, by contrast, insurance is highly regulated and subject to strong supervision and control (Directive 2009/138/EC of the European Parliament and of the Council of 25 November 2009 on the taking-up and pursuit of the business of Insurance and Reinsurance (the so-called Solvency II Directive), and Directive (EU) 2016/97 of the European Parliament and of the Council of 20 January 2016 on insurance distribution) (Purves 2023).

It is also a truism to note that the three traditions herein surveyed have all developed a distinct governance model for their own digital economies and technological empires, which is reflected in the regulatory models they have adopted at home and promoted abroad. As recently described by Anu Bradford, the three regulatory models represent “three varieties of digital capitalism, drawing on different theories about the relationship between markets, the state, and individual and collective rights” (Bradford 2023, 7). Under such a view, the US has pioneered a largely market-driven model, imbued with techno-optimism and an uncompromising faith in the free market. China embraces a state-driven model, in which the enthusiasm for technological innovation is channelled by a state that maintains primary responsibility for, and control of, the digital infrastructure. The European regulatory model is distinctly rights-driven, one in which the digital transformation is often slowed down by protections for fundamental rights and democratic values (Bradford 2023, 7–11).

All the above matters in determining how legal traditions look at automated quantification in digital insurance. However, there are other factors that are apparently incidental to insurance, and nevertheless affect the posture of legal systems vis-à-vis social quantification. Although oft-forgotten, such features directly influence the size of the market for automated quantification in insurance. Examples of such features include the rate of language diversity, the size of data markets, the availability of welfare structures, and the level of consumer protections.

Common law jurisdictions and China are, for these purposes, monolingual jurisdictions: whatever the variety of languages spoken across the country, they coexist with a single dominant idiom. This implies that the market for AI-powered programs, which still largely rely on text reading and interpretation, is big and wide. Automated quantification largely relies on textual data: the wider and easier the availability of textual data, the better it works. Commonality of language, from this point of view, represents a built-in advantage. While insurance companies in monolingual jurisdictions can apply BDA to large corpora that span borders, language diversity creates a monumental barrier for intra-European data flows and analytics, meaning that the European market is fragmented internally not only because of national boundaries but also because of its 24 official languages (AI4Lawyers 2021, 27–28).

Data regimes also influence the speed of technological developments in the insurance sector. It is well known that Europe has in place a complex regulatory framework for the collection and treatment of personal data, that China has recently adopted legislation with similar safeguards, and that in the US nothing comparable exists at the federal level (see above, Section 5, as well as Bradford 2023, 324–325, 334–335, 362–364). What should be stressed now is the situation concerning non-personal data. In Europe, a vast amount of such data, often not in digitalized form, is held by public and private entities; only recently have disclosure obligations been enacted through new pieces of EU legislation (Regulation (EU) 2018/1807 on a framework for the free flow of non-personal data in the European Union; Directive (EU) 2019/1024 on open data and the re-use of public sector information (so-called Open Data Directive); Regulation (EU) 2022/868 on European data governance (so-called Data Governance Act), especially articles 3–9; Regulation (EU) 2023/2854 on harmonised rules on fair access to and use of data (so-called Data Act), especially articles 33–36). Although judicial decisions are public documents, until recently only a few European countries, for instance, had digital databases making courts’ opinions publicly available in machine-readable format (D’Andrea et al. 2021). As a result, European insurance companies have only recently begun to explore the possibility of deploying Big Data analytics on large volumes of judicial decisions in order to predict the likelihood of litigation and infer the amount of compensation attainable by the insuree (EIOPA 2019, 28). The situation is entirely different in the US and in China, although for opposite reasons. In the US, the state is minimal and the limitations placed on Big Data processing and sharing are thin (Bradford 2023, 54–55, 362–364). To keep up with our example of judicial decisions, courts’ judgments and opinions in the US have long been digitally collected in private legal information platforms which can be accessed on a subscription basis, and then reworked in combination with other data (Lamdan 2023, 72–93). In China, the state is everywhere, and political control lies at the heart of data policy. Incidentally, this has fostered the creation of enormous data infrastructures. For instance, the rapid technologization of justice services in China has contributed to the monitoring of courts, but has also produced a staggering amount of digital data that are publicly and freely available (Cardillo 2023, 181–187; Chen and Li 2020, 1–58; Ng and Chan 2021, 255–281). More generally, the slow but steady experimentation, from the 2000s onwards, with multiple forms of ‘social credit’ initiatives has resulted in the establishment of many forms of (more or less automated, more or less technologically-enhanced) metrics, producing flows of public and public-private records about virtually everything (Bradford 2023, 87–90; Chen 2019; Daum 2019; Infantino and Wang 2021).

If language and data regimes have an impact on the rate of technological development in insurance practices, the availability of welfare structures and the level of consumer protections impinge on the scope and intensity of automated insurance.

As to the first point, it is quite clear that the space for insurance, especially in the life and health sectors, is inversely correlated with the depth and breadth of public welfare and social insurance schemes: the higher the number of social institutions and mechanisms dealing with statistically frequent and serious injuries, the lower the need for life and health insurance. This is evident in Europe, where the broad and accessible forms of public welfare and social insurance coverage that are available mean that Europeans can often rely on the state to reduce the risks they face (Jutras 2021; Magnus 2003; Oliphant and Wagner 2012; van Boom and Faure 2007). China does not subscribe to European welfare universalism, but, especially in the urban context, the state and local governments have been creating broad mechanisms of social protection to support and provide relief to those in need (Hu 2016; Wang 2017). By contrast, in the US as well as in many other common law jurisdictions, the market is the only alternative, making people much more desperate for other sources of aid, and more exposed to the danger of dubious corporate deals (in general, Nowotny 2021, 11; as to insurance, Jeanningros and McFall 2020, 6–7; Liskow 2023, 108–117; Lupton 2016, 124).

Similar considerations stem from consumer protection laws, which matter particularly in business-to-consumer (B2C) insurance contracts. Consumer protection as such belongs to a different branch of law, and yet the stronger the pro-consumer measures, the higher the cost of doing business and the narrower the room for contractual creativity. In Europe, B2C contracts are subject to EU-derived legislation that curbs corporate freedom and provides legal safeguards for the weaker party by prohibiting a number of corporate practices deemed particularly harmful to consumers (cf. Directive 93/13/EEC on unfair terms in consumer contracts (so-called Unfair Contract Terms Directive) and Directive 2005/29/EC concerning unfair business-to-consumer commercial practices in the internal market (so-called Unfair Commercial Practices Directive)). As happened with privacy, China has followed suit, enacting rules that, at least on paper, are very close to those of the EU (see in particular the Chinese Law on the Protection of Consumer Rights and Interests (LPCR) of 1993 and its subsequent amendments). While similar rules exist in some common law jurisdictions (such as England and Australia: cf. the Consumer Rights Act 2015 in England and Wales and the Australian Consumer Law (Schedule 2 of the Competition and Consumer Act 2010)), consumer protection remains an exception in the institutional structure of the common law, particularly in the US, which is historically based on freedom of contract, equality of arms, and individual private enforcement (Bradford 2023, 7–8; Coleman 2021).

7 Conclusions

The analysis carried out in the previous sections has hopefully shown that the lens of ‘automated quantification’ is a useful one for looking at and understanding current developments in the insurance sector. As discussed in Sections 3 and 4, automating social quantification in insurance opens up the possibility of personally tailored risk assessments and dynamically adjusted premiums. Yet the same trend also raises concerns about the reduction of people’s autonomy in decision-making, the possible perpetuation of historical bias through algorithms, and the financial exclusion of riskier customers.

When evaluating the pros and cons of these scenarios, it is important not to underestimate the relevance of the law and its diversity. On the one hand, historical and present data, as seen in Sections 2 and 5, tell us that in the relationship between social quantification, insurance, and technology, there is always room for the law to intervene. On the other hand, the brief comparative overview of contemporary infrastructures of insurance offered in Section 6 demonstrates that there is no single way to intervene in such relationships, particularly because they occur in contexts that are very different from one another. In some jurisdictions – the best example in this regard being the US – the idea of automated personalisation of insurance prices, continuous (self-)monitoring and real-time adjustment of contractual terms is an everyday reality. The nightmares of insurance-led surveillance capitalism reported in the US, and often echoed by scholarship in other common law jurisdictions, have however little reason to be exported elsewhere. In China, these scenarios involve different sets of actors, being linked to state digital surveillance and intrusive social credit practices, whose impact on insurance, however, remains unclear. In Europe, a number of institutional, linguistic, and legal factors contribute to constraining the disruptive potential of Big Data analytics and AI on the core business of the insurance industry.

The above considerations were developed with regard to the rise of automated quantification in the insurance field. While future research may test whether similar conclusions apply beyond this field, we hope that the approach adopted here, combining insurance studies with literature on comparative law and social quantification, has shown that technology may have a disparate impact in different regions. Too often legal debates take for granted that the prospects and challenges associated with new and emerging technologies call for the same answers everywhere, and that solutions developed in one place can (or should) be easily transplanted somewhere else. Yet problems, opportunities and constraints do not exist in the air; they exist in forms that are highly context-dependent. The growth of privately led surveillance capitalism in the United States raises hopes and fears that are different from those related to State-controlled corporate scoring in China and to the backward-looking, heavily regulated insurance industry in Europe. With regard to insurance, and arguably in other fields as well, more attention to context is needed to avoid nurturing hopes and fears that cannot materialise, and pursuing regulatory reforms and approaches that may not be aligned with the needs of the societies they are expected to serve.


Corresponding author: Marta Infantino, Department of Political Science, University of Trieste, Piazzale Europa 1, 34127 Trieste, Italy, E-mail:

Award Identifier / Grant number: Digital Vulnerability in European Private Law - 20

About the author

Marta Infantino

Marta Infantino (PhD, Palermo University; LL.M. New York University) is Associate Professor of Comparative Law at the University of Trieste. She is Associate Member of the International Academy of Comparative Law and has held visiting professorships in prominent universities in Canada, Colombia, France and Germany. Her research themes include comparative tort law, comparative contract law, digital vulnerability and social quantification, particularly through indicators. Full cv at https://www.units.it/data/curricula/12009.pdf.

Acknowledgments

Earlier drafts of this article were presented at two workshops on ‘European Insurance Contract Law in the Age of Digitalization’, held at the Jagiellonian University of Krakow (Poland) on May 5, 2022 and January 19, 2023 respectively, as well as at the seminar ‘Insuring the Uninsurable – Emerging Risks as a Challenge for the Insurance Sector’, held at the Freie Universität Berlin (Germany) on June 1–2, 2023. The author wishes to thank Christian Armbrüster, Roger Brownsword, Mauro Bussani, Özlem Gürses, Pierpaolo Marano, Kiriaki Noussia, Katarzyna Południak-Gierz, Cristina Poncibò, Teresa Rodríguez de las Heras Ballell, Bariş Soyer, Piotr Tereszkiewicz, and all the participants in the above-mentioned workshops and seminar for their insightful comments, as well as Maitreyi Misra for the language editing. The usual disclaimers apply.

  1. Research funding: The author acknowledges funding from the Italian Ministry of University and Research, under the project ‘Digital Vulnerability in European Private Law’ (DiVE), 2022-2025.

  2. Research ethics: Not applicable.

  3. Author contributions: The author accepts responsibility for the entire content of this manuscript and approves its submission.

  4. Competing interests: The author states no conflict of interest.

  5. Data availability: Not applicable.

References

AI4Lawyers (European Lawyers Foundation [ELF] and the Council of Bars and Law Societies of Europe [CCBE]). 2021. Opportunities and Barriers in the Use of Natural Language Processing Tools in SME Law Practices. The Hague: AI4Lawyers. https://elf-fae.eu/wp-content/uploads/2021/12/Report-on-opportunities-and-barriers-in-the-use-of-NLP-tools-in-SME-law-practices.pdf (accessed February 18, 2024).Search in Google Scholar

Armbrüster, Christian, and Monika Obal. 2014. “Genetic Information and Testing in the Underwriting Process of Insurance Contracts in Germany.” In The Impact of Genetic Data on Medicine and Insurance Practice, edited by C. Botta, and C. Armbrüster, 25–52. Naples: Edizioni Scientifiche Italiane.Search in Google Scholar

Baker, Tom. 2001. “Blood Money, New Money and the Moral Economy of Tort Law in Action.” Law & Society Review 35 (2): 275–319. https://doi.org/10.2307/3185404.Search in Google Scholar

Barry, Laurence, and Arthur Charpentier. 2020. “Personalization as a Promise: Can Big Data Change the Practice of Insurance?” Big Data & Society 7 (2): 1–12. https://doi.org/10.1177/2053951720935143.

Béguinot, Giulia. 2014. “Genetic Data Legislation: The Use of Genetic Data by Insurance Companies in France.” In The Impact of Genetic Data on Medicine and Insurance Practice, edited by C. Botta, and C. Armbrüster, 131–6. Naples: Edizioni Scientifiche Italiane.Search in Google Scholar

Berry, Daina Ramey. 2017. The Price for Their Pound of Flesh: The Value of the Enslaved, from Womb to Grave, in the Building of a Nation. Boston: Beacon Press.

Boehning, H. Christopher. 2023. “USA.” In Insurance and Reinsurance Laws and Regulations, edited by ICLG. https://iclg.com/practice-areas/insurance-and-reinsurance-laws-and-regulations/usa (accessed February 18, 2024).Search in Google Scholar

Borges Fortes, Pedro Rubim. 2023. “Revisiting ‘Justice in Numbers’ in Brazil: Quantified Justice, Managerial Judges, and Numeroids as a Regulatory Technique.” In Comparative Legal Metrics. Quantification of Performance as a Regulatory Technique, edited by M. Bussani, S. Cassese, and M. Infantino, 21–38. Leiden: Brill.10.1163/9789004680944_003Search in Google Scholar

Borges Fortes, Pedro Rubim, Pablo Marcello Baquero, and David Restrepo Amariles. 2023. “Artificial Intelligence Risks and Algorithmic Regulation.” European Journal of Risk Regulation 13 (3): 357–72. https://doi.org/10.1017/err.2022.14.Search in Google Scholar

Bouk, Dan. 2015. How Your Days Became Numbered. Risk and the Rise of the Statistical Individual. Chicago: University of Chicago Press.10.7208/chicago/9780226259208.001.0001Search in Google Scholar

Bradford, Anu. 2023. Digital Empires. The Global Battle to Regulate Technology. New York: Oxford University Press.10.1093/oso/9780197649268.001.0001Search in Google Scholar

Bradford, Anu. 2020. The Brussels Effect: How the European Union Rules the World. New York: Oxford University Press.10.1093/oso/9780190088583.001.0001Search in Google Scholar

Brkan, Maja, and Grégory Bonnet. 2020. “Legal and Technical Feasibility of the GDPR’s Quest for Explanation of Algorithmic Decisions: Of Black Boxes, White Boxes and Fata Morganas.” European Journal of Risk Regulation 11 (1): 18–50. https://doi.org/10.1017/err.2020.10.Search in Google Scholar

Broome, André, and Joel Quirk. 2015. “Governing the World at a Distance: The Practice of Global Benchmarking.” Review of International Studies 41 (5): 819–41. https://doi.org/10.1017/S0260210515000340.Search in Google Scholar

Broome, André, Alexandra Homolar, and Matthias Kranke. 2018. “Bad Science: International Organizations and the Indirect Power of Global Benchmarking.” European Journal of International Relations 24 (3): 514–39. https://doi.org/10.1177/1354066117719320.Search in Google Scholar

Burk, Dan L. 2021. “Algorithmic Legal Metrics.” The Notre Dame Law Review 96 (3): 1147–203. https://scholarship.law.nd.edu/ndlr/vol96/iss3/6/ (accessed February 18, 2024).Search in Google Scholar

Bussani, Mauro, and Marta Infantino. 2015. “Tort Law and Legal Cultures.” American Journal of Comparative Law 63 (1): 77–108. https://doi.org/10.5131/AJCL.2015.0003.Search in Google Scholar

Bussani, Mauro, Sabino Cassese, and Marta Infantino. 2023. “Quantification of Performance as a Regulatory Technique. A Comparative Appraisal.” In Comparative Legal Metrics. Quantification of Performance as a Regulatory Technique, edited by M. Bussani, S. Cassese, and M. Infantino, 323–70. Leiden: Brill.10.1163/9789004680944_017Search in Google Scholar

Campbell, Donald T. 1976. Assessing the Impact of Planned Social Change. Hanover: The Public Affairs Center.Search in Google Scholar

Cappiello, Antonella. 2020. “The Digital (R)Evolution of Insurance Business Models.” American Journal of Economics and Business Administration 12: 1–13. https://doi.org/10.3844/ajebasp.2020.1.13, https://thescipub.com/pdf/ajebasp.2020.1.13.pdf (accessed February 18, 2024).Search in Google Scholar

Cardillo, Ivan. 2023. “Governance and Quantification of Performance in China.” In Comparative Legal Metrics. Quantification of Performance as a Regulatory Technique, edited by M. Bussani, S. Cassese, and M. Infantino, 180–203. Leiden: Brill.10.1163/9789004680944_010Search in Google Scholar

Cevolini, Alberto, and Elena Esposito. 2020. “From Pool to Profile: Social Consequences of Algorithmic Prediction in Insurance.” Big Data & Society 7 (2): 1–11. https://doi.org/10.1177/2053951720939228.Search in Google Scholar

Chamallas, Martha, and Jennifer B. Wriggins. 2010. The Measure of Injury. Race, Gender, and Tort Law. New York: NYU Press.Search in Google Scholar

Chen, Benjamin Minhao, and Zhiyu Li. 2020. “How Will Technology Change the Face of Chinese Justice?” Columbia Journal of Asian Law 34 (1): 1–58. https://doi.org/10.7916/cjal.v34i1.7484.Search in Google Scholar

Chen, Bingzheng, Sharon Tennyson, Maoqi Wang, and Haizhen Zhou. 2013. “The Development and Regulation of China’s Insurance Market: History and Perspectives.” Risk Management and Insurance Review 17: 241–63. https://doi.org/10.1111/rmir.12012.Search in Google Scholar

Chen, Frank, Bing Yan, Ernest Liu. 2023. “China.” In Insurance and Reinsurance Laws and Regulations, edited by ICLG. https://iclg.com/practice-areas/insurance-and-reinsurance-laws-and-regulations/china (accessed February 18, 2024).Search in Google Scholar

Chen, Jiahong. 2019. “Putting ‘Good Citizens’ in ‘The Good Place’.” EUI Working Paper RSCAS 94: 22–4. https://doi.org/10.17176/20190621-122918-0.Search in Google Scholar

Clark, Geoffrey. 2010. “The Slave’s Appeal: Insurance and the Rise of Commercial Property.” In The Appeal of Insurance, edited by G. Clark, G. Anderson, C. Thomann, and J.-M. Graf von den Schulenburg, 52–74. Toronto: University of Toronto Press.Search in Google Scholar

Clark, Geoffrey. 1999. Betting on Lives. The Culture of Life Insurance in England, 1695–1775. Manchester: Manchester University Press.

Clark, Geoffrey, Gregory Anderson, Christian Thomann, and J.-Matthias Graf von den Schulenburg, eds. 2010. The Appeal of Insurance. Toronto: University of Toronto Press.10.3138/9781442685888Search in Google Scholar

Coleman, Brooke D. 2021. “Endangered Claims.” William and Mary Law Review 63(2): 345–405. https://scholarship.law.wm.edu/cgi/viewcontent.cgi?article=3920&context=wmlr (accessed February 18, 2024).Search in Google Scholar

Couldry, Nick, and Ulises A. Mejias. 2019. The Costs of Connection. How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford: Stanford University Press.10.1515/9781503609754Search in Google Scholar

Cyber Administration of China. 2023. “Interim Measures for Generative Artificial Intelligence Service Management” (生成式人工智能服务管理暂行办法). http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm (accessed February 18, 2024).Search in Google Scholar

Cyber Administration of China. 2022. “Regulations on the In-Depth Synthesis Management of Internet Information Services” (互联网信息服务深度合成管理规定). http://www.cac.gov.cn/2022-12/11/c_1672221949318230.htm (accessed February 18, 2024).Search in Google Scholar

D’Andrea, Sabrina, Nikita Divissenko, Maria Fanou, Anna Krisztián, Jaka Kukavica, Nastazja Potocka-Sionek, and Mathias Siems. 2021. “Asymmetric Cross-citations in Private Law: An Empirical Study of 28 Supreme Courts in the EU.” Maastricht Journal of European and Comparative Law 28 (4): 498–534. https://doi.org/10.1177/1023263X211014693.Search in Google Scholar

Daston, Lorraine. 1998. Classical Probability in the Enlightenment. Princeton: Princeton University Press.Search in Google Scholar

Daum, Jeremy. 2019. “Untrustworthy: Social Credit Isn’t What You Think It Is.” EUI Working Paper RSCAS 94: 39–41. https://doi.org/10.17176/20190627-112616-0.Search in Google Scholar

De Vries, Katja. 2021. “Transparent Dreams (Are Made of This): Counterfactuals as Transparency Tools in ADM.” Critical Analysis of Law 8(1): 122–38. https://doi.org/10.33137/cal.v8i1.36283.Search in Google Scholar

Desrosières, Alain. 2000. La politique des grands nombres. Histoire de la raison statistique, 2nd ed. Paris: La Découverte.Search in Google Scholar

Drechsler, Laura, and Juan Carlos Benito Sánchez. 2018. “The Price Is (Not) Right: Data Protection and Discrimination in the Age of Pricing Algorithms.” European Journal of Law & Technology 9: 1–23. https://ejlt.org/index.php/ejlt/article/view/631/853 (accessed February 18, 2024).Search in Google Scholar

Ebers, Martin, Veronica R.S. Hoch, Frank Rosenkranz, Hannah Ruschemeier, and Björn Steinrötter. 2021. “The European Commission’s Proposal for an Artificial Intelligence Act – A Critical Assessment by Members of the Robotics and AI Law Society (RAILS).” Multidisciplinary Scientific Journal 4 (4): 589–603. https://doi.org/10.3390/j4040043.Search in Google Scholar

Eling, Martin, and Martin Lehmann. 2018. “The Impact of Digitalization on the Insurance Value Chain and the Insurability of Risks.” The Geneva Papers on Risk and Insurance 43: 359–96. https://doi.org/10.1057/s41288-017-0073-0.Search in Google Scholar

Espeland, Wendy Nelson, and Michael Sauder. 2007. “Rankings and Reactivity: How Public Measures Recreate Social Worlds.” American Journal of Sociology 113 (1): 1–40. https://doi.org/10.1086/517897.Search in Google Scholar

Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press.Search in Google Scholar

European Insurance and Occupational Pensions Authority [EIOPA]. 2021. Artificial Intelligence Governance Principles: Towards Ethical and Trustworthy Artificial Intelligence in the European Insurance Sector. Frankfurt: EIOPA. https://www.eiopa.europa.eu/system/files/2021-06/eiopa-ai-governance-principles-june-2021.pdf (accessed February 18, 2024).Search in Google Scholar

European Insurance and Occupational Pensions Authority [EIOPA]. 2019. Big Data Analytics in Motor and Health Insurance. Frankfurt: EIOPA. https://register.eiopa.europa.eu/Publications/EIOPA_BigDataAnalytics_ThematicReview_April2019.pdf (accessed February 18, 2024).Search in Google Scholar

Ewald, François. 2019. “The Values of Insurance” (Shana Cooperstein and Benjamin J. Young, trans.). Grey Room 74: 120–45. https://doi.org/10.1162/grey_a_00266.

Franks, Esther, Bianca Lee, and Hui Xu. 2024. “Report: China’s New AI Regulations.” Global Privacy Law Review 5 (1): 43–9. https://doi.org/10.54648/gplr2024007.Search in Google Scholar

Goodhart, Charles. 1981. “Problems of Monetary Management: The U.K. Experience.” In Inflation, Depression, and Economic Policy in the West, edited by A. S. Courakis, 111–44. Lanham: Rowman & Littlefield.Search in Google Scholar

High-Level Expert Group on Artificial Intelligence. 2019. Ethics Guidelines for Trustworthy Artificial Intelligence. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed February 18, 2024).Search in Google Scholar

Hildebrandt, Mireille. 2018. “Algorithmic Regulation and the Rule of Law.” Philosophical Transactions of the Royal Society A: Mathematical, Physical & Engineering Sciences 376 (2128): 20170355. https://doi.org/10.1098/rsta.2017.0355.Search in Google Scholar

Hildebrandt, Mireille. 2015. Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology. Cheltenham: Edward Elgar.10.4337/9781849808774.00016Search in Google Scholar

Hu, Aiqun. 2016. China’s Social Insurance in the Twentieth Century: A Global Historical Perspective. Leiden: Brill.10.1163/9789004307315Search in Google Scholar

Infantino, Marta, and Mauro Bussani. 2023. “Rule by Metrics: Performance, Quantification, and the Law.” European Journal of Comparative Law and Governance 11: 1–51 https://doi.org/10.1163/22134514-bja10066.Search in Google Scholar

Infantino, Marta, and Weiwei Wang. 2021. “Challenging Western Legal Orientalism: A Comparative Analysis of Chinese Municipal Social Credit Systems.” European Journal of Comparative Law and Governance 8 (1): 46–85. https://doi.org/10.1163/22134514-bja10011.

Infantino, Marta, and Weiwei Wang. 2019. “Algorithmic Torts: A Prospective Comparative Overview.” Transnational Law & Contemporary Problems 28 (2): 309–62. https://mobile.heinonline.org/HOL/LandingPage?handle=hein.journals/tlcp28&div=12&id=&page=.Search in Google Scholar

Institute of Electrical and Electronics Engineers [IEEE]. 2018. Global Initiative on Ethics of Autonomous and Intelligent Systems. https://standards.ieee.org/industry-connections/ec/autonomous-systems/ (accessed February 18, 2024).Search in Google Scholar

International Association of Insurance Supervisors [IAIS]. 2020. Issues Paper on the Use of Big Data Analytics in Insurance. https://www.iaisweb.org/page/supervisory-material/issues-papers (accessed February 18, 2024).Search in Google Scholar

International Organization for Standardisation [ISO]. 2023. ISO/IEC JTC 1/SC 42 – Artificial Intelligence. https://www.iso.org/committee/6794475.html (accessed February 18, 2024).Search in Google Scholar

Jeanningros, Hugo, and Liz McFall. 2020. “The Value of Sharing: Branding and Behaviour in a Life and Health Insurance Company.” Big Data & Society 7 (2): 1–15. https://doi.org/10.1177/2053951720950350.Search in Google Scholar

Jerven, Morten. 2013. Poor Numbers. Ithaca: Cornell University Press.Search in Google Scholar

Joly, Yann, Charles Dupras, Miriam Pinkesz, Stacey A. Tovino, and Mark A. Rothstein. 2020. “Looking beyond GINA: Policy Approaches to Address Genetic Discrimination.” Annual Review of Genomics and Human Genetics 21: 491–507. https://doi.org/10.1146/annurev-genom-111119-011436.Search in Google Scholar

Jutras, Daniel. 2021. “Alternative Compensation Schemes from a Comparative Perspective.” In Comparative Tort Law. Global Perspectives, 2nd ed., edited by M. Bussani, and A. J. Sebok, 140–58. Cheltenham: Edward Elgar.10.4337/9781789905984.00014Search in Google Scholar

Lamdan, Sarah. 2023. Data Cartels. The Companies That Control and Monopolize Our Information. Stanford: Stanford University Press.10.1515/9781503633728Search in Google Scholar

Landsberger, Henry A. 1958. Hawthorne Revisited. Ithaca: Cornell University Press.Search in Google Scholar

Latzer, Michael. 2022. “The Digital Trinity—Controllable Human Evolution—Implicit Everyday Religion. Characteristics of the Socio-Technical Transformation of Digitalization.” Kölner Zeitschrift für Soziologie und Sozialpsychologie 74: 331–54. https://doi.org/10.1007/s11577-022-00841-8.Search in Google Scholar

Liskow, Richard G. 2023. U.S. Insurance Regulation. A Primer. Cheltenham: Edward Elgar.Search in Google Scholar

Lupton, Deborah. 2016. The Quantified Self: A Sociology of Self-Tracking. Cambridge: Polity.Search in Google Scholar

Lynskey, Orla, Hans-W. Micklitz, Peter Rott. 2021. “Part II. Personalised Pricing and Personalised Commercial Practices.” In EU Consumer Protection 2.0. Structural Asymmetries in Digital Consumer Markets, edited by N. Helberger, 92-145. Brussels: BEUC. https://www.beuc.eu/publications/beuc-x-2021-018_eu_consumer_protection.0_0.pdf (accessed February 18, 2024).Search in Google Scholar

Magnus, Ulrich, eds. 2003. The Impact of Social Security Law on Tort Law. Cham: Springer.10.1007/978-3-7091-6055-8Search in Google Scholar

Marelli, Luca, Lievevrouw Elisa, and Van Hoyweghen Ine. 2020. “Fit for Purpose? The GDPR and the Governance of European Digital Health.” Policy Studies 41 (5): 447–67. https://doi.org/10.1080/01442872.2020.1724929.Search in Google Scholar

Matthews, Robert. 2000. “Storks Deliver Babies (P= 0.008).” Teaching Statistics 22 (2): 36–8. https://doi.org/10.1111/1467-9639.00013.Search in Google Scholar

Mau, Steffen. 2019. The Metric Society: On the Quantification of the Social. Cambridge: Polity.Search in Google Scholar

McFall, Liz, and Liz Moor. 2018. “Who, or What, Is Insurtech Personalizing? Persons, Prices and the Historical Classifications of Risk.” Distinktion: Journal of Social Theory 19: 193–213. https://doi.org/10.1080/1600910X.2018.1503609.Search in Google Scholar

McFall, Liz, Gert Meyers, and Ine Van Hoyweghen. 2020. “The Personalisation of Insurance: Data, Behavior and Innovation.” Big Data & Society 7 (2): 1–11. https://doi.org/10.1177/2053951720973707.Search in Google Scholar

McGrogan, David. 2016. “The Problem of Causality in International Human Rights Law.” International and Comparative Law Quarterly 65: 615–44. https://doi.org/10.1017/S002058931600021X.

McGurk, Brendan. 2018. Data Profiling and Insurance Law. Oxford: Hart.10.5040/9781509920648Search in Google Scholar

Merry, Sally Engle. 2016. The Seductions of Quantification. Measuring Human Rights, Gender Violence, and Sex Trafficking. Chicago: Chicago University Press.10.7208/chicago/9780226261317.001.0001Search in Google Scholar

Morozov, Evgeny. 2013. To Save Everything, Click Here. New York: Public Affairs.Search in Google Scholar

Mulhern, John, Sara Manske, and Robert Mancuso. 2023. “USA: A Regulatory Overview of the World’s Largest Insurance Market.” In Research Handbook on International Insurance Law and Regulation, 2nd ed., edited by J. Burling, and K. Lazarus, 708–27. Cheltenham: Edward Elgar. https://doi.org/10.4337/9781802205893.00039.

Neff, Gina, and Dawn Nafus. 2016. Self-Tracking. Boston: MIT.10.7551/mitpress/10421.001.0001Search in Google Scholar

New York Department of Financial Services. 2019. Insurance Circular Letter No. 1. https://www.dfs.ny.gov/industry_guidance/circular_letters/cl2019_01 (accessed February 18, 2024)Search in Google Scholar

Ng, Kwai H., and Peter C.H. Chan. 2021. ““What Gets Measured Gets Done”: Metric Fixation and China’s Experiment in Quantified Judging.” Asian Journal of Law and Society 8 (2): 255–81. https://doi.org/10.1017/als.2020.28.Search in Google Scholar

Nowotny, Helga. 2021. In AI We Trust. Power, Illusion and Control of Predictive Algorithms. Cambridge: Polity.

Oliphant, Ken, and Gerhard Wagner, eds. 2012. Employers’ Liability and Workers’ Compensation. Berlin: de Gruyter.10.1515/9783110270211Search in Google Scholar

O’Neil, Catherine. 2016. Weapons of Math Destruction. New York: Crown.Search in Google Scholar

Organisation for Economic Cooperation and Development [OECD]. 2019. OECD Principles on Artificial Intelligence. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 (accessed February 18, 2024).Search in Google Scholar

Południak-Gierz, Katarzyna, and Piotr Tereszkiewicz. 2023. “Digitalization’s Big Promise and Peril: The Personalization of Insurance Contracts and its Legal Consequences.” In Law and Economics of the Digital Transformation, edited by K. Mathis, and A. Tor, 33–40. Cham: Springer.10.1007/978-3-031-25059-0_3Search in Google Scholar

Porter, Theodore M. 1995. Trust in Numbers. The Pursuit of Objectivity in Science and Public Life. Princeton: Princeton University Press. https://doi.org/10.1515/9780691210544.

Pothier, Robert Joseph. 1810; original edition 1777. Traité du contrat d’assurance. Paris: Roux-Rambert.Search in Google Scholar

Prainsack, Barbara, and Ine Van Hoyweghen. 2020. “Shifting Solidarities: Personalisation in Insurance and Medicine.” In Shifting Solidarities. Trends and Developments in European Societies, edited by I. Van Hoyweghen, V. Pulignano, and G. Meyers, 127–51. London: Palgrave Macmillan.10.1007/978-3-030-44062-6_7Search in Google Scholar

Prince, Anya E. R., and Daniel Schwarcz. 2020. “Proxy Discrimination in the Age of Artificial Intelligence and Big Data.” Iowa Law Review 105 (3): 1257–318. https://ilr.law.uiowa.edu/sites/ilr.law.uiowa.edu/files/2023-02/Prince_Schwarcz.pdf (accessed February 18, 2024).

Purves, Robert. 2023. “Europe: The Architecture and Content of EU Insurance Regulation.” In Research Handbook on International Insurance Law and Regulation, 2nd ed., edited by J. Burling, and K. Lazarus, 675–707. Cheltenham: Edward Elgar. https://doi.org/10.4337/9781802205893.00038.

Rodríguez de las Heras Ballell, Teresa. 2023. “Trust in an ‘Omnimetric Society’? Reputational Systems in Platforms as Tools for Assessing Contractual Performance and Applying Remedies.” In Comparative Legal Metrics. Quantification of Performance as a Regulatory Technique, edited by M. Bussani, S. Cassese, M. Infantino, 266–83. Leiden: Brill.10.1163/9789004680944_014Search in Google Scholar

Savitt, Todd L. 1977. “Slave Life Insurance in Virginia and North Carolina.” Journal of Southern History 43 (4): 583–600. https://doi.org/10.2307/2207007.Search in Google Scholar

Sax, Marijn. 2021. Between Empowerment and Manipulation: The Ethics and Regulation of For-Profit Health Apps. The Hague: Kluwer.Search in Google Scholar

Selbst, Andrew D. 2020. “Negligence and AI’s Human Users.” Boston University of Law Review 100: 1315–76. https://www.bu.edu/bulawreview/files/2020/09/SELBST.pdf (accessed February 18, 2024).Search in Google Scholar

Selbst, Andrew D., and Solon Barocas. 2018. “The Intuitive Appeal of Explainable Machines.” Fordham Law Review 87 (3):1087–139. https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5569&context=flr (accessed February 18, 2024).Search in Google Scholar

Soyer, Bariş. 2022. “Use of Big Data Analytics and Sensor Technology in Consumer Insurance Context: Legal and Practical Challenges.” The Cambridge Law Journal 81 (1): 165–94. https://doi.org/10.1017/S0008197322000010.Search in Google Scholar

Spencer, Shaun B. 2020. “The Problem of Online Manipulation.” University of Illinois Law Review 2020: 959–1005. https://scholarship.law.umassd.edu/fac_pubs/236/ (accessed February 18, 2024).Search in Google Scholar

State of California, Department of Insurance. 2015. Notice Regarding Unfair Discrimination in Rating: Price Optimization. https://www.insurance.ca.gov/0250-insurers/0300-insurers/0200-bulletins/bulletin-notices-commiss-opinion/upload/PriceOptimization.pdf (accessed February 18, 2024).Search in Google Scholar

Talesh, Shauhin S. A., and Bryan Cunningham. 2021. “The Technologization of Insurance: An Empirical Analysis of Big Data and Artificial Intelligence’s Impact on Cybersecurity and Privacy.” Utah Law Review 5: 967–1027. https://doi.org/10.26054/0d-9y6k-1t55.Search in Google Scholar

Thiveaud, Jean-Marie. 1989. “Naissance de l’assurance-vie en France.” Revue d’Economie Financiere 11: 318–33. https://doi.org/10.3406/ecofi.1989.1665. https://www.persee.fr/doc/ecofi_0987-3368_1989_num_11_3_1665.Search in Google Scholar

Tin, Louis-Georges. 2014. “Qui a peur des statistiques ethniques.” In Stat-Activisme. Comment lutter avec des nombres, edited by I. Bruno, E. Didier, and J. Prévieux, 155–66. Paris: La Découverte.Search in Google Scholar

Ulbricht, Lena, and Karen Yeung. 2022. “Algorithmic Regulation: A Maturing Concept for Investigating Regulation of and through Algorithms.” Regulation & Governance 16 (1): 3–22. https://doi.org/10.1111/rego.12437.Search in Google Scholar

van Boom, Willem H. and Michael G. Faure (eds). 2007. Shifts in Compensation between Private and Public Systems. Cham: Springer.10.1007/978-3-211-71554-3Search in Google Scholar

van Niekerk, J. P. 1998. The Development of the Principles of Insurance Law in the Netherlands from 1500 to 1800. Cape Town: Juta&co.Search in Google Scholar

Verbelen, Roel, Katrien Antonio, and Gerda Claeskens. 2018. “Unraveling the Predictive Power of Telematics Data in Car Insurance Pricing.” Journal of the Royal Statistical Society: Series C (Applied Statistics) 67 (5): 1275–304. https://doi.org/10.1111/rssc.12283.

Wachter, Sandra, Brent Mittelstadt, Chris Russell. 2018. “Counterfactual Explanations without Opening the Black Box.” Harvard Journal of Law and Technology 31 (2):841–87. https://jolt.law.harvard.edu/assets/articlePDFs/v31/Counterfactual-Explanations-without-Opening-the-Black-Box-Sandra-Wachter-et-al.pdf (accessed February 18, 2024).Search in Google Scholar

Wang, Yanzhong. 2017. Social Security in China: On the Possibility of Equitable Distribution in the Middle Kingdom. Cham: Springer.10.1007/978-981-10-5643-7Search in Google Scholar

Wiggins, Benjamin. 2020. Calculating Race: Racial Discrimination in Risk Assessment. New York: Oxford University Press.10.1093/oso/9780197504000.001.0001Search in Google Scholar

Willis, Lauren E. 2020. “Deception by Design.” Harvard Journal of Law and Technology 34(1):115–90. https://jolt.law.harvard.edu/assets/articlePDFs/v34/3.-Willis-Images-In-Color.pdf (accessed February 18, 2024).Search in Google Scholar

Yang, Carrie. 2023. “China: Insurance Regulation in a Rapidly Evolving Market.” In Research Handbook on International Insurance Law and Regulation, 2nd ed., edited by J. Burling, and K. Lazarus, 787–809. Cheltenham: Edward Elgar. https://doi.org/10.4337/9781802205893.00042.

Yeung, Karen. 2016. “Hypernudge? Big Data as a Mode of Regulation by Design.” Information, Communication & Society 20: 118–36. https://doi.org/10.1080/1369118X.2016.1186713.Search in Google Scholar

Yeung, Karen, and Martin Lodge, eds. 2019. Algorithmic Regulation. New York: Oxford University Press.10.1093/oso/9780198838494.001.0001Search in Google Scholar

Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism. New York: Public Affairs.Search in Google Scholar

Received: 2024-01-18
Accepted: 2024-03-12
Published Online: 2024-03-29
Published in Print: 2024-04-25

© 2024 the author(s), published by De Gruyter on behalf of Zhejiang University

This work is licensed under the Creative Commons Attribution 4.0 International License.
