Article Open Access

Evaluating GDPR right to information implementation in automated insurance decisions

  • Timo Jakobi, Salih Arslan and Patrick Harms
Published/Copyright: April 8, 2025

Abstract

Automated decision-making algorithms are increasingly prevalent in consumer-facing industries, particularly in insurance risk assessments. The traceability of these decisions is crucial for trust, acceptance, and individual autonomy. While the General Data Protection Regulation (GDPR) grants individuals the right to information about such decisions, the implementation of this right remains under-researched from a usable privacy perspective. This study employs a qualitative exploratory approach with 12 participants exercising their right to be informed about automated decision-making with German household insurers. Through interviews and observations, we investigate consumer requirements and prevailing implementation practices. Our findings unveil actual process design practices that may undermine the usability and efficacy of this data subject right. By identifying these concerns and relating them to existing deceptive patterns, our research contributes to usable security by alerting process designers, data protection authorities, and enterprises to the significance of user-centric implementations. Furthermore, this study advances research on GDPR data subject rights, emphasizing the need for secure and usable interfaces in the context of automated decision-making systems. Our work highlights the practical challenges of achieving usable implementations of regulatory compliance in the realm of data protection.

1 Introduction

The deployment of algorithmic decision-making systems in both consumer and business environments has become widespread. These systems are utilized in various applications, including spam filters, credit card fraud detection, search engines, news trend analysis, market segmentation, advertising, credit scoring, and insurance risk assessments. These algorithms process personal data of individuals to generate one or more outputs, commonly referred to as classifications or classification decisions. 1 Insurance companies seek to leverage automated classification technologies to calculate individualized premium rates, as studies show enormous efficiency gains for insurers. 2 Consequently, consumers are or will soon be required to provide personal data to obtain insurance coverage and to have their premiums calculated by an algorithm. In Germany alone, approximately 50 million customers hold household insurance policies, suggesting that a significant portion of the population has likely been subject to automated decision-making processes. 3 This proliferation of algorithmic decision-making systems raises important questions about data privacy, consumer rights, and the transparency of automated processes, particularly in sectors like insurance where such decisions can have significant financial implications for individuals.

The increasing processing of personal data not only offers economic benefits for consumers and businesses but also poses risks related to power dynamics and privacy. 4, 5 As data demand and processing complexity grow, consumers risk losing oversight of their shared data and its processing. 6, 7 Across industries, consumers already report a perceived loss of control over the use of their personal data and a lack of transparency regarding processing practices by the respective controllers. 8, 9 The General Data Protection Regulation (GDPR) aims to counterbalance this by formulating transparency and control rights for consumers, with the goal of maintaining trust in an increasingly data-driven economy. 10 The GDPR provides data subjects with an instrument to control data collection and processing. However, it deliberately remains vague in its requirements for design implementation, staying at the level of abstract principles such as transparency, comprehensibility, and easy accessibility (Art. 12 Para. 1 Sentence 1 GDPR). Due to a lack of experience and case law, an undefined playing field emerges: How can and should designers structure the information process? These questions become increasingly relevant as the GDPR requires the technical and organizational protective measures taken by organizations when processing personal data to be “effective” (Art. 25 GDPR). While established design guidelines do not yet exist, initial empirical studies on information requests by Mahieu et al. 11 and Norris et al. 12 show that, from the data subjects’ perspective, the right to information does not yet effectively enable them “to obtain transparency about the processing of their own personal data”. 11 This study brings to light existing design practices for the right to information about automated decision-making. We identify poor design practices that run counter to the aim of this data subject right, namely to easily gain transparency about the nature and extent of automated decision-making, and relate them to existing deceptive patterns. 13, 14, 15, 16 With our qualitative-exploratory study, we aim to provide practical insights that sensitize companies, supervisory authorities, and UX designers. For this study, we asked 12 participants to submit an information request to their household insurer, as this is a widespread form of insurance in Germany. We discussed the process of requesting and receiving information as well as the received datasets with the participants in semi-structured interviews. In total, we were able to qualitatively examine information requests to nine of the 10 largest insurance companies in Germany, among others. 17

Our study contributes to:

  1. Empirically enriching research into the abstract concept of “transparency” with insights from the insurance domain.

  2. Sensitizing the UX community to consumer-friendly design in data protection processes.

  3. Clarifying GDPR’s abstract requirements to guide authorities and companies in achieving transparency objectives for automated decisions.

2 Privacy and data protection in insurance

Examining data protection concepts such as effectiveness in accessibility, transparency, and information control is central to “Usable Security and Privacy” research within Human-Computer Interaction. Empirical studies on the implementation of these concepts are valuable not only for researchers and designers but also for legal scholars. They provide empirically-grounded evaluations of processes and design solutions regarding user transparency and control. This section addresses the current state of information obligations concerning automated decision-making in the insurance sector and presents the resulting research question.

2.1 GDPR: transparency and right to information about automated decision making

The right to access personal data collected is set out in Article 15(1)(h) GDPR. It states that data subjects have the right to gain information about

the existence of automated decision-making, including profiling, [..] and, [..] meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject. 18

Article 12 GDPR further sets out general rules that apply to the obligation to provide such information. Accordingly, the information must be provided

[..] in a concise, transparent, intelligible and easily accessible form, using clear and plain language [..]

While there is some research on the right to access in general (see esp. Pins et al. 19 ), the specifics of automated decision-making have not yet been targeted, despite increasing automation. A person who is subject to automated decision-making in a particular matter, according to Article 15(1)(h), has the right to know that such automated decision-making exists and, in this case, additionally has the right to obtain:

  1. meaningful information about the logic involved, as well as

  2. the significance and the envisaged consequences of such processing for the data subject.

Obtaining information about this matter is thus part of the right of access, which, however, covers further information as well, including the purposes of processing, the categories of data processed, and the recipients.

Given the lack of clear judicial precedent and guidelines on the information process under the GDPR, Wachter et al. 20 interpret the right to explanation for automated decision-making more as a right to information. While Temme 21 acknowledges that algorithms can be highly complex, the author argues that increased transparency could make them more compliant with data protection principles. However, Dexe et al. 22 contend that transparency does not imply unrestricted access to all information. Instead, transparency should enable data subjects to assess whether the processing is appropriate and lawful. This approach aligns with the interests of data subjects, as they can only process a limited amount of information. 23 Consequently, Wauters et al. 24 emphasize the importance of usable privacy techniques, such as visualization methods, to present privacy information more concisely and comprehensibly.

A few empirical studies on the meaningfulness and usability of information request processes have already been conducted. These include, among others, the study by Mahieu et al., 11 the multi-country study by Norris et al., 12 and the study commissioned by the German Federal Ministry of Justice and Consumer Protection to investigate online services. 1 These studies came to very different conclusions. In some cases, non-compliance with data protection regulations was found. In some instances, the controllers did not respond at all, or their responses did not contribute to more (perceived) transparency for consumers. 12, 19, 22, 25 The ambiguous research results show that there is a need for further research in this area. Therefore, the present study focuses particularly on the question of how the process of providing information can be designed in a more user-friendly manner, as Pins et al. 19 first prototypically outlined.

2.2 Deceptive patterns in GDPR

Deceptive patterns in UX design refer to manipulative user interface strategies that intentionally mislead users into making choices that benefit service providers, often at the users’ expense. These patterns have gained attention due to their ethical implications and prevalence across various digital platforms. The following sections outline key aspects of deceptive patterns based on recent research.

Nie et al. 26 classify deceptive patterns into 64 distinct types, including Complete Obstruction and Obfuscation, which hinder users from performing desired actions such as disabling an account. 27 They also propose a comprehensive framework, the Deceptive Pattern Analysis Framework (DPAF), to categorize these patterns and assess their impact on users. 26

Research indicates that deceptive patterns can adversely affect user mental health, particularly among vulnerable demographics, leading to feelings of frustration and helplessness. 28 The pervasive use of deceptive patterns necessitates further research and potential regulation, especially in the social media industry, to protect user interests and promote ethical design practices. 27 , 29

Research on deceptive patterns in data protection, particularly concerning data subject rights, has gained significant attention in recent years. The European Data Protection Board (EDPB) has played a crucial role in this area by publishing guidelines on recognizing and avoiding deceptive patterns, e.g. for social media. 30 These guidelines identify six categories of deceptive patterns: overloading (providing too many options or information), skipping (designing to make users overlook aspects), stirring (appealing to emotions or using nudges), obstructing (hindering or blocking users in their process), fickle (using design to make navigation harder), and left in the dark (hiding information or controls). Each category represents different ways in which user interfaces can manipulate or hinder users’ ability to exercise their data protection rights effectively. The guidelines also provide examples and illustrations of design patterns that violate GDPR provisions, offering recommendations specifically for social network operators to ensure compliance.

2.3 The role of transparency and automation in the insurance industry

Insurance companies, offering intangible services that are delivered in the future, rely heavily on a trusting relationship with their customers. Industry experts often refer to this as a “business of trust”, since insurance is fundamentally based on promises. Insurers sell contingent promises to pay, often at a distant and unspecified point in the future. Transparency in decision-making mechanisms is a key factor in enhancing this trust. 31

Household insurance is widely prevalent in Germany and is offered by various market participants, allowing for broad coverage of major insurance providers. Moreover, household insurance is a relatively simple form of insurance: The contract structure is essentially similar across different service providers, enabling good comparability. Its prevalence also eases participant recruitment. At the same time, information about household insurance is not based on highly sensitive data, such as health data, ensuring that willingness to disclose data for the study is not unnecessarily limited. Due to the increasing demand for data and the growing complexity of calculations, the societal relevance of this topic is rising: Through the right to information, affected individuals should be able to comprehend the calculation of their household insurance premium. In short, this study focuses on household insurance due to its prevalence, its comparability, and the balance it offers between data sensitivity from the participants’ perspective and societal relevance, making it an ideal subject for examining transparency in automated decision-making within the insurance sector. Our research question thus is:

What current design practices in information request processes contribute to or hinder consumer understanding of automated decision-making systems and may thus pose deceptive patterns?

3 Methods

To determine how the right to information about automated decision-making is currently implemented in the use case of household insurance in Germany, we conducted a study with 12 participants and 11 insurance companies. In the following, we explain the details of the study and our participants.

3.1 Conduct

Methodologically, like previous studies, 19 we divide the process of exercising the right to information about automated decision-making into the five steps of (1) finding, (2) defining, (3) requesting, (4) receiving, and (5) analyzing. Once a request is made, organizations have one month to respond, as per Art. 12(3) GDPR. Accordingly, our study is divided into two main interventions:

  1. Observation of finding, defining and requesting information with subsequent interview

  2. Receiving and joint analysis of the data followed by an interview.

In the first phase of the experiment, we pursued three objectives:

  1. Preliminary interview

    We addressed general attitudes and expectations regarding data protection, particularly the right to information about automated decision-making.

  2. Application

    We observed participants as they attempted to submit an information request.

  3. Evaluation

    Finally, we discussed the process up to the point of application submission with the participants.

Initially, study participants were asked about demographics, their self-assessed technical affinity, and their attitudes towards data protection in general. At this point, households were already aware from recruitment that we intended to exercise the right to information about the existence and logic of automated decision-making. In the second step, we tasked them with independently attempting to assert this right with their household insurance provider. No specific instructions were given regarding the procedure. We encouraged participants beforehand – and if necessary, during observation – to explicitly vocalize their thoughts in a think-aloud manner during the process. For the final evaluation, questions were prepared in a semi-structured interview guide. After the information request was successfully submitted, we discussed with the participants their assessment of the information process up to that point.

When searching for relevant information to make the request, participants were only instructed to use the internet or their own documents as search sources. Telephone inquiries were intentionally not considered. The information request was sent using the study participants’ email addresses, which were also known to the respective insurance companies. This was intended to avoid potential identity verification queries from the insurance company and to ensure the process was treated as a normal customer inquiry. To obtain more comparable results, all insurance companies received a template request with the same wording.

The second phase comprised the participants receiving their answer and a joint investigation of it. After sending the email, we had to wait for a response from the insurers. Accordingly, we asked the households to inform us upon receiving a reply to schedule a second appointment, or, if there were follow-up questions from the insurance company, to clarify these together. We asked the households not to look into the response yet, but to do this together with us. This way, the receipt of the data itself and the initial reaction to it could also be observed. The objectives in the second phase were:

  1. Data receipt

    We observed and discussed topics related to the phase of receiving, such as data retrieval and response style.

  2. Data analysis

We observed and discussed the participants’ own analysis of the received information and how well it supported their attempts at explaining the decision-making. We also analyzed the provided information from the perspective of UX design guidelines.

The requests for information to the insurance companies took place in spring 2021. Accordingly, all requests for information were sent out in the same period in which the interviews were conducted. All 12 interviews took place at the premises of the study participants. During the interviews, the participants had their own household insurance documents at hand, and their preferred end device (smartphone, tablet, or laptop) was available to them in order to create a familiar working environment. During the interviews, both the sound and the screen of the participants’ computers were recorded. The pre-installed Windows 10 screen recording function was used for the screen recording. The participants’ consent was obtained before the interview, and reference was made to the pseudonymization of the data (Table 1).

Table 1:

Demographic information of participants.

ID Gender Age Tech savviness Insurance
T1 f 69 Low V1
T2 f 25 High V2
T3 m 72 Low V3
T4 f 24 High V4
T5 f 58 Mid V5
T6 f 43 Mid V6
T7 f 52 Mid V7
T8 f 32 High V8
T9 f 52 Low V9
T10 m 28 High V9
T11 m 55 High V10
T12 m 56 High V11

3.2 Participants

A total of 12 people (8 female, 4 male) took part in the study. The average age was 47 (min. 24, max. 72). The technical affinity of the participants was mixed (3 low, 3 medium, 6 high). None of the respondents were data protection, IT, or insurance experts. The prerequisites for participation in the study were the possession of household contents insurance and the willingness to be interviewed with a subsequent request for information to the insurance company. Only two participants were insured with the same insurance company, so that a total of 11 insurance companies were covered. Participation was voluntary and was not compensated.

3.3 Analysis

Qualitative content analysis was used to evaluate the data collected. This is an evaluation method that processes texts that arise during data collection in the context of social science research projects. 32 The aim is to identify, condense and describe phenomena and concepts from the area of the research question(s) on the basis of a defined database (here: the interviews conducted). The audio and screen recordings made during the interviews serve as data material. These were then transcribed. Deductive coding based on Mayring’s 32 qualitative content analysis was used to analyze and categorize the transcripts, in which two of the authors coded the material independently of each other and combined the existing codes in a second step. The aim of the analysis was to assess the usability of the existing information processes of the insurance companies and to identify possible deceptive or bright patterns. The responses of the insurance companies themselves were not analyzed systematically, but rather served as a tool to spark discussion and reflection among participants. We describe the general characteristics of the answers to the right to information on automated decision-making in Section 4.4.

4 Results

This section presents the results of the empirical study. First, the results of the semi-structured interviews are presented, following the chronological order of the information process. This is followed by a quantitative characterization of the received responses. The analysis of these results follows in the next section.

4.1 Attitudes and expectations prior to the request for information

While some participants were aware of the GDPR in principle, no one stated that they had ever exercised their right to information. Participants who did not know anything about the GDPR or their corresponding rights were briefly informed about it. When asked about their expectations of the right to information, only two participants were optimistic and stated that the insurance company would certainly make the information process easy. This included T9, who said:

After all, the insurance company is obliged to make the whole thing easy, so I think they will. They want to keep or satisfy the customer.

The remaining participants were more pessimistic and did not believe that the request for information would be made easy. T2, T3, T4, T11 and T12 stated that their insurance company would certainly be skeptical upon receiving a request for information. T4 commented:

I can well imagine that the insurance company will ask why I want to know. So, if for example, I’m unhappy with something or there’s a problem.

T2 suspected that the type of response could also depend on the case handler themselves:

[…] it certainly depends on who you end up with, i.e. what kind of expert advisor.

When asked whether the insurance company’s answers would be written in language that laypeople could understand, six participants expressed skepticism (T2, T3, T4, T5, T11, and T12), citing different reasons. While T2 feared technical terms from the insurance industry and IT (“I think they will use a lot of technical terms.”), T11 drew attention to the complexity of the topic of data protection:

The content will definitely be difficult to understand, just as you would expect from many data protection texts.

4.2 Phase 1: finding, tailoring and submitting the application

The first part of the information process, i.e. up to the point where the data is requested, consists of three steps: searching for information, tailoring the request, and submitting the request itself.

4.2.1 Finding the entrance point

The application process itself already presented the first difficulties for users. On the insurance website, many participants first looked for a “Data protection” section. Participants with a higher technical affinity were at an advantage here, as they were familiar with common website structures and functions. They primarily used the website’s internal search function to search specifically for data protection information. T2 and T4 (younger participants with a high technical affinity) used the search bar on the homepage of their insurance company and entered keywords such as “data protection” or “GDPR”. This significantly shortened the search time for T2. However, T4 did not reach the data protection page even after searching for the word “data protection”.

Without the search function, the data protection information of the insurance companies was accessible at the bottom of the homepage via the data protection button. In contrast, most participants searched for the data protection information in the upper quarter of the homepage. In the main navigation area in particular, participants searched for and clicked on keywords such as “customer portal”, “consumer rights” or “household contents insurance”. The participants did not find what they were looking for under any of these keywords. The home pages were therefore searched several times and only after a few minutes did the participants find the relevant link.

An alternative click path was the route via the “Contact” button, which some participants quickly found and clicked on. The participants hoped to find the contact details of the data protection officer there. However, none of the websites tested listed the contact details of the data protection officer on the contact page. Therefore, the route via the data protection notices had to be taken. For the most part, these suffered from design problems known in research, such as legal jargon and, in particular, a lack of clarity, which had a negative impact on the process.

In V1, the data protection page extended over several pages in small print. There were no visual or interactive tools to filter the text for specific topics or concerns. Accordingly, T1 searched the longest and expressed her lack of understanding:

How am I supposed to find the information with all this stuff? I’m going to be busy for days! The GDPR was introduced in 2018, so it’s practically new, so why don’t they put it higher up?

Insurer V9 was particularly user-unfriendly: There, the data subject rights could only be viewed via a PDF file that had to be opened from the data protection page. The content of the file was several pages long and set in a small font size. T9 said:

The PDF is a nice-to-have, but with so much clicking around and searching, I just don’t feel like it.

For some insurance companies, the data protection information was clearly presented using various smaller (digital) aids. For example, V4 made all GDPR rights of the data subject available on its data protection page in bullet points, broken down by individual right. This enabled T4 to view her rights in the shortest time of all participants. During the observations, it turned out that participants found it easier to carry out the information process when the contact details of the insurance company’s data protection officer were placed in close proximity to the explanations of the data subjects’ rights.

4.2.2 Tailoring and submitting a request for information

The process of tailoring and submitting the request for information was the same for all insurance companies. The data subject had to send their request for information either by post or email to the address provided by the data protection officer. Of all the insurers tested, V8 was the only one to also offer the option of sending the request for information online via its website. According to V8, the response would be sent by email.

In general, some participants expressed the wish that the information process could also be presented as a “process”, i.e. that they could see an overview of the necessary steps. With regard to the presentation of information, participants also often missed clear, step-by-step instructions. Even if they ultimately managed to submit their request as part of the study, it is highly doubtful that they would be motivated to search like this in a real-life situation:

You can explain the process as a step-by-step explanation, can’t you? I would definitely understand it better than just having to read text. (T6)

In the existing information process, participants criticized the fact that the necessary information first had to be searched for and compiled from various places. It was also noted that explanatory videos or illustrations could help to better present the data protection information. Overall, the participants noted in the course of the user experience that the insurance companies’ aim was obviously not to please their customers, but to meet the legal requirements with minimal effort:

I think it can be made a lot easier in some places. In the process, the insurance company only does the bare minimum, most of the work falls to me as the insured person. (T11)

T10 made a similar comment, aimed more at the knowledge required for such a request:

It wasn’t difficult per se, but without background knowledge you can’t see through it at all.

4.3 Attitudes and expectations following the request for information

After the process, the participants were asked what expectations they now had of the answers. Many participants were rather pessimistic; the previous process had obviously already lowered their expectations. T10’s comment reveals how the search through the data protection information had shaped his impression:

The answer will probably look like the privacy information on the insurance site, small print with lots of blah blah blah.

Participants already had some suggestions for improvement:

Something like this could be made easier, e.g. just read it online and tick what you want. With the current method, I have to work out everything myself (T3).

T3 also pointed out that the layout of the letter and the legal jargon would not be accessible to everyone, and that it might make an impression on the recipients and even influence the results:

I would have written that in simple sentences, I’m not a data protection expert. But I think that’s [the style] I would have received the answer then. We are now sending a proper sample request with details of the legal text etc., so they will make more of an effort to answer sensibly.

Here, the participant brings a personalization of the information into play – albeit in a negative sense, triggered by the professionally formulated request. Whether such personalization would be applied strategically by the company or rather subconsciously by a clerk remains an open question.

4.4 Phase 2 – receiving data: characterizing the answers

A total of 12 requests for information were sent out. One insurance company (V9) was contacted twice. Ten insurance companies responded to the request for information. One insurance company (V4) did not respond to the request for information. After two months, T4 called her insurance company and received the answer that the data protection e-mail address (via which the request for information was sent) was not read by any employee. The request for information was then sent to the insurance company’s main e-mail address. By the end of the study, T4 had not received a reply to the second request for information either.

As can be seen in Figure 1, there are large time discrepancies between sending the request for information and receiving the response, and the length of the responses also varies considerably. Figure 1 shows that responses to 10 of the 12 requests for information were received within four weeks. This means that nine out of the 11 contacted insurance companies met the requirement of Art. 12 para. 3 sentence 1 GDPR, according to which the response should not take longer than one month. V6 replied only after just over four months, and V4 did not send a reply at all.

Figure 1: Characterizing answers by time, extent of answers, and medium.

While the shortest response consisted of three sentences, the longest response was six pages long.

4.5 The information provided and its contribution to transparency

This section describes users’ experiences in dealing with insurers’ responses and the data received, if any.

4.5.1 The right to information about automated decisions is still little known as a process

Three of the 11 insurers contacted responded with answers that, from the users’ perspective, did not address the question. V4 stood out negatively, providing no response even after a follow-up. Other insurers gave answers in which the information provided no context about the logic and scope of an automated decision. For example, V2 sent participant T2 a consent form for using acquisition data for advertising purposes as a response. No connection to the questions in the submitted letter was apparent.

V3 sent a compilation of the policyholder’s invoices that had been generated at the beginning of the contract. These partly originated from the building insurance that T3 had also taken out with V3, indicating an information exchange. However, how this information fed into automated decision-making was not explained.

V8 apparently treated the request as a general request under the right of access. Unsolicited, all personal data stored about the insured was provided: V8 listed the personal data stored about its insured T8 and added the name of the insurance advisor who had concluded the household insurance with T8. Nevertheless, no information was found about the existence of automated decision-making and/or the corresponding logic.

Overall, users were not satisfied with the answers to their questions and were left with more uncertainty than before. Neither was it apparent from these responses whether automated decision-making existed at all, nor was any information provided about the underlying logic. Insurer V9 received two information requests to the same email address from two different policyholders, T9 and T10, as part of this study. V9’s responses were different and signed by four different insurance employees. The fear of a lack of standardization in the process was thus apparently confirmed: The types of responses could depend on the individuals processing them.

4.5.2 Confidentiality as a limit to the right of information: logic involved

Several insurers stated in their responses that they did not have to provide information about the logic or indicated that they were allowed to limit their responses. V5, while providing data, limited its disclosure about the logic used to a reference to “Article 10 CoC” (Code of Conduct of the German Insurance Industry). To find the contents of this Code of Conduct, T5 had to search independently. The Code of Conduct can be found on the web, where basic information about insurance industry procedures is laid out, such as:

The insurance industry calculates the probability of the occurrence of insured events and their claim amounts based on statistics and empirical values using actuarial methods and develops tariffs on this basis. (Art. 10 CoC)

More specific statements about the logic were not found. Overall, it was also not clear to T5 whether automated decision-making takes place at all. Another argument for limiting the disclosure of the data used and the logic was the reference to the sensitivity of premium calculations as a business asset for insurers: Regarding automated individual case decisions, V10 and V11 referred to “trade secrets worthy of protection.”
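To illustrate how abstract such references remain, the actuarial principle Article 10 CoC describes can be written as a simple textbook-style pure-premium formula. The figures below are invented for illustration and are not any insurer’s actual tariff:

```latex
% Hypothetical worked example; all figures are invented, not an insurer's tariff.
% Pure risk premium = claim probability times expected claim amount:
\[
  P_{\text{risk}} = p \cdot \mathrm{E}[S] = 0.005 \times 4000~\text{EUR} = 20~\text{EUR per year},
\]
% to which cost loadings, discounts, and the insurance tax are then applied.
```

Even a sketch at this level of detail would convey more about the “logic involved” than the bare cross-reference T5 received.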

4.5.3 Successful information provision and good practice

Sufficient information is provided, in principle, when the controller states that no automated decision-making took place. V1 answered the request in exactly this concrete manner: No automated decision-making had been applied to T1 so far. V1 further asked T1 to “specify” the information request and referred to the responsible insurance intermediary as the contact person.

Among the responses from V3, V5, V7, and V9, which provided information about data and logic to some extent, there were considerable variations in the comprehensibility of the answers. A positive example was the use of explanatory sentences that described individual data points and the logic:

At V3, automated decision-making apparently took place, and the insurance company also provided corresponding information: The influencing factors of the pricing were described in short and concise sentences. Subsequently, the information used was listed on the following pages. However, this was buried in legalistic text.

T10 received an approximate description of how the calculation was made. The connection between the decision-making and the personal data was not drawn. While T9 was only informed about which factors influence the contribution calculation, T10 received the following description:

The contribution for household insurance depends on various criteria. On the one hand, it depends on the tariff zone, whether you live in a larger city or in the countryside, for example. In the city, the risk of burglary is much higher. Furthermore, the contribution varies depending on which insured risks are desired. Each insured risk has a risk contribution, depending on the tariff zone. This is multiplied by the living space. This gives us a risk contribution, from which any discounts are then deducted. Finally, the insurance tax is levied.

While this explanation does not describe the calculation of the automated decision in relation to T10’s personal data, it does answer to some extent the question about meaningful information on the logic involved (see the sketch below). The use of real-life examples can be evaluated as a helpful bridge between algorithms and the human everyday world. It should be noted, however, that both V9 and V10 speak of “examples” and “various factors”; an exhaustive list, as provided by V7, is therefore not available.
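For illustration, the verbal description T10 received translates into only a few lines of code. The following is a minimal sketch under our own assumptions; all tariff zones, rates, and the tax figure are invented placeholders, not V9’s actual values:

```python
# Minimal sketch of the premium logic described to T10.
# All rates and values are hypothetical placeholders, not actual insurer tariffs.

RISK_RATES = {  # risk contribution in EUR per m^2, per insured risk and tariff zone
    ("city", "burglary"): 0.60,   # burglary risk is higher in larger cities
    ("countryside", "burglary"): 0.30,
    ("city", "fire"): 0.20,
    ("countryside", "fire"): 0.20,
}
INSURANCE_TAX = 0.19  # assumed insurance tax rate

def household_premium(zone: str, insured_risks: list[str],
                      living_space_m2: float, discount: float = 0.0) -> float:
    """Per-risk rate (by tariff zone) times living space, minus discounts,
    plus insurance tax -- as verbally described in V9's response."""
    risk_contribution = sum(
        RISK_RATES[(zone, risk)] * living_space_m2 for risk in insured_risks
    )
    after_discount = risk_contribution * (1.0 - discount)
    return round(after_discount * (1.0 + INSURANCE_TAX), 2)

# Example: 80 m^2 city apartment, burglary and fire cover, 10% discount
print(household_premium("city", ["burglary", "fire"], 80.0, discount=0.10))
```

Disclosing the logic even at this level of abstraction would let policyholders see which inputs matter and how they combine, which is the kind of “meaningful information about the logic involved” that Art. 15(1)(h) GDPR targets.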

Another positive design pattern was the clear structuring of the document and, in particular, the naming of the weighting of individual factors for pricing. For example, V7 listed all the information used for calculating the household insurance of its insured T7 in bullet points. These were underpinned with some figures and percentages, creating a clear overview of the evaluation criteria. The presentation of the weighting enabled T7 to weigh up options for improving the scoring. In terms of transparency, this also created at least potential comparability between providers for T7.
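To show why such a disclosure supports comparability, the sketch below imitates the structure of V7’s bullet list; the factor names and percentages are invented for illustration and are not V7’s actual figures:

```python
# Hypothetical factor weighting in the style of V7's disclosure.
# Names and percentages are invented, not V7's actual figures.
FACTOR_WEIGHTS = {
    "tariff zone (burglary risk at place of residence)": 0.40,
    "living space": 0.25,
    "insured risks selected": 0.20,
    "deductible chosen": 0.15,
}

assert abs(sum(FACTOR_WEIGHTS.values()) - 1.0) < 1e-9  # weights should sum to 100 %

for factor, weight in sorted(FACTOR_WEIGHTS.items(), key=lambda kv: -kv[1]):
    print(f"- {factor}: {weight:.0%} of the premium calculation")
```

Because the weights sum to 100 %, a policyholder can immediately see which levers, such as the chosen deductible, affect the premium most, and can compare such lists across providers.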

5 Discussion

In this chapter, we discuss the results presented above. We first discuss the perceived prioritization of data protection at the companies, derive the deceptive patterns used and evaluate the generalizability of our results.

5.1 Disappointed expectations about “data protection as added value”

The idea of positioning data protection as added value in customer communication is not entirely new, but it has gained renewed attention due to the extensive changes brought about – not exclusively, but certainly predominantly – by the GDPR. Our study shows that some of our participants, too, expected companies to please their customers in the area of data protection and data subject rights, and to ensure a positive customer experience. This expectation was already dampened by the appearance of the privacy notices and further disappointed by the partly evasive, partly dismissive attitude of the insurance companies. As Pins et al. 19 demonstrate, the right to information about an automated decision is not an isolated case of user-unfriendly design.

5.2 Deceptive patterns in process design

For a long time, there have been indications that texts and information from the data protection domain are difficult for users to understand. The prototypical example is privacy policies, which have long been researched. 33, 34, 35 While this may be due to legal language and the aim of achieving legal certainty, the situation is different for other use cases: For several years, cookie banners have even been actively designed against users’ interests. Some research contributions have already found that the process design for the right to information lacks some of the most basic interaction patterns and usability criteria. The identification and study of deceptive design patterns has also gained importance dramatically. Our study provides indications of what such patterns may look like when exercising data subject rights. Finding the contact point was comparatively easy, and almost all participants received responses in principle. However, these responses differed fundamentally in the companies’ willingness to provide explanations. In our exploratory study, we were thus able to identify several phenomena among the largest German insurers in the household insurance sector that constitute implementations of deceptive patterns. In the following, we (1) discuss the reasons insurance companies provided for limiting or denying answers to the request. We then discuss deceptive patterns in (2) the process of exercising the right to information, and (3) the design of the answers provided. As a classification scheme, we use the Guidelines of the European Data Protection Board, 30 which identified the deceptive patterns of “overloading”, “skipping”, “stirring”, “obstructing”, “fickle”, and “left in the dark”.

5.2.1 Rejection

In some cases, we did not receive substantive information at all. The most direct rejection came from V4, in the form of not responding to our request. Ignoring an inquiry can lead to data subjects simply abandoning their concern, but this strategy also carries the greatest risk of making organizations vulnerable to legal prosecution, as the GDPR sets fixed deadlines for responding to data subject requests. In other cases, trade secrets were cited as the company’s own interest in not informing about the logic or even the mere existence of an automated decision. Restricting data subject rights, such as the right to information, on the grounds of trade secrets is possible in principle, but its boundaries still need to be defined. It is noteworthy that this justification was not used across the board, so it does not seem to be a common practice among the largest German household insurers. Data subjects are deterred here and would need to find the motivation to pursue legal action to clarify their concerns and overcome the deterring strategies put in place by companies.

5.2.2 Deceptive patterns in exercising data subjects’ right

In our study, there were several instances of companies hiding information or making it harder for data subjects to exercise their right or to receive and analyze the information. The challenges for consumers lie in:

  1. Time-consuming process to locate and access relevant information

  2. Need for legal knowledge to interpret responses

  3. Potential discouragement from pursuing their rights

In the following, we link the design practices for handling data subject right requests to the deceptive patterns established by the EDPB. 30

  1. “Hidden costs” for finding the entry point: Some organizations hid pathways for initiating an information request in remote places or between long paragraphs unrelated to data subject rights. Placing starting points in the privacy policy section still seems logical. However, privacy policies are often barely structured and difficult for consumers to search through. Some insurance companies had not even indexed their privacy policies in their website’s own search function. One insurer went the furthest by hiding the data subject rights in an externally accessible PDF linked in the privacy policy. Concealing links to initiate information requests or burying relevant details in hard-to-find documents mirrors the “hidden costs” deceptive pattern, where crucial information is obscured until the last moment.

  2. “Misdirection” after having requested information: We frequently encountered misinterpretation of our request as a general request under the right of access (Art. 15 GDPR). While the respective responses provided personal data, they did not explain possible profiling or automated decision-making or its logic, effectively leaving out this aspect. In one case, information was not provided at all; instead, a “concretization” was demanded first, creating an additional hurdle for consumers to reach their goal.

  3. “Information overload” when receiving data: In one case, the insurance company declined to provide information, giving only a cross-reference to the Code of Conduct of the German insurance industry that it had signed, posing a potential “information overload” pattern by referring to a huge document. This abstract reference is likely to have a deterrent effect on consumers. In this case, information that could contribute to answering the request is even present in the publicly accessible Code of Conduct. However, the given reference to Article 10 was not helpful, as the information is found in Article 13. Codes of Conduct (CoCs) are likely to be unfamiliar to laypeople as a concept and are written in formalized language, typically not suitable for understanding by non-experts. Moreover, CoCs are written abstractly, as they must cover a group of organizations and thus a series of different individual cases and processes, diluting their informational content. This resembles the related “roach motel” deceptive pattern: the information is easy to get into but difficult to comprehend or act upon. 36 As a consequence, these process design decisions collectively create a false sense of transparency while potentially obscuring important details about automated decision-making processes, making it challenging for consumers to exercise their rights effectively under data protection regulations like the GDPR.

5.2.3 Deceptive patterns in design of the response

A rather hidden strategy to reduce the informational content of the disclosure could involve explaining the scope of the logic through selective, abbreviated examples. The difficulty in detecting this strategy lies in the precision of the wording: While some information is willingly provided to inform the data subjects, other information is withheld. The interest in preserving trade secrets – which may ultimately be legally justified – potentially plays a role here as well. Nevertheless, clever wording creates a false impression of comprehensive disclosure for consumers, making this approach a deceptive design pattern. This strategy of diluting the response is subtle and potentially deceptive. It involves:

  1. “Misdirection” by providing partial information: Giving some details willingly while withholding others.

  2. “Obfuscation” by using exemplary explanations: Offering simplified or abbreviated examples that do not fully capture the complexity of the actual logic. This pattern involves focusing user attention on one thing to distract from another.

  3. “Left in the dark” by careful wording: Employing language that creates an illusion of full disclosure. This pattern involves designing the interface or user experience in a way that causes users to forget or fail to consider certain data protection aspects, as identified by the EDPB in their Guidelines. 30

The challenge for consumers lies in:

  1. Recognizing incomplete information: It is difficult for non-experts to identify what is missing.

  2. Understanding the implications: The provided examples might not reflect the full scope of how their data is used.

  3. Balancing transparency and trade secrets: Consumers may not know where the line between necessary disclosure and protected business information should be drawn.

These process designs could be considered deceptive patterns because they give the appearance of compliance and transparency while potentially obscuring important details about automated decision-making processes that affect consumers.

5.2.4 The double-edged sword of personalization

Personalization of data subject requests is an interesting topic that our participants brought up. It is debatable whether the professional formulation of the request (instead of a formulation by the data subjects themselves) is a disadvantage of the study design. In any case, a larger-scale study would be worthwhile to determine whether information disclosures are formulated differently depending on the form and jargon of the request. Personalization of the disclosure could certainly be helpful in terms of usability for applicants. Conversely, the deceptive design strategies discussed above could also be applied depending on the assumed user type, to prevent disclosure as effectively as possible. Personalization could thus be a double-edged sword in the context of data subject rights.

5.3 Limitations and future work

At this point, it should be noted that the study only focused on the information request process via the insurance companies’ websites, and the request was only made via email. The usability recommendations therefore only apply to this type of request. The selection of participants was an ad-hoc sample and could potentially benefit from strategic planning of different user groups in a second step to provide more targeted results. Moreover, the study includes only a small number of participants, which does not allow for immediate generalization of the problems; however, this is not the goal of qualitative-exploratory studies. In a first step, we aimed to identify existing requirements and deceptive patterns through in-depth qualitative analysis, thereby making them accessible for further research. The present study is therefore a starting point that enables further research in the area, which could, for example, be more evaluative-quantitative in nature. A subsequent task is, for instance, to investigate the robustness of the identified deceptive patterns using quantitative methods. A related limitation is the fact that we mainly acquired only one person per insurer. However, this was a conscious decision: As the behavior of the insurance companies should be quite standardized due to defined processes, different behaviors across organizations could be better captured through this study design. Future work could validate this assumption. Another avenue of interest is a systematic investigation of personalization, particularly of differences in information disclosures depending on the formulation of requests, and of any (un)conscious unequal treatment that may exist. Last but not least, the instances of deceptive design identified in our study align well with the existing EDPB framework and thus provide first insights into potential issues with deceptive design specific to the insurance industry. Our findings can therefore be a starting point for drafting potential solutions or even “bright patterns” in the future.

6 Conclusions

This study provides a critical examination of the implementation of the GDPR’s right to information in the context of automated decision-making within the German insurance sector. Through qualitative research involving 12 participants, we have uncovered significant usability challenges and design flaws that hinder consumers’ ability to understand automated processes and exercise their rights effectively. The findings reveal that many insurance companies prioritize minimal compliance with regulatory requirements over user-friendly design, often resulting in a lack of clarity and transparency in the information provided to consumers. This research highlights the prevalence of deceptive patterns in the user experience of exercising data subject rights, which can lead to consumer disengagement and mistrust in automated decision-making systems. By identifying these issues, our study emphasizes the need for a more user-centric approach in the design of information processes, advocating for clearer, more accessible communication that enhances consumer understanding and confidence in their rights under the GDPR. Ultimately, this work not only contributes to the discourse on usable security and privacy but also serves as a call to action for companies, regulators, and UX designers to prioritize the needs of consumers in the evolving landscape of data protection and automated decision-making.


Corresponding author: Timo Jakobi, Technische Hochschule Nürnberg Georg Simon Ohm, Nuremberg, Germany, E-mail: 

  1. Research ethics: Not applicable.

  2. Informed consent: Informed consent was obtained from all individuals included in this study, or their legal guardians or wards.

  3. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: LLMs were used throughout the document to check and improve language and consistency of style.

  5. Conflict of interest: The authors state no conflict of interest.

  6. Research funding: None declared.

  7. Data availability: The raw data can be obtained on request from the corresponding author.

References

1. Wiebe, A.; Helmschrot, C. Untersuchung der Umsetzung der Datenschutz Grundverordnung (DSGVO) durch Online Dienste. Bundesministerium der Justiz 2019.Search in Google Scholar

2. Transforming Claims and Underwriting with AI | Accenture: https://www.accenture.com/us-en/insightsnew/insurance/ai-transforming-claims-underwriting (accessed 2025-02-28).Search in Google Scholar

3. Anzahl der Personen in Deutschland, die eine Hausratversicherung im Haushalt besitzen, von 2018 bis 2022: 2022. Available from: https://de.statista.com/statistik/daten/studie/266295/umfrage/versicherungen-besitz-einer-hausratversicherung-in-deutschland.Search in Google Scholar

4. Davola, A.; Malgieri, G. Data, Power, and Competition Law: The (Im) Possible Mission of the DMA? The Economics and Regulation of Digital Markets; Emerald Publishing Limited: Leeds, 2023; pp 53–74.10.1108/S0193-589520240000031003Search in Google Scholar

5. Malgieri, G.; Niklas, J. Vulnerable Data Subjects. CLSR 2020, 37, 105415; https://doi.org/10.1016/j.clsr.2020.105415.Search in Google Scholar

6. Boxer, B.; Nelson, B.; Cantwell, M.; Pryor, M.; Mccaskill, C.; Klobuchar, A.; Warner, M.; Begich, M.; Blumenthal, R.; Schatz, B.; Markey, E.; Booker, C.; Thune, J.; Wicker, R. F.; Blunt, R.; Rubio, M.; Ayotte, K.; Heller, D.; Coats, D.; Scott, T.; Carolina, S.; Cruz, T.; Fischer, D.; Johnson, R.; Doneski, E. L. What Information Do Data Brokers Have on Consumers, and How Do They Use it Before. In Senate Committee on Commerce, Science, and Transportation 113th Congress First Session; Prepared Statement of the Federal Trade Commission: Washington DC, USA, 2013.Search in Google Scholar

7. Lehtiniemi, T.; Kortesniemi, Y. Can the Obstacles to Privacy Self-Management Be Overcome? Exploring the Consent Intermediary Approach. Big Data Soc. 2017, 4, 2. https://doi.org/10.1177/2053951717721935.Search in Google Scholar

8. Jakobi, T.; Patil, S.; Randall, D.; Stevens, G.; Wulf, V. It’s About What They Could Do with the Data: A User Perspective on Privacy in Smart Metering. ACM Trans. Comput.-Hum. Interact. 2019, 9 (4), 43; https://doi.org/10.1145/3281444.Search in Google Scholar

9. Rao, A.; Schaub, F.; Sadeh, N. What Do They Know About Me? Contents and Concerns of Online Behavioral Profiles. In PASSAT’ 14: Sixth ASE International Conference on Privacy, Security, Risk and Trust, 2015.Search in Google Scholar

10. The European Parliament and of the Council. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). 2018.Search in Google Scholar

11. Mahieu, R.; Asghari, H.; van Eeten, M. Collectively Exercising the Right of Access: Individual Effort, Societal Effect. SSRN Electron. J. 2017, 1–23. https://doi.org/10.2139/ssrn.3107292.

12. Norris, C.; de Hert, P.; L’Hoiry, X.; Galetta, A. The Unaccountable State of Surveillance: Exercising Access Rights in Europe; Springer: Berlin, Heidelberg, 2016. https://doi.org/10.1007/978-3-319-47573-8.

13. Bösch, C.; Erb, B.; Kargl, F.; Kopp, H.; Pfattheicher, S. Tales from the Dark Side: Privacy Dark Strategies and Privacy Dark Patterns. Proc. Privacy Enhancing Technol. 2016, 2016 (4), 237–254. https://doi.org/10.1515/popets-2016-0038.

14. Graßl, P.; Schraffenberger, H.; Zuiderveen Borgesius, F.; Buijzen, M. Dark and Bright Patterns in Cookie Consent Requests. J. Digital Soc. Res. 2021, 3 (1), 1–38. https://doi.org/10.33621/jdsr.v3i1.54.

15. Gray, C. M.; Kou, Y.; Battles, B.; Hoggatt, J.; Toombs, A. L. The Dark (Patterns) Side of UX Design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018; Paper 534. https://doi.org/10.1145/3173574.3174108.

16. Mathur, A.; Kshirsagar, M.; Mayer, J. What Makes a Dark Pattern… Dark? Design Attributes, Normative Considerations, and Measurement Methods. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems; New York, NY, USA, 2021; pp 1–18. https://doi.org/10.1145/3411764.3445610.

17. Die Marktführer im dankbaren Geschäft mit Hausratversicherungen – Die zehn größten Hausratversicherer. Versicherungsbote.de, 2020. https://www.versicherungsbote.de/id/4900429/chapter/1/Die-Marktfuhrer-im-dankbaren-Hausrat-Geschaft/ (accessed 2024-04-10).

18. European Parliament and Council of the European Union. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). OJ L 119, 2016.

19. Pins, D.; Jakobi, T.; Stevens, G.; Alizadeh, F.; Krüger, J. Finding, Getting and Understanding: The User Journey for the GDPR’s Right to Access. Behav. Inf. Technol. 2022, 41 (10), 2174–2200. https://doi.org/10.1080/0144929X.2022.2074894.

20. Wachter, S.; Mittelstadt, B.; Floridi, L. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. SSRN Electron. J. 2016, 76–99. https://doi.org/10.2139/ssrn.2903469.

21. Temme, M. Algorithms and Transparency in View of the New General Data Protection Regulation. Eur. Data Prot. Law Rev. 2017, 3 (4), 473–485. https://doi.org/10.21552/edpl/2017/4/9.

22. Dexe, J.; Ledendal, J.; Franke, U. An Empirical Investigation of the Right to Explanation Under GDPR in Insurance. In Trust, Privacy and Security in Digital Business; Springer International Publishing: Bratislava, 2020; pp 125–139. https://doi.org/10.1007/978-3-030-58986-8_9.

23. Eisenberg, M. A. The Limits of Cognition and the Limits of Contract. Stanf. Law Rev. 1995, 47 (2), 211. https://doi.org/10.2307/1229226.

24. Wauters, E.; Donoso, V.; Lievens, E. Optimizing Transparency for Users in Social Networking Sites. info 2014, 16 (6), 8–23. https://doi.org/10.1108/info-06-2014-0026.

25. Alizadeh, F.; Jakobi, T.; Boldt, J.; Stevens, G. GDPR Reality Check on the Right to Access Data: Claiming and Investigating Personally Identifiable Data from Companies. In Proceedings of Mensch und Computer 2019; Alt, F., Bulling, A., Döring, T., Eds.; 2019; pp 811–814. https://doi.org/10.1145/3340764.3344913.

26. Nie, L.; Zhao, Y.; Li, C.; Luo, X.; Liu, Y. Shadows in the Interface: A Comprehensive Study on Dark Patterns. Proc. ACM Softw. Eng. 2024, 1 (FSE), 10:204–10:225. https://doi.org/10.1145/3643736.

27. Kelly, D.; Rubin, V. L. Identifying Dark Patterns in User Account Disabling Interfaces: Content Analysis Results. Soc. Media + Soc. 2024, 10 (1), 20563051231224269. https://doi.org/10.1177/20563051231224269.

28. Hilton, M. Dark Patterns and User Mental Health: Identifying Theoretical Impacts of Deceptive Design on Vulnerable Demographics. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2023, 67 (1), 2124–2127. https://doi.org/10.1177/21695067231199684.

29. Iantorno, M.; Guadagnolo, D.; Petterson, A. Dark Patterns and Pedagogy: Expanding Scholarship and Curriculum on Manipulative Marketing Practices. AoIR Sel. Pap. Internet Res. 2023. https://doi.org/10.5210/spir.v2023i0.13430.

30. European Data Protection Board. Guidelines 03/2022 on Deceptive Design Patterns in Social Media Platform Interfaces: How to Recognise and Avoid Them. 2023. https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-032022-deceptive-design-patterns-social-media_en.

31. Turilli, M.; Floridi, L. The Ethics of Information Transparency. Ethics Inf. Technol. 2009, 11 (2), 105–112. https://doi.org/10.1007/s10676-009-9187-9.

32. Mayring, P. Qualitative Inhaltsanalyse. In Handbuch qualitative Forschung in der Psychologie; Springer: Wiesbaden, 2010; pp 601–613. https://doi.org/10.1007/978-3-531-92052-8_42.

33. Cranor, L. F. Necessary But Not Sufficient: Standardized Mechanisms for Privacy Notice and Choice. J. Telecomm. High Tech. L. 2012, 10, 273.

34. McDonald, A. M.; Cranor, L. F. The Cost of Reading Privacy Policies. I/S: J. Law Policy Inf. Soc. 2008, 4, 543.

35. Milne, G. R.; Culnan, M. J.; Greene, H. A Longitudinal Assessment of Online Privacy Notice Readability. J. Publ. Pol. Market. 2006, 25 (2), 238–249. https://doi.org/10.1509/jppm.25.2.238.

36. Zac, A.; Huang, Y.-C.; von Moltke, A.; Decker, C.; Ezrachi, A. Dark Patterns and Online Consumer Vulnerability. 2023. Available at SSRN 4547964. https://doi.org/10.2139/ssrn.4547964.

Received: 2024-12-07
Accepted: 2025-03-10
Published Online: 2025-04-08

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
