Between computer science and philosophy, and: on the (im-)possibility of digital literacy
Mareike Lisker
Abstract
The article’s overall aim is to elucidate the contingency and volatility inherent in academic biographies. To ground and underpin this aim, the article begins by outlining the argument that Mareike Lisker makes in her Master’s thesis, which explores the nexus between the disciplines of computer science and philosophy. There, she posits that the demand for more digital literacy places an onerous responsibility on individual users when it comes to the control of their own data. She argues that individual users are structurally ill-equipped to meet that responsibility in the face of all-encompassing tracking infrastructures. In line with the article’s aim, the thesis’ topic is then situated within Lisker’s scientific career path, which is traced up to her current PhD project on content moderation on decentralized platforms.
1 Introduction
In this article, I will first give an insight into the research I conducted in my master’s thesis, which was awarded the Weizenbaum Study Prize by the Forum InformatikerInnen für Frieden und gesellschaftliche Verantwortung. I will then outline my academic career including the challenges and fortunate circumstances I encountered along the way, before I position myself academically in the present. This will serve to illustrate the diversity of research biographies and make their contingency more tangible.
2 On the (im-)possibility of digital literacy: web tracking and the responsibilization of the individual on the internet
The concept of digital literacy[1] is currently a topic of considerable interest and discussion in the German digital-political as well as activist sphere, where it is framed as a panacea for any technological challenge inherent to digitalization. Often, the idea of digital literacy includes the demand that web users should have control over their own data. This seems to be an obvious idea when data is conceptualized as some sort of ‘raw material’ such as oil that is ‘naturally’ generated and owned by users. The presupposition underlying the idea that individual users should control their allegedly own data is that if something is in one’s possession, one can control how much of it is shared and how much is kept for one’s own pleasure. Just like with a cake, I could keep 3 tasty slices for myself and give 5 slices to Google. In practice, however, data is not something users naturally produce, but something that is mainly generated by extra-large companies and platforms. Users usually use the platforms’ services for what they superficially are, e.g. a navigation service, and not with the intention of giving away data. While also offering that superficial service, these companies generate and collect as much data as possible in the background.
Another discussion concerns discrimination and bias in automated decision-making systems that are based on artificial intelligence and machine learning methodologies. These systems serve to perpetuate and reinforce historically determined discriminatory structures [3], [4], [5], [6]. Such systems are used in marketing, recruitment processes, credit granting and law enforcement, among other areas. Particularly prominent is the case of the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) software, which is used in court in some states in the USA to make predictions about the risk of recidivism of defendants or prison inmates. The risk of recidivism describes the risk of a person committing another crime if they were released early on parole. In an analysis of the people classified by this software as potential recidivists, journalists were able to determine that the software discriminated based on the race category. For example, Black people were almost twice as likely to be falsely classified as recidivists, while white people were more often falsely classified as low risk [7]. A system used by Amazon in its recruitment process systematically classified female applicants with the same qualifications as less suitable than male applicants [8]. Structurally similar phenomena are manifesting themselves on the web: in Facebook ads, women were shown fewer advertisements for jobs as software developers than men. At the same time, advertisements for houses for sale were more often shown to white people, while rental properties were more often shown to BIPoC people [9].
Remarkably, none of these systems ‘knew’ the categories on the basis of which people were discriminated against. COMPAS did not ‘know’ which race the defendants belonged to; the developers of Amazon’s algorithm had purposefully left out the category “gender”. Since 2019, Facebook has also excluded the characteristics of location, age, gender and ethnic origin as targeting categories, following several legal proceedings for discrimination [10]. The fact that the systems still discriminate against people on the basis of sensitive categories is because they can identify so-called proxy characteristics, which are “seemingly innocuous attributes that correlate with socially-sensitive attributes, serving as proxies for the socially-sensitive attributes themselves” [11], p. 9942. Proxy characteristics can therefore be used to derive sensitive characteristics such as gender, race or sexuality from supposedly harmless data about people.
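To make this mechanism tangible, consider the following minimal sketch. It trains a model on entirely synthetic data from which the sensitive attribute has been removed, while a correlated proxy (think of a neighbourhood indicator) remains. It is a hypothetical illustration of proxy discrimination in general, not a reconstruction of COMPAS, Amazon’s tool, or Facebook’s ad delivery.

```python
# A hypothetical, minimal sketch of proxy discrimination in general,
# not a reconstruction of COMPAS, Amazon's tool, or Facebook's ad delivery.
# All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute (never shown to the model).
group = rng.integers(0, 2, n)

# A 'harmless' proxy, e.g. a neighbourhood indicator, that correlates with it.
proxy = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical outcome that was itself biased against group == 1.
outcome = (rng.random(n) < np.where(group == 1, 0.3, 0.7)).astype(int)

# The model is trained only on the proxy; the sensitive column is absent.
model = LogisticRegression().fit(proxy.reshape(-1, 1), outcome)
predictions = model.predict(proxy.reshape(-1, 1))

# Positive prediction rates still differ sharply between the two groups.
print("group 0:", predictions[group == 0].mean())
print("group 1:", predictions[group == 1].mean())
```

Although the model never sees the sensitive attribute, its predictions still differ sharply between the two groups, because the proxy carries the same signal.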
It has become clear why it is of paramount importance to safeguard and regulate data. Currently, control over and responsibility for data lie in the hands of the users. In my master’s thesis, I develop the argument that it is a structurally impossible endeavor for any individual user to exert control over their allegedly own data.
This impossibility of individual control over one’s flow of data begins at the interface level, where practices such as dark patterns, nudging, and cumbersome, lengthy policy documents make it challenging for us to exercise control. However, I posit that a solution at the surface, the interface level, such as prohibiting dark patterns or standardizing privacy policies, would not be enough, and that the actual problem lies much deeper, at the underlying, infrastructural level. Let us enter said level.
In 1994, the foundation stone for a data-based market infrastructure was laid when software developers at the browser company Netscape developed the session cookie. The session cookie was invented in order to facilitate commercial interests such as a shopping cart. It represented a solution to issues that arose from the statelessness of the Hypertext Transfer Protocol, which in practice meant that users of a website could not be recognized from one page load to the next. Prior to the advent of cookies, internet browsing was thus an essentially private act. In the development phase, other solutions were also discussed. One idea was to assign each browser a unique identification number. However, the developer Lou Montulli identified the potential for cross-site tracking inherent in browser IDs and rejected this proposal [12]. It is not without irony that the very property cookies were designed to avoid while tackling the statelessness of HTTP, namely tracking, has become common practice. Today, cookies are predominantly employed to recognize users in the sense of website-wide tracking, which has led to a new challenge: the erosion of privacy.
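The following minimal sketch shows how a session cookie re-introduces state into the stateless protocol; the httpbin.org endpoints serve purely as a public demo server and are not part of the history recounted above.

```python
# A minimal sketch of how a session cookie re-introduces state into the
# stateless HTTP protocol. httpbin.org serves purely as a public demo
# endpoint and is not part of the history recounted above.
import requests

with requests.Session() as browser:
    # The server's response carries a Set-Cookie header with an identifier.
    browser.get("https://httpbin.org/cookies/set/session_id/abc123")

    # The 'browser' now sends that identifier back on every later request,
    # so the server can recognize the same visitor, e.g. for a shopping cart.
    reply = browser.get("https://httpbin.org/cookies")
    print(reply.json())  # {'cookies': {'session_id': 'abc123'}}
```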
After session and shopping cart cookies, first-party analytics cookies were implemented. They permitted publishers to monitor their visitors across the multiple pages of their website and allowed them to retrace their users’ customer journey, as it is called in marketing jargon. Ultimately, advertising agencies found an exploitable loophole in the workings of the web. To save storage space and avoid duplicates, certain content, such as images, is merely referenced by a website and loaded from another, third-party server by the user’s browser the moment the page is rendered. Thus, when a website displayed an ad and this advertisement was loaded from a third-party ad server, that third party could also place a cookie. Third-party cookies were born and made cross-site tracking possible. With third-party cookies, a user’s entire customer journey through the internet could be traced. Eventually, the advertising industry appropriated cookies as a marketing tool.
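This loophole can be made visible with a few lines of code: the sketch below lists the third-party domains a browser would contact when rendering a page, each of which gets the chance to set and later read its own cookie. The page URL is a placeholder; on a typical ad-financed news site, the resulting set would contain numerous ad and analytics servers.

```python
# A sketch of why embedded third-party content enables tracking: every
# domain that a page pulls resources from receives a request from the
# browser and can attach its own cookie to the reply. The page URL is a
# placeholder, not a claim about any particular site.
from html.parser import HTMLParser
from urllib.parse import urlparse

import requests


class ResourceCollector(HTMLParser):
    """Collects absolute URLs referenced via src/href attributes."""

    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http"):
                self.urls.append(value)


page_url = "https://example.org/"  # placeholder first party
collector = ResourceCollector()
collector.feed(requests.get(page_url, timeout=10).text)

first_party = urlparse(page_url).hostname
third_parties = {urlparse(u).hostname for u in collector.urls} - {first_party}

# Each of these domains is contacted while the page is rendered.
print(third_parties)
```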
The evolution of cookies, steered by the advertising industry, fundamentally changed the way advertisement is displayed. When shopping cart cookies were prevalent, online advertising was still largely modeled on traditional media such as print or television. Targeting was typically conducted on a broad basis, for example by age, gender or income. Buying advertisement space meant buying the right to display an ad in a certain context: advertisement for sports shoes was displayed on a website offering sporting goods. The advent of first-party analytics cookies made it possible to display advertisement for sports shoes in the books section of a large online retailer, because the user had looked at the retailer’s sports shoe offerings before. Third-party cookies then made it possible to display advertisement for sports shoes on an entirely different website, e.g. a news site, because the third-party provider is aware that the user has previously viewed content related to sports shoes in the online shop. Buying advertisement space now means buying the right to display an ad to a certain user. The sale of advertising space today is conducted through automated algorithms on advertising exchanges, where transactions occur in real time within milliseconds.
Consequently, over the past three decades, an all-encompassing market infrastructure has been established based on cookies, comprising the generation of data through tracking, the production of knowledge from this data, and the utilization of this knowledge for the sale of advertising space [13]. On the 1,000,000 most visited websites alone, more than 3,000,000 tracking technologies are used [14]. This implies that, in principle, the vast majority of users are affected by tracking technologies. In contrast, the advertising market is dominated by only a handful of players who are not pure marketing companies, but whose core business model is still based on data. The most prominent of these is Google.
A variety of measures have been implemented with the objective of enhancing or safeguarding the privacy of users online, including the cookie banners prescribed by the ePrivacy Directive, but also the simple deactivation of cookies in the browser, numerous browser extensions, or the configuration of individual browser settings. However, the efficacy of privacy add-ons is limited in that they only ever protect certain people from certain tracking technologies. They can even serve to legitimize the status quo [15], because they appear to solve the data protection problem, so that a holistic solution no longer seems necessary.
In many contexts, the anonymization of data is also seen as a data protection solution. However, it is possible to identify specific individuals even in anonymized data. For example, in 2021, a conservative Catholic news portal in the USA bought anonymized location data from the dating app Grindr through a data broker. The news service was able to identify a high-ranking Catholic priest by comparing its own data on his whereabouts with the anonymized location data. The priest was then forced to resign from office [16].
Even if actual anonymization were technically possible, it would still prove ineffective, just as technologies such as data protection add-ons do. That is because, in marketing and advertising, statistical methods such as predictive analytics are employed to derive or predict information about an individual from a vast array of data about said individual and many other users. Imagine the following: if 70 % of people who wear white sneakers also indicate that they prefer vanilla ice cream, a predictive model can be used to estimate that an unknown person who is only known to wear white shoes has a 70 % probability of liking vanilla ice cream. This 70 % probability is sufficient for a food manufacturer to classify the unknown person as a ‘vanilla ice cream lover’. Shoe color and ice cream preference are not political categories, but rather innocuous preferences.
In practice, however, predictive analytics is employed to infer sensitive attributes such as sexuality, race, personality traits, drug use, or a person’s mental health (the ice cream preference) from ostensibly innocuous and freely available, even anonymous data, including Facebook likes or a person’s location history (the color of the shoes) [17]. This is possible because a substantial amount of seemingly innocuous auxiliary data is fed into a predictive model. In addition, sensitive data is available for a limited number of users, e.g. because they stated their sexuality in their Facebook profile. This data is fed into the predictive model as well, and it directly influences the predictions that can be made about all other individuals.
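The following sketch illustrates this mechanism on purely synthetic data, loosely inspired by the setting of [17] but not using any real data: only a small fraction of users disclose a sensitive attribute, yet a model trained on their ‘innocuous’ behavioural data can predict that attribute for everyone else. All names and numbers are hypothetical.

```python
# A purely illustrative sketch of the mechanism described above, loosely
# inspired by the setting of [17] but using only synthetic data: a small
# fraction of users disclose a sensitive attribute, and a model trained on
# their 'innocuous' behavioural data then predicts it for everyone else.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_users, n_likes = 50_000, 20

# 'Innocuous' behavioural data, e.g. which of 20 pages a user liked.
likes = rng.integers(0, 2, (n_users, n_likes))

# Hidden sensitive attribute, correlated with a few of those likes.
sensitive = (likes[:, :3].sum(axis=1) >= 2).astype(int)

# Only 2 % of users ever disclosed the attribute themselves ...
disclosed = rng.random(n_users) < 0.02
model = LogisticRegression(max_iter=1000).fit(likes[disclosed], sensitive[disclosed])

# ... yet it can now be predicted for all remaining users.
accuracy = (model.predict(likes[~disclosed]) == sensitive[~disclosed]).mean()
print(f"accuracy on users who never disclosed anything: {accuracy:.2f}")
```

The point is not the toy model itself but the asymmetry it exposes: the disclosure decisions of a small minority shape what can be inferred about everyone else.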
Thus, our data is interdependent, meaning that the data that can be generated about individuals A, B and C (e.g. that they like to wear white shoes and eat vanilla ice cream) has an effect on what can be inferred about me (e.g. if I wear white shoes, but do not want to be approached as a ‘vanilla ice cream lover’). Consequently, the efficacy of individual data protection settings is undermined. Some scientists have therefore proposed to re-evaluate informed consent, the legal mechanism currently employed to legitimize data collection, as the implications of consent decisions extend well beyond the individual person giving their consent, affecting other users as well [18], [19].
Predictive analytics is also the reason why actual anonymization would still be ineffective. To see this, we must note that today, platforms and companies are not interested in the identity of an individual as a specific person with a specific name, surname, date of birth etc. They are interested in identifying an individual as a consumer with certain desires and needs on which they can capitalize, or which they can even construct. Predictive analytics makes it possible to infer or predict information about any individual’s desires and needs by combining innocuous data about them with behavioral data of all other users. As such, an individual remains ‘identifiable’, perhaps not as a particular name born on 01.01.1980, but as, for example, a neurotic father who likes horses and cooking.
From the interdependence of data it follows that being digitally literate, in the sense that one exerts control over one’s own data, inherently entails control over the data of all other users. If a user wishes to exercise control over their data to the extent that no sensitive information about them is accessible, but sensitive information about them can be derived using predictive analytics, it is not sufficient to control only their own data; they would also have to control the data of all other users. The term ‘all others’ encompasses not only the users who are active at the time of data collection but also those who were active in the past and all future users, as their data has also been and will continue to be incorporated into the predictive model.
It is perplexing, then, that the demand to be digitally literate and to control our allegedly own data makes us responsible for something that we are not even capable of doing. There is a term for this: responsibilization, a process characteristic of neoliberalism. In the context of responsibilization, the state transfers responsibility for tasks that it previously held itself, or that were previously not assigned to any actor, to its individual citizens or to other non-state actors. Failure is programmed into responsibilization, as structural tasks and problems cannot be solved individually, and yet the failure is always located on the side of the individual. This is a neoliberal form of victim blaming [20]. Another prominent example of neoliberal responsibilization is the climate crisis, where the responsibility for averting it is placed on the individual consumer.
Responsibilization still works in circumstances where individuals are structurally unable to fulfil their responsibilities because it operates via the idea of empowerment, which is perceived as a positive process [21]. An empowered subject is self-determined and independent of any external support, for example from the state. At first glance, this appears to be a desirable state of affairs. However, the idea of the empowered and independent subject completely negates the historically evolved social structures into which people are born and as a result of which they are dependent on other people or institutions.
Perhaps the demand for digital literacy is just neoliberal empowerment in a new guise. It would legitimize an arrangement in which we as individual users are assigned a responsibility that we cannot actually fulfill.
3 On settling down: between computer science and philosophy
In the previous section, I demonstrated the significance of combining socio-political discourses on digitalization with actual knowledge of the underlying technologies. In a similar vein, it is imperative to consider the social effects and implications of digital technologies.
Subtle Discouragement
- “Are you sure that’s something for you?”
- “I never thought that’s what you’d be doing!”
- “Oh, you’re wrong here. Did you blindly follow the boys once again? The information booth for social professions is in the other room.”
- “You?!”
It is subtle, sometimes even well-intentioned statements like these that can have an all the greater effect on the career choices of young girls as well as trained female computer scientists.
I call them subtle discouragements. In a 2015 study [22], 38 % of girls who decided against a technical apprenticeship did so because they experienced discouragement within their social environment. During my time as Junior Fellow of the German Informatics Society, I want to gain insights into subtle discouragement and identify the factors that contribute to encouraging experiences. Read more in the GI Radar [23].
In hindsight, what I now perceive as strength is the result of meandering between a range of disciplines, which was primarily driven by interest and curiosity, but has also been shaped by a sense of ‘not belonging’. In the following, I will trace my academic path, which has taken me through linguistics, philosophy and computer science.
I do not want to begin with the “primordial ooze”, but I do want to commence in 10th grade, where I took a mathematics class during the orientation for a mandatory specialization in 11th grade. I had always enjoyed mathematics, but in that class, I simply did not feel at ease or as if I belonged among the other (male) students. Thus, I decided to go for the humanities, which also led me to study Philosophy and Linguistics. There, I had three pivotal insights: I realized how much fun logic was, that one does not have to conform to the stereotype of a computer scientist to be one, and how crucial a role digital technology plays for the world surrounding us. I decided that I wanted to play an active role in that matter, and went on to study computer science.
During my studies, I frequently found that my philosophically trained thinking reliably led me to consider the societal implications of statements, and to be indignant when the lecturers did not. I also frequently perceived the environment to be subtly hostile and only ever felt at ease within a small peer group, consisting predominantly of women. Without peers with whom to share experiences and to express both curiosity and vulnerability, I would undoubtedly have encountered greater difficulties.
It was a challenge to find a person willing to actually supervise me, as I knew that I wanted to consider societal topics in my thesis and not just “pure” computer science. In Frank Pallas, I found a supervisor with whom my interdisciplinary background, including linguistics, came in handy, as he researches at the intersection of society and computer science with a focus on legal topics. I finished my studies with a thesis on the automatic extraction of transparency information from privacy policies.
After finishing my Bachelor’s, I re-oriented towards a more philosophical stance and enrolled in a Master’s program in the Theory and History of Science and Technology. I wanted to find answers to the pressing questions about the societal implications of the sometimes blunt assumptions made in computer science. You have just read the results of those studies. The supervision of Beate Krickel, with her sharp view on argumentation, and of Rainer Mühlhoff, with his inspiring approach to questions of power in the digital sphere, was pivotal to the quality and success of my master’s thesis, which was honored with the first prize of the Weizenbaum-Studienpreis by the Forum InformatikerInnen für Frieden und gesellschaftliche Verantwortung.
While working on my master’s thesis, I realized that although the questions of philosophy highly interest me, the praxis of working, of acquiring and generating knowledge, in the domain of computer science aligns more closely with my modus operandi. Consequently, I made the decision to pursue a doctoral degree in the field of computer science. Here, as throughout the entirety of my academic career, my educated middle-class upbringing equipped me with the necessary self-assurance to pursue this notion. At this point, though, I had only tenuous connections left with the academic community in computer science. At the same time, I recognized the importance of being discerning in my selection of a supervisor, as I had experienced that, more often than not, I did not feel at ease in male-dominated IT environments. I proactively sought out female professors in the field of computer science, and networking played a significant role in this journey. I had the privilege of joining a group of female computer scientists at the German Informatics Society and gained profound insights into the challenges they encountered and the lessons they learned as women in the field of computer science. Their experiences served as a source of inspiration and also helped me put my own situation in perspective. I was encouraged and supported by them, and they generously offered guidance and mentorship. I had the opportunity to meet many role models and establish new connections in both the academic and industrial realms of computer science.
It was during that period that I became aware of an available doctoral position within the working group of Helena Mihaljević, Professor for Applied Machine Learning and Data Science at HTW Berlin. Here, again, my interdisciplinary background came in handy, as Prof. Mihaljević’s research focuses on the analysis of societal phenomena and issues through the application of data science methodologies. As universities of applied sciences are not (yet) (all) permitted to award doctorates, I had to find a supervisor from a university in order to undertake a cooperative doctorate. Once more, the process was facilitated by networking: I met my supervisor Anne Lauscher, Professor of Data Science at the University of Hamburg, when she spoke at the annual conference of the female computer scientists’ expert group, where she gave a talk on the pressing issue of discrimination in Large Language Models.
Today, I am in the early stages of my PhD, researching content moderation on decentralized platforms such as Mastodon. Content moderation has become an integral part of social media platforms, as it shapes not only what users can see on a platform but also how they feel while browsing it [24]. As such, it is a primary factor contributing to the migration of users between different social media platforms. Not only is it proving difficult to obtain data from large platforms [25], but proposals for enhanced content moderation systems tend to be rebuffed by the dominant players in the field, who have their own agendas and research & development departments. In light of this, Tarleton Gillespie and Patricia Aufderheide put forth the proposition that “innovative moderation strategies may emerge from smaller platforms, platforms outside of the US, and platforms that imagine themselves and their communities very differently than Facebook does” [25, p. 3].
In the context of the proliferation of hate speech, violence, misinformation and other content that contravenes platforms’ policies, this issue assumes even greater significance. Nevertheless, prominent platform operators such as Elon Musk of X or Mark Zuckerberg of Meta have announced that they will, euphemistically formulated, “tone down” content moderation, effectively leaving the platforms’ online public spaces at the disposal (or responsibility) of users.
Following Elon Musk’s acquisition of Twitter in 2022, a significant number of users migrated to a decentralized service called Mastodon [26], which will be the focus of my research. Mastodon is a social media platform that is part of the Fediverse, an ensemble of decentralized services that facilitate communication amongst users. These services are all built on top of the same ActivityPub protocol, which enables them to federate content among each other. Each microblogging instance on Mastodon is subject to its own terms of service, privacy policy and content moderation policies. Accordingly, unlike on other platforms such as X or Facebook, there is no central authority that enforces seemingly consistent rules. Instead, each instance should be regarded individually, although many instances abide by a broader framework, the Mastodon Server Covenant, which means that governance on Mastodon can be described as digital federalism [27].
In my thesis, I will consider a broad range of perspectives and start off with a qualitative study conducting interviews with admins and content moderators on Mastodon. I want to understand why they host an instance in the first place, how their rules and guidelines came into being, and how they describe their content moderation practice. I am also currently scraping the publicly available policies and rules of all instances every day, in order to find out how the documents relate to each other, how they change over time, and what kind of norms are set and enforced in them. I am also curious to find out whether, and if so how, real-world events have an impact on them. Furthermore, I am interested in researching the interactions between users and the moderation decisions on Mastodon in practice, but there are numerous ethical considerations that must be taken into account and consents to be obtained before any content data can be extracted from the platform.
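As an illustration of what such a daily snapshot can look like, the following hedged sketch queries the public, unauthenticated Mastodon endpoints /api/v1/instance and /api/v1/instance/rules that most instances expose and stores one JSON file per day. The instance list and file layout are placeholders, not my actual research pipeline.

```python
# A hedged sketch of such a daily snapshot, assuming the public,
# unauthenticated Mastodon endpoints /api/v1/instance and
# /api/v1/instance/rules that most instances expose. The instance list
# and the file layout are placeholders, not the actual research pipeline.
import datetime
import json

import requests

instances = ["mastodon.social", "chaos.social"]  # placeholder sample

snapshot = {"date": datetime.date.today().isoformat(), "instances": {}}
for host in instances:
    try:
        rules = requests.get(f"https://{host}/api/v1/instance/rules", timeout=10).json()
        info = requests.get(f"https://{host}/api/v1/instance", timeout=10).json()
        snapshot["instances"][host] = {
            "rules": rules,  # list of {"id": ..., "text": ...}
            "short_description": info.get("short_description", ""),
        }
    except (requests.RequestException, ValueError) as err:
        snapshot["instances"][host] = {"error": str(err)}

# One JSON file per day makes it straightforward to diff policy changes later.
with open(f"rules_{snapshot['date']}.json", "w", encoding="utf-8") as f:
    json.dump(snapshot, f, ensure_ascii=False, indent=2)
```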
An exploration of the intricacies of content moderation and governance on Mastodon provides valuable insights into the future of communication and societal interaction, insights that extend beyond social media and shed light not only on the digital landscape.
In addition to my research, I also teach students at HTW Berlin. This role represents a welcome change from research, and one that I find personally fulfilling. In my teaching, I touch on the interface of computer science and society, and I hope that this area will soon become a compulsory component of computer science study programs. Hopefully, I can contribute to that development in the future.
About the author

Mareike Lisker studied Computer Science at the Technical University Berlin (B.Sc. 2019). Before that, she obtained a B.A. in Philosophy and Linguistics from the Humboldt University of Berlin (2015). She also studied Theory and History of Science and Technology at the TU Berlin (M.A. 2023). Today, she is a research assistant and doctoral student at the Berlin University of Applied Sciences (HTW Berlin), where she teaches and researches at the interface between computer science and society, and at the University of Hamburg (UHH). In 2024, Lisker was appointed a Junior Fellow of the German Informatics Society, where she is an active member.
Research ethics: Not applicable.
Informed consent: Not applicable.
Author contributions: The author has accepted responsibility for the entire content of this manuscript and approved its submission.
Use of Large Language Models, AI and Machine Learning Tools: DeepL was used to improve language.
Conflict of interest: Author states no conflict of interest.
Research funding: None declared.
Data availability: Not applicable.
References
[1] I. Kant, “Was ist Aufklärung?” Utopie Kreativ, vol. 159, no. 1, pp. 5–10, 2004 [1784].
[2] P. Mecheril and M. Rangger, “Kolonialität der Mündigkeit. Anmerkungen zu einer grundlegenden Referenz politischer Bildung,” POLIS, vol. 27, no. 3, pp. 11–13, 2023, https://doi.org/10.46499/2243.2829.
[3] R. Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code, 1st ed. Medford, MA, Polity, 2019.
[4] C. O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, New York, Crown, 2016.
[5] S. U. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism, Illustrated ed. New York, Combined Academic Publ., 2018.
[6] V. Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, Illustrated ed. New York, NY, St Martin’s Press, 2018.
[7] J. Angwin, J. Larson, S. Mattu, and L. Kirchner, “Machine bias,” ProPublica, [Online]. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing Accessed: Mar. 2, 2023.
[8] D. Meyer, “Amazon killed an AI recruitment system because it couldn’t stop the tool from discriminating against women,” Fortune, [Online]. Available at: https://fortune.com/2018/10/10/amazon-ai-recruitment-bias-women-sexist/ Accessed: Dec. 7, 2022.
[9] K. Hao, “Facebook’s ad algorithms are still excluding women from seeing jobs,” MIT Technology Review, [Online]. Available at: https://www.technologyreview.com/2021/04/09/1022217/facebook-ad-algorithm-sex-discrimination/ Accessed: Dec. 7, 2022.
[10] Meta, “Updates im Werbeanzeigenmanager zu Anzeigen für Immobilien, Jobangebote und Kredite,” Meta for Business, [Online]. Available at: https://de-de.facebook.com/business/news/updates-to-housing-employment-and-credit-ads-in-ads-manager Accessed: Oct. 25, 2024.
[11] G. M. Johnson, “Algorithmic bias: on the implicit biases of social technology,” Synthese, vol. 198, no. 10, pp. 9941–9961, 2020. https://doi.org/10.1007/s11229-020-02696-y.
[12] L. Montulli, “The reasoning behind web cookies,” The irregular musings of Lou Montulli, [Online]. Available at: https://montulli.blogspot.com/2013/05/the-reasoning-behind-web-cookies.html Accessed: Oct. 5, 2022.
[13] K. Mellet and T. Beauvisage, “Cookie monsters. Anatomy of a digital market infrastructure,” Consum. Mark. Cult., vol. 23, no. 2, pp. 110–129, 2020. https://doi.org/10.1080/10253866.2019.1661246.
[14] BuiltWith, “Analytics technologies web usage distribution,” [Online]. Available at: https://trends.builtwith.com/analytics Accessed: Oct. 25, 2024.
[15] S. Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, 1st ed. New York, NY, PublicAffairs, 2019.
[16] heise online, “USA: Katholikenorganisation kauft App-Daten und findet Priester auf Grindr & Co.,” heise online, [Online]. Available at: https://www.heise.de/news/USA-Katholikenorganisation-kauft-App-Daten-und-findet-Priester-auf-Grindr-Co-7543463.html Accessed: Oct. 25, 2024.
[17] M. Kosinski, D. Stillwell, and T. Graepel, “Private traits and attributes are predictable from digital records of human behavior,” Proc. Natl. Acad. Sci., vol. 110, no. 15, pp. 5802–5805, 2013. https://doi.org/10.1073/pnas.1218772110.
[18] M. Engeler, “Der Konflikt zwischen Datenmarkt und Datenschutz,” NJW, no. 47, pp. 3398–3405, 2022.
[19] R. Mühlhoff, “Prädiktive Privatheit: Kollektiver Datenschutz im Kontext von Big Data und KI,” in Künstliche Intelligenz, Demokratie und Privatheit, M. Friedewald, A. Roßnagel, J. Heesen, N. Krämer, and J. Lamla, Eds., Nomos, 2022, pp. 31–58. https://doi.org/10.5771/9783748913344-31.
[20] G. C. Gray, “The responsibilization strategy of health and safety: neo-liberalism and the reconfiguration of individual responsibility for risk,” Brit. J. Criminol., vol. 49, no. 3, pp. 326–342, 2009. https://doi.org/10.1093/bjc/azp004.
[21] É. Hache, “La responsabilité, une technique de gouvernementalité néolibérale ? [Is responsibility a tool of neo-liberal governmentality?],” Raisons Polit., vol. 28, no. 4, p. 49, 2007. https://doi.org/10.3917/rai.028.0049.
[22] acatech, “MINT Nachwuchsbarometer 2015,” 2015 [Online]. Available at: https://www.acatech.de/publikation/mint-nachwuchsbarometer-2015/ Accessed: Sep. 30, 2024.
[23] Gesellschaft für Informatik, “GI-Radar 365: Subtile Entmutigungen,” [Online]. Available at: https://gi-radar.de/365-subtile-entmutigungen/ Accessed: Oct. 25, 2024.
[24] R. DiResta, “The great decentralization,” 2025 [Online]. Available at: https://www.noemamag.com/the-great-decentralization Accessed: Jan. 9, 2025.
[25] T. Gillespie, et al., “Expanding the debate about content moderation: scholarly research agendas for the coming policy debates,” Internet Policy Rev., vol. 9, no. 4, 2020, https://doi.org/10.14763/2020.4.1512.
[26] H. B. Zia, J. He, A. Raman, I. Castro, N. Sastry, and G. Tyson, “Flocking to mastodon: tracking the great twitter migration,” arXiv:2302.14294, 2023 [Online]. Available at: http://arxiv.org/abs/2302.14294 Accessed: Oct. 24, 2023.
[27] R. W. Gehl and D. Zulli, “The digital covenant: non-centralized platform governance on the mastodon social network,” Inf. Commun. Soc., vol. 26, no. 16, pp. 3275–3291, 2023. https://doi.org/10.1080/1369118X.2022.2147400.
© 2025 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.