
The Right to One’s Own Reasons: Autonomy and Online Behavioural Influence

Joris Graff
Published/Copyright: November 19, 2025

Abstract

The algorithmic curation of social media content and advertisements has a significant impact on many people’s behaviour. This influence is leading to increasing moral unease, calling for ethical reflection. A common analysis conceptualises problematic online influence as a form of manipulation. This paper argues that this analysis is flawed, since standard accounts of manipulation presuppose a manipulative intention or attitude, which cannot be identified in many cases of algorithmic curation. Instead, it is argued that an account of problematic online influence should focus on the risks posed to social media users’ personal autonomy. An outline of such an account is sketched by identifying two conditions for personal autonomy and arguing that algorithmic curation is likely to undermine both conditions. First, personal autonomy requires the capacity to act for normative reasons. However, social media algorithms often make users act for nonreasons and may erode capacities needed to recognise normative reasons. Second, personal autonomy requires that the reasons for which someone acts cohere with that person’s commitments and life plans. However, social media algorithms invite users to act for reasons isolated from their broader commitments. The conclusion is that social media platforms create wrongful risks to their users’ autonomy.

1 Introduction

Consider the following situation. One evening, Alice logs into her preferred social media platform, Sharely.[1] As she scrolls through her timeline, she encounters multiple photos of her friends attending parties, clubs, and other late-night social events. Looking at these photos, Alice is filled with an increasing sense of dissatisfaction with her own life, specifically the fact that she has been spending all of her free evenings at home recently. After a number of these posts, her timeline is interrupted by an advertisement for a concert ticket seller. Her desire to attend more late-night events causes Alice to follow the link to the ticket seller’s website, where her attention is grabbed by a concert, taking place the following weekend, by a band that she knows superficially and vaguely enjoys. She sends a message to her friend Brandon, with whom she has gone to a few concerts before, to ask him if he would like to join. Brandon responds affirmatively, and Alice buys two tickets for the concert in question.

Of course, it was no coincidence that Sharely showed Alice the posts and advertisement that it did. The timeline of Sharely users, after all, is curated by a learning algorithm, for instance a deep neural network (DNN), taking user data (e.g. posts, likes, cursor movements, which posts the user spends the most time viewing, etc.) as its inputs. This algorithm is rewarded for optimising the time that users spend on Sharely and the frequency with which they click ads. That is because these metrics generate most profit for Sharely’s parent company, Telos. In this case, the DNN has learnt that users with profiles similar to Alice’s are likely to click on advertisements for social late-night events after being shown posts by friends attending similar events, especially in the evening.
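To make the mechanics of this hypothetical more concrete, the following minimal sketch (in Python) illustrates the kind of engagement-optimising ranking loop imagined for Sharely. It is purely illustrative: the class name EngagementRanker, the feature encoding, and the single logistic scoring unit are invented for the example, and real curation systems are proprietary, vastly larger, and typically based on deep networks rather than one linear model.

```python
# Toy illustration of an engagement-optimising ranker in the spirit of the
# hypothetical Sharely example. All names and numbers are invented.
import numpy as np

class EngagementRanker:
    def __init__(self, n_features, learning_rate=0.01):
        self.w = np.zeros(n_features)  # learned weights over user/content features
        self.lr = learning_rate

    def score(self, features):
        # Predicted engagement (e.g. probability of a click), via a logistic unit.
        return 1.0 / (1.0 + np.exp(-self.w @ features))

    def rank(self, candidates, k):
        # Select the k candidate posts/ads with the highest predicted engagement.
        scores = [self.score(f) for f in candidates]
        return sorted(range(len(candidates)), key=lambda i: -scores[i])[:k]

    def update(self, features, engaged):
        # Feedback step: nudge the weights towards whatever the user actually
        # engaged with (clicked, lingered on), regardless of *why* she engaged.
        error = engaged - self.score(features)
        self.w += self.lr * error * features

# Features might encode, say, 'friend's party photo', 'shown in the evening',
# 'concert ad'; the ranker learns which combinations keep a given user engaged.
ranker = EngagementRanker(n_features=3)
candidates = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])]
timeline = ranker.rank(candidates, k=2)
ranker.update(candidates[timeline[0]], engaged=1.0)  # user clicked the top item
```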

Situations like the one in this fictional story are very common, given that many people spend a significant part of their lives on social media platforms with algorithmically curated timelines. Such algorithmic curation has, in recent years, led to increasing moral unease. Spelling out the precise grounds (if any) for this unease is a topic of ongoing discussion. In this paper, I will defend an autonomy risk view of the moral harm done by algorithmic curation on social media. On this view, at least a large part of the intuited moral harm is that such curation algorithms endanger users’ personal autonomy, that is, roughly, their ability to make their own choices and shape their own lives. Algorithms employed in the service of social media platforms (or of advertisers on social media[2]) may compromise choices such as whether and how much time to spend on social media, but also choices concerning consumption, lifestyle, and political behaviour. The idea is that, when such choices are influenced by curation of timelines carried out by algorithms deployed in social media companies’ interests, they become less responsive to users’ real interests, plans, or identities – in a word, to ‘who they are’. Alice’s decision to buy the concert ticket may no longer really be her own.

The autonomy risk view is contrasted mainly with analyses taking a manipulation view, which says that the main moral harm of (certain) online algorithms is that such algorithms are manipulative. Manipulation, roughly, is the intentional attempt to make someone’s decision-making process deviate from certain rational standards. After introducing the notion of algorithmic social media curation more clearly (Section 2), I will argue (in Section 3) that it is difficult to apply standard accounts of manipulation to social media. This is because such accounts focus on manipulators’ intentions, but social media companies do not straightforwardly possess intentions. Instead, I will suggest that we shift the focus away from manipulators’ intentions and towards the problematic aspects of the influence itself, namely, undermining users’ personal autonomy. In Section 4, I sketch a notion of personal autonomy that poses two conditions for a person to be autonomous: that the person is able to decide in response to normative reasons and that the reasons for her decisions are coherent with her broader projects, commitments, and life plans. The subsequent two sections spell out how online algorithms can undermine both conditions: Section 5 shows how they may undermine reason-responsiveness and Section 6 suggests that they may also undermine personal coherence. Section 7 concludes.

Before continuing, I wish to briefly lay out my goals in analysing wrongful algorithmic influence. I take it that the general goal of philosophical accounts of problematic practices is to assist in pushing back against such practices. One of the main ways in which we could push back against social media platforms’ undermining personal autonomy is by means of regulation. The philosophical account offered here is meant to assist in this effort, not by proposing specific regulation (that would require more legal discussion than I can give here), but rather by helping identify social media practices that pose morally problematic risks to user autonomy, specifying what is wrong with these practices, and demarcating them from morally innocuous forms of online influence. This hopefully helps lay the groundwork for further discussions about regulation, by focusing and legitimising such regulation.

The philosopher’s ideal would be to specify necessary and sufficient criteria that clearly demarcate wrongful from unproblematic online influence. Unfortunately, I am uncertain whether this can be done. The class of problematic algorithmic influence may be too slippery to be neatly carved up by the philosopher’s scalpel. Instead, I will stick to the more modest goal of outlining two plausible effects of some forms of algorithmic influence that would make such influence pro tanto wrongful. I do not, in this paper, provide clear-cut criteria for recognising such cases, although I will suggest some features of social media algorithms that make these effects particularly likely. If some social media platform uses algorithms that have these features, this should ring alarm bells. The alarm may still be false, but the case at least requires further investigation. This does not decide the question of whether individual cases of online influence are wrongful, but it helps focus it – which, I think, is as far as a general philosophical account can go.

2 Social Media and Algorithmic Influence

This paper is (mostly) concerned with algorithmic curation of timelines and search results on social media platforms, and with the influence such curation has on users (I will denote this influence as online influence for short).[3] The term social media is defined in various ways (Aichner et al. 2021). I here stipulatively define a social media platform as an online platform (either a website or an app) that allows users to share content and access a wide range of content shared by other users. The term content here covers different media of communication, including text, images, videos, and audio. The term wide range is meant to exclude platforms where the user can only access messages sent specifically to that user. This definition thus excludes messenger apps like WhatsApp.[4]

In most social media platforms, the main page (i.e. the page shown when a user logs into the platform) is formed by a timeline (also known as news feed or simply feed) (Altman et al. 2013). This is a regularly updated selection of content and ads, with more content available by scrolling or swiping. Users can usually also search for specific types of content. Since both timelines and search results generally contain only a (small) subset of all available content and ads, curation is needed to select and order those contents and ads that are shown to a user (Davis 2017; Khan and Bhatt 2018). Usually, the precise methods of this curation are proprietary, but a broad (although likely incomplete) picture of major social media platforms’ curation methods can be formed based on a combination of leaked documents, unofficial statements by (former) employees, and public statements and publications (some in response to hearings by the US Senate). From this picture, it is clear that deep-learning models such as DNNs, taking user data as inputs, play a significant role in curating major social media platforms’ timelines and search results. For Facebook, for instance, these data include, at least, clicks, likes, shares, comments, cursor movement (Ganjoo 2018), time spent reading posts (Griffin 2015), physical location (Feinder 2019), and, until 2021, biometric data (the latter tracking method was discontinued after privacy lawsuits) (Kracht et al. 2022). Moreover, these data are not only collected from Facebook itself, but also from a number of partner websites with which Facebook has data-sharing agreements.

Using these data, the deep-learning models are trained to maximise metrics correlated with social media platforms’ profits, such as time spent on the platform, ‘engagement’ (user interactions with content, such as liking, sharing, etc.), and click-through rate (an ad’s click-through rate is the number of times users click the ad divided by the number of times the ad is shown). That is, the model receives a positive feedback signal if a user continues to spend more time on the platform, interacts with more content, and/or clicks more ads, and a negative feedback signal otherwise, causing it to adjust its weights accordingly. The way the deep-learning models’ outputs are translated into timelines or search results is not as simple as in the hypothetical example of Sharely, where a DNN directly determines a user’s entire timeline on the basis of all collected data. First, content recommendation, search, and ad-targeting algorithms (which together I denote as curation algorithms) tend to be separate (Narayanan 2023). Second, deep-learning algorithms are likely combined with human-set parameters, preferences communicated by advertisers (e.g. advertisers on Facebook may target ads based on likes or interests), and sometimes human curation (e.g. to remove offensive posts), in order to determine the final timeline (Kim 2017; Narayanan 2023). Notwithstanding these complexities, machine learning-based methods, used to estimate which timeline or search result candidates and which ads are likely to maximise the target metrics for each user, are a significant part of the curating process.
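As a purely schematic illustration of the metrics and pipeline just described, and under the assumption (not a description of any actual platform) that a learned engagement score is combined with advertiser-set targeting criteria and human moderation, the process can be pictured roughly as follows; the function names and the worked numbers are invented.

```python
# Schematic sketch of the metrics and pipeline described above; the function
# names, inputs, and weightings are invented and greatly simplified.

def click_through_rate(clicks, impressions):
    """An ad's click-through rate: times clicked divided by times shown."""
    return clicks / impressions if impressions else 0.0

def assemble_timeline(candidates, model_score, advertiser_targets, is_blocked, k=10):
    """Combine a learned engagement score with advertiser targeting criteria
    and human/policy moderation to produce a final timeline."""
    eligible = [c for c in candidates
                if not is_blocked(c)  # human curation, e.g. removing offensive posts
                and (not c["is_ad"] or advertiser_targets(c))]  # advertiser-set criteria
    return sorted(eligible, key=model_score, reverse=True)[:k]

# Worked example: an ad shown 2,000 times and clicked 30 times has a CTR of 1.5 %.
print(click_through_rate(30, 2000))  # 0.015
```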

Algorithms – specifically deep-learning models – thus have an important influence on what content and ads social media users see. Given the plausible assumption that what social media content and ads someone sees often influences her behaviour and given the amount of time many people spend on social media, these algorithms also have an important influence on people’s day-to-day lives. It is thus crucial to scrutinise whether, and under what conditions, this influence is problematic.

3 Wrongful Online Influence: From Manipulation to Autonomy

Existing attempts to demarcate between problematic and morally innocuous online influence often appeal to the distinction between manipulation and persuasion – indeed, a key philosophical collection on this matter is titled The Philosophy of Online Manipulation (Jongepier and Klenk 2022). The idea is that content and ads on social media sometimes influence users in a way that constitutes manipulation. Although several accounts of manipulation exist, many current discussions refer back to the now seminal accounts provided by Noggle (1996) and Baron (2003, 2014).[5]

According to Noggle (1996), manipulation is behaviour that strives to make someone’s beliefs, desires, or emotions diverge from certain normative ideals. The ideals in question are those that allow a person to exercise her rational and moral agency – or, more specifically, that the manipulator believes allow the manipulated person to exercise her rational and moral agency. Different actors may well disagree on which ideals allow such agency – what is problematic is making people diverge from ideals that you think are the right ones. As examples of ideals that are widely shared, Noggle mentions the ideal that beliefs should be true and relevant, the ideal that desires should be reason-responsive, and the ideal that emotions should be appropriate to the situation (i.e. should help the person in question focus on relevant pieces of information).[6]

Baron’s approach (2003, 2014) focuses, not on the aims behind individual actions, but on a broader character trait of the manipulative person. This character trait Baron calls manipulativeness. According to Baron, manipulativeness is a vice in the Aristotelian sense, which is to say that it is one of the two extremes between which a virtue lies as a mean, and that there is a corresponding opposite extreme.

What one gets wrong, in these forms of manipulation, and what the corresponding virtuous person gets right, is how much to steer others – and which others, and how, and when, and toward what ends; and more generally, to what extent – and how and when and to whom and for what sorts of ends – to seek to influence others’ conduct. (Baron 2003, 48)

Manipulativeness, then, is not one clearly circumscribed character trait, but can manifest itself in a variety of ways – in steering others too often, or too much, or to inappropriate ends, etc. A person with the opposite vice (which Baron calls isolationism) tries to influence people too little, or too few people, etc. The reason why manipulativeness is morally bad, according to Baron, is that it shows a wrong attitude towards others: the manipulative person does not view others as rational beings capable of making their own decisions. Because this attitude is crucial, Baron, like Noggle, stresses that manipulation requires intent on the part of the manipulator (although this intent may be subconscious[7]).

Noggle’s and Baron’s accounts, then, are different but largely complementary, the former focusing more on what is wrong with individual acts of manipulation, and the latter on what is wrong with the manipulative person. What both accounts have in common is the focus on some type of mental state or attitude that is supposed to underlie manipulative behaviour (whether it be an intention or a more general motivational attitude). This mentalistic aspect has been adopted by most authors aiming to analyse problematic online influence in terms of manipulation. Susser et al. (2019), in an influential account, for instance, define manipulation as ‘imposing a hidden or covert influence on another person’s decision-making’ (26), where it is specified that this hiddenness must be intentional.

There are, of course, good grounds to impose this mentalistic requirement on manipulation. In most instances where I influence someone in ways that undermine her rational or moral agency, but I have no intention of doing so, it would be counterintuitive to charge me with manipulation (though for possible counterexamples, see Manne 2014). For instance, imagine a variation of the above example where Alice has again been scrolling through posts of her highly outgoing friends, building up a sense of insecurity. In this variation, instead of being presented with an advertisement, Alice is interrupted by a chat message from her friend Brandon, who asks her if she wants to join him for a concert. We can imagine that, due to her anxious mental state, Alice’s decision does not meet ideals for rational agency. But we would not say that Brandon is manipulating Alice, for the reason that he did not know that she was in this state and therefore could hardly intend to make use of it (for similar examples, see Pepp et al. 2022). Examples like this one provide a plausible refutation of views on which manipulation is just any type of influence that undermines standards for rational decision-making.

However, the imposition of the mentalistic requirement comes at a price once we want to use standard accounts of manipulation to analyse online influence. This price is that, when the influence is brought about by a social media company, it is unclear to whom or what we can ascribe the relevant intention. There seem to be three options: either to ascribe the intention to the company, or to ascribe intentions to individuals within the company, or to say that the algorithm itself has intentions.

The first option runs into immediate controversy over the question whether group agents, such as companies, can have intentions. Although several philosophers have offered analyses of group intentions (or collective intentions) (Bratman 1993; Searle 1990; Tuomela 1991; Tuomela and Miller 1988), most of these state that such intentions are to be ascribed to, or reducible to intentions ascribed to, individuals. Perhaps it may be thought that we can ascribe intentions to companies in an interpretivist way: if a company’s policy decisions and public statements are easier to explain or predict by postulating an intention, then we should do so. But this strategy would run into epistemic problems: company decisions and especially statements are generally highly controlled, and therefore dubious grounds on which to establish whether or not manipulation is present. This echoes remarks by Noggle (1996, 51–52), who notes that it is difficult to identify cases of manipulation since it is difficult to identify intentions. Due to companies’ institutional mechanisms for controlling official statements, these epistemic problems are significantly heightened. Insofar as we do have sufficient grounds to ascribe intentions to companies, these intentions are likely too unspecific to ground a charge of manipulation. For instance, Nys and Engelen (2022) argue that Google has a manipulative mens rea since it has stated: ‘to maximize your revenue, consider multiple monetization models for your app’ (cited in Nys and Engelen 2022, 142). But surely this in itself does not show that Google is intentionally manipulating anyone, since the quote says nothing about the types of influence by which Google aims to maximise its customers’ revenue.[8]

The second option is to say that some individual agent or agents within a social media company must have a manipulative intention for manipulation to be present. But the problem here is that companies may make decisions that are not explicitly intended by any individual within the company. As Van de Poel and Royakkers state, ‘we might sometimes judge that none of the individuals could reasonably foresee a certain harm, whereas at the collective level that same harm is foreseeable’ (2015, 6). If an outcome was not foreseeable for any individual, it could also not be intended by any individual. In such cases, every individual within a social media company can plausibly deny having intended the outcome, even though its products have foreseeable harmful effects on users’ autonomy or rational decision-making.

The third option is to say that social media algorithms themselves may have intentions. But ascription of mental states such as intentions to algorithms is highly controversial. Someone who wants to push back against wrongful online influence should not stake their analysis on claims that are so widely disputed.[9]

Can we do better? That is, can we analyse what is wrong with certain forms of online influence in a way that does not rest on (standard accounts of) manipulation, without classing too many forms of influence (such as Brandon’s asking Alice to join him for a concert) as morally problematic? One suggestion is provided by Pepp et al., who aim to develop an influence-centric account of manipulation that focuses ‘on the manipulative influence rather than on the manipulator’s state of mind’ (Pepp et al. 2022, 93) (for a similar move, see Keeling and Burr 2022). At the same time, they aim to avoid counterintuitive implications similar to the idea that, in the above example, Brandon manipulated Alice. Their proposal is as follows:

For any entity (person, animal, institution, machine, etc.) X, a behavior or feature of X having a certain influence on another entity Y is an instance of manipulation only if the occurrence of the behavior or feature in X is partly explained by its tendency to have that influence on Y or on other entities relevantly like Y. (Pepp et al. 2022, 97)

Indeed, this suggestion correctly captures the above example: the fact that the timing of Brandon’s message has a certain negative influence on Alice’s decision-making does not explain his behaviour, since if the message had not had this effect, he would still have sent it in the same way, at the same moment. In contrast, the fact that Alice’s Sharely timeline has the same influence on Alice’s decision-making explains why the timeline is as it is, because Sharely’s algorithm has learnt that these types of timelines are particularly effective at stimulating advertisement clicks from users like Alice (during the evening).

Pepp et al.’s suggestion thus moves in the right direction as a starting point for analysing wrongful online influence. However, the question of what explains particular instances of online influence is often hard to answer. Especially with complex self-learning algorithms, such as DNNs, it is often impossible to explain the algorithms’ outputs. Still, intuitively, we should not let Sharely (or its parent company, Telos) off the hook because we cannot explain how its algorithm operates. What matters is that Telos, by letting its algorithm loose on potentially vulnerable users, predictably puts its users’ autonomy at risk. In other words, it either knew or should have known of this risk, and yet did not take steps to prevent it.

For these reasons, I propose an autonomy risk view: online influence is pro tanto wrongful if 1) it creates a predictable risk that some person’s or persons’ autonomy will be undermined and 2) the influencer could refrain from exercising it.[10] I remain neutral on the question whether the type of wrongful online influence I am henceforth concerned with should be described as a form of manipulation or not. My aim is to identify what is wrong with this influence, not to regiment its conceptual status (see Barnhill 2022).

On the autonomy risk view, wrongful online influence can be seen as a form of negligence, the preventable risking of harm – in this case, harm to autonomy.[11] The explanation why some forms of online influence are wrong, then, is analogous to the explanation why it is wrong to expose people to significant risks of other types of harms (here I use the word harm in a broad sense that is meant to be consistent with both consequentialist and nonconsequentialist accounts of the value of autonomy and thus of the wrongfulness of undermining autonomy). If, through negligence of a housing company, a building collapses onto its tenants’ heads, the housing company is clearly in the wrong, regardless of its precise intentions, because it is wrong to expose people to such risks.

Before continuing, let me consider one further objection to influence-centric accounts of wrongful online influence. Nys and Engelen (2022) claim that such accounts presuppose that users would, absent the influence, have acted autonomously (or rationally). But, they object, this is often not the case, since our behaviour often diverges from rational standards (or is nonautonomous) due to accidental environmental factors anyway. I have two responses to this point. First, as we will see, autonomy is not an all-or-nothing matter. Therefore, it is not true that, on the autonomy risk view, ‘non-rational [or autonomy-undermining] factors should be absent for decisions and decisionmakers to be respected’ (Nys and Engelen 2022, 140). Such factors should only be less egregious than in the case of wrongful online influence.

Second, my account only requires that social media create a risk to autonomy for their influence to be wrongful, not that we can prove autonomy to be undermined in some specific instance. Compare: we do not have to know whether someone who was exposed to asbestos really develops cancer (or, if she does, whether the cancer was really caused by the asbestos) in order to know that she was put at an undue risk (and therefore that whoever is responsible for the presence of the asbestos is morally culpable). Of course, this analogy would not work unless people are generally speaking in a better position to exercise autonomy when not using social media than they are when influenced by algorithmic curation. I hope to establish in Sections 5 and 6 that this is indeed plausible. Social media algorithms target individual weaknesses in a way not present in day-to-day interactions and thus create risks to autonomy over and above those unavoidably present in the human condition. Before developing this argument, however, we need to clarify what the conditions of personal autonomy are, exactly.

4 Two Conditions for Personal Autonomy

Let us take stock. Standard manipulation-based accounts of wrongful online influence are flawed because they make it very difficult to identify problematic instances of online influence, which significantly hampers their pragmatic value. A more helpful account should therefore focus on features of the influence itself. So far, I have summarised these features under the broad header of ‘putting users’ autonomy at risk’. But here, the real work for an influence-centric account begins. What types of online influence can be said to risk undermining users’ autonomy?

Jettisoning the mentalistic requirement makes this question more pressing. Recall that Noggle, for instance, can remain relatively noncommittal on which standards for belief, etc. are the relevant ones to consider when determining if someone’s intention is manipulative. This is because he can, as it were, outsource these questions to the potential manipulator: it is the standards endorsed by the influencer that count. But once we abstract away from the mental states of the influencer, we need a more substantive account of what forms of influence are and are not appropriate.

When I speak of autonomy, I mean personal autonomy (as opposed to for instance moral or political autonomy). I understand personal autonomy as a desideratum related to people’s personal lives – although, as Raz (1986, 372) notes, a ‘secondary sense’ of the notion of personal autonomy involves the capacity required for achieving autonomy as a desideratum.[12] A desideratum for what, exactly? One option is that autonomy is a desideratum for individual actions (e.g. Faden and Beauchamp 1986, 235–237). However, this misses what appears to be an important value underlying much talk about autonomy, namely, the value of being in control of one’s life more generally. Raz summarily expresses this desideratum as follows: ‘The ruling idea behind the ideal of personal autonomy is that people should make their own lives. The autonomous person is a (part) author of his own life’ (Raz 1986, 369). Following Raz, I will consider autonomy as a desideratum that applies to an agent’s life. However, we can still speak of autonomous actions, decisions, desires, etc., meaning that these contribute to an autonomous life. I assume (without arguing for it) that there is such a desideratum – leaving undecided the question whether autonomy is inherently valuable or has merely instrumental value (e.g. in being a means to satisfy one’s personal interests).

I do not here have the space to survey all of the rather expansive philosophical literature on the topic of personal autonomy.[13] Instead, I will, on the basis of some of the more prominent accounts, characterise two plausible conditions for a person to be autonomous. I do not claim that these conditions exhaust all that is meant by the term personal autonomy. Indeed, the conditions mostly concern what Mackenzie calls self-governance, which ‘involves having the skills and capacities necessary to make choices and enact decisions that express, or cohere with, one’s reflectively constituted diachronic practical identity’ (Mackenzie 2014, 17). Mackenzie views self-governance as only one out of three dimensions of personal autonomy (next to self-determination and self-authorisation). In focusing on self-governance, I do not claim this to be the most important dimension of personal autonomy, but it is the dimension most clearly affected by online influence.

4.1 Condition 1: Autonomy as Reason-Responsiveness

The first condition is that someone can only lead an autonomous life if she has the capacity to make her actions responsive to reasons (sufficiently often) – more specifically, to normative reasons.[14] As many authors have noted, the notion of reasons includes different subtypes. A common distinction is between normative and motivating reasons (e.g. Dancy 2000, 1–7; Alvarez 2010, 33–39). A normative reason is a fact or consideration that counts in favour of (e.g.) an action – the fact that the dentist could remove my pain is a normative reason for me to go to the dentist. A motivating reason is a fact or consideration for or in light of which someone acts – if I realise that the dentist could remove my pain, and this realisation motivates me to go to the dentist, then the fact that the dentist can remove my pain is also my motivating reason for going to the dentist. If, on the other hand, I do not realise this, then the fact remains a normative reason, since it remains the case that I would be better off going to the dentist due to her ability to remove my pain, even though it is not a motivating reason.

As the above example shows, the same consideration can sometimes serve as both a normative and a motivating reason. This overlap is crucial for leading an autonomous life. In order to be the author of her own life, a person needs to act in accordance with her values, aims, etc. These provide her with normative reasons. Therefore, in order to be autonomous, she needs to make these normative reasons her motivating reasons. Thus the first criterion of autonomy can be spelt out as follows: autonomy requires the capacity to 1) act for motivating reasons (rather than, say, being moved by subconscious impulses) that 2) are also normative reasons (rather than considerations the person mistakenly takes to be good reasons).[15] With regard to clause 1, a person’s behaviour must not only be in line with her reasons, but she must also act for those reasons. This does not necessarily mean that she consciously goes through an internal inference of the form ‘X is a reason to do A, so I will do A’. But some line of reasoning linking reason X to decision A must be available, at least upon reflection (see Anscombe 1957, §42; Audi 1986). With regard to clause 2, there can of course be disagreement in particular cases whether or not someone is mistaken about what she takes to be good reasons. However, it seems clear that people are sometimes mistaken about this: if I go to the dentist because I think she can relieve my pain, but the dentist actually bribed her way through dental school and has no idea how to relieve my pain, then I am mistaken about having a good (normative) reason to go to the dentist.

The first condition, although intuitive, is not trivial, since it implies an externalist account of autonomy. Internalist accounts say that to be autonomous, a person’s decisions, desires, etc. need to be responsive to other states or attitudes that are internal to that person. An example is Frankfurt’s seminal account of freedom of will (1971), often applied to autonomy as well, according to which desires are autonomous if they are responsive to other (higher-order) desires. The problem with such accounts is that it seems possible that a person’s different mental states or attitudes are all the result of manipulation or oppressive socialisation (see Friedman 1986; Gorin 2014). If this is the case, even if these internal states or attitudes are responsive to each other, it would be strange to call the person autonomous. A classic example is the ‘deferential wife’ who willingly makes her interests and wishes subservient to her husband’s as a result of socialisation (Hill 1991, 5–6). To be able to criticise such forms of socialisation on grounds of autonomy, we need more constraints on what mental states, attitudes, or decisions plausibly count as autonomous.

There are roughly two ways to provide these additional constraints: a procedural (or weakly externalist) and a substantive (or strongly externalist) approach, respectively. The procedural approach says that decisions, preferences, etc. only count as autonomous if the process by which they were obtained meets certain conditions. This approach goes back to Dworkin, who characterises autonomy as ‘authenticity plus procedural independence’ (1981, 212) and has been further developed by Christman (1991) and Friedman (2003, chap. 1). Such accounts are externalist because the conditions of procedural independence do not fully depend on the person’s own mental states, but weakly so in the sense that no external constraints are placed on the content of the person’s decisions. On the other hand, substantivists, such as Benson (1991) or Stoljar (2000), do pose such constraints. For instance, some decisions, such as the decision to sell oneself into slavery, would be ruled out.

The condition that the autonomous person must be reason-responsive is meant to be neutral between proceduralist and substantivist accounts of autonomy. This is because normative reasons themselves can be understood in a proceduralist or a substantivist way. A proceduralist about reasons, such as Williams (1981), would say that people have normative reasons to do whatever they would desire to do when this desire is formed under conditions of rational deliberation. A substantivist about reasons, such as Parfit (2011, chaps. 2–3), would say that there are substantive constraints on the content of people’s normative reasons, regardless of what reasons they actually end up acting for (or would end up acting for under certain procedural conditions) (for further discussion, see Hooker and Streumer 2004). By remaining neutral between substantivist and proceduralist accounts of reasons, a reason-based account of autonomy can remain neutral between substantivist and proceduralist views of autonomy as well.

4.2 Condition 2: Autonomy as Coherence

The first condition on autonomy excludes obvious cases of manipulation, but it does not yet account for Raz’s observation that an autonomous agent is ‘(part) author of his own life’. A person who only follows courses of action for which she perceives good normative reasons is, it seems, only responding to her reasons, but never determining her own course of action. What is needed for this person to be autonomous, additionally, is that she decides between reasons in a way reflective of her broader commitments or projects (or lack thereof). To understand how this is possible, consider Raz’s observation that, in many situations, there is more than one course of action that is backed up by sufficient reasons (Raz 1986, chap. 13 and 388–389). Presumably, Alice could choose to spend her evening in several ways, all of which are reason-responsive. But some of her potential decisions may be more responsive to her broader commitments than others. In other words, while Alice could act for different reasons, only some of these reasons would be her own.[16]

The second condition, then, is that an autonomous person must maintain a certain coherence between her decisions and her commitments and projects, as well as between those commitments and what Meyers calls ‘life plans’: ‘A life plan is a comprehensive projection of intent, a conception of what a person wants to do in life’ (Meyers 1989, 49). Thus life plans create coherence between more specific projects, which in turn create coherence between individual decisions. Coherence here does not mean unity: a person may have many different commitments, or the commitment to act for many different types of reasons (see Raz 1986, 369–371). But such heterogeneous commitments must not conflict with each other. Neither should personal coherence be conceived as static. The autonomous person is likely to change her commitments and life plans in response to new experiences or insights (Meyers 1989, 42–49).

This second condition is in line with a diverse range of coherentist accounts of autonomy, such as those offered by Hurley (1989, chap. 15), Meyers (1989), and Ekstrom (1993, 2005). In Hurley’s view, for instance, persons can be seen as consisting of different ‘subsystems’, which is to say that they are responsive to different, often conflicting reasons, provided by conflicting values. Autonomy, as self-determination, consists in the ongoing process of deciding which values are most important in which situations:

Agents both are conflicted and seek personal coherence; in the presence of conflicting values, an agent may be uncertain what kind of person he is or wants to be, and in seeking coherence he may be trying to determine his very identity as a person in relation to various subsystems responsible to conflicting values. (Hurley 1989, 319)

Hurley spells out this process of ‘seeking coherence’ in somewhat intellectualist terms, as finding a theory that tells us how to order different values in different circumstances. No doubt, a general capacity for theoretical reflection on one’s values is important to achieve the kind of personal coherence that underlies autonomy, and we do sometimes choose between conflicting reasons by reflecting on what values we (should) care most about. But it is also plausible that the choice between different conflicting sets of reasons is sometimes a more practical matter. For instance, Alice may decide to spend some of her free evenings learning French, without an explicit belief that the value of learning new languages is, for her, higher than the competing value of dancing the night away. For the decision to be at least substantially autonomous, it need not have any deeper ground than a vague feeling that it may be a good idea to learn French. What matters for Alice’s autonomy is that, once the decision is made, she follows up on it or else knowingly revokes it.

Thus the coherence condition entails that autonomy has a historical element: an autonomous person’s decisions are responsive to projects she has previously decided to pursue. As Raz states (see also Bratman 2005):

A person who has projects is sensitive to his past in at least two respects. He must be aware of having the pursuits he has, and he must be aware of his progress in them. Normally one needs to know of one’s progress with one’s projects in order to know how to proceed with them (and unless one tries to pursue them rationally then they are not one’s projects anymore). (Raz 1986, 385)

Our commitments and life plans can thus arbitrate between competing reasons that would otherwise be deadlocked. Once Alice has made the decision to learn French, her reasons to learn French on any subsequent free evening take on an importance for her that they did not previously have. As Raz says, ‘One creates values, generates, through one’s developing commitments and pursuits, reasons which transcend the reasons one had for undertaking one’s commitments and pursuits’ (Raz 1986, 387). Of course, the degree to which Alice’s actions cohere with her commitments and life plans, and therefore the extent to which her life is autonomous, does not generally depend on individual decisions. One evening of learning French, rather than going to the club, does not significantly impact her overall autonomy. It is rather Alice’s pattern of choices that is relevant for her autonomy. If Alice were to frequently and repeatedly base her evening plans on reasons not tied up with her broader commitments, projects, and life plans, then her life would be less autonomous than it could have been, since an important part of it (how she spends her evenings) is not traceable to her own authorship. (This is presupposing, of course, that Alice does have some broader project or preference that has ramifications for how she should spend her evenings. It may be that her projects and preferences are neutral to the way in which she spends her evenings. But to be autonomous, she cannot be neutral to all parts of her life.) Although Alice may still be reason-responsive, she would be so in a shallow sense. The reasons she acts for would not really be her own.

4.3 Upshot

Thus, we have found that there are at least two conditions for someone’s being autonomous: 1) she must be able to (sufficiently often) act for normative reasons (whether these are procedurally or substantively defined), and 2) she must (sufficiently often) make decisions coherent with her broader commitments, projects, and life plans. Note that both criteria are gradual. First, a person can be more or less reason-responsive, since she can have a better or worse understanding of which normative reasons she has. Second, a person’s projects and decisions can be coherent to a greater or lesser extent. This means that personal autonomy is also a gradual property. Online influence cannot be expected to guarantee that users are maximally autonomous, since this would be an unreasonably high bar. What would be problematic is if such influence has a high probability of leaving users substantially less autonomous than they would otherwise be.

This gives us a tool for analysing whether and in what instances online influence can be harmful: it is harmful either if it risks making users substantially less reason-responsive or if it makes it difficult for users to develop and follow up on substantially coherent projects or patterns of choices. Both conditions of autonomy can be undermined in two ways (see also Oshana’s 2006 capacity-condition distinction). First, outside influence may undermine the competences required to respond to reasons or to develop and follow up on coherent projects. Second, outside influence may create conditions where even someone possessing competences required for autonomy cannot be expected to successfully exercise these competences. Thus there are (at least) four ways in total in which online influence may undermine autonomy, summarised in Table 1. It should be noted that the separations between these four categories are somewhat artificial. It will often be unclear whether someone is made to act for a reason that does not cohere with her projects or for no reason at all. It will also often be unclear whether online influence makes it impossible to make a specific choice autonomously or erodes competence more generally (if a person will repeatedly be unable to make autonomous choices, her autonomy competences will likely also diminish through lack of use).

Table 1:

Forms of undermining autonomy.

           | Reason-responsiveness | Personal coherence
Competence | Undermining the capacity to respond to reasons (e.g. eroding the ability to distinguish truth from falsehood, rational deliberation, etc.). | Undermining the capacity to develop coherent projects and follow up on them (e.g. eroding self-esteem).
Conditions | Presenting nonreasons as if they were reasons (e.g. presenting falsehoods, misleading framing, etc.). | Providing strong temptation to act for noncohering reasons (e.g. playing into transient motivational states).

In the following two sections, I argue that some forms of online influence risk undermining autonomy in these four ways. The argument will proceed partially by pointing to characteristics of deep-learning curation models that make them particularly hazardous to autonomy and partially by means of examples. It should be kept in mind that these examples are mostly illustrative and not meant to offer comprehensive empirical evidence that the identified risks actually materialise – a more thorough overview of relevant empirical evidence will be left to future work.

5 How Algorithms May Undermine Reason-Responsiveness

Social media risk undermining both the competences and the conditions required for reason-responsiveness. Starting with the latter: people cannot be expected to respond to reasons when presented with considerations that are not normative reasons but are easily mistakable for normative reasons. It is not hard to find examples of such influence in the literature on social media. Many of the examples canvassed by Sahebi and Formosa (2022) are of this kind. For instance, fake news presents false information as if it were legitimate (Sahebi and Formosa 2022, 69). A user acting under the influence of fake news does not act for a normative reason, since what is false cannot count in favour of anything. Targeted advertising, based on psychological profiles of users, can target users’ psychological weaknesses, such as irrational fears, to make them act in ways that are irrational, and therefore not reason-responsive (Calo 2013; Spencer 2020). This may happen either because users act for motivating reasons that are not in fact normative reasons – that is, they are mistaken about what reasons they have – or because they fail to act for their acknowledged reasons due to (temporary) weakness of will (see also Noggle 2025, 69–74, for a distinction between manipulation through inducing faulty mental states and manipulation through inducing akrasia, a distinction which, as Noggle stresses, is often porous).

Social media influence can also undermine users’ competence to appreciate normative reasons, even if they are not actively misled or psychologically targeted. General cognitive skills are often needed to understand the effects of one’s actions and therefore to know whether there are good reasons for these actions. For instance, to assess whether there is a good reason to vote for a certain political candidate, a voter would need to gather potentially complex information about the candidate’s policies and the likely effects those policies would have and compare this information to information about rival candidates’ policies. Research suggests that (excessive) social media use can have several adverse cognitive effects, such as attention deficits and worse memory retention (for a review, see Shanmugasundaram and Tamilarasu 2023).

Instead of going further into examples (see the above references), I will consider the question of why algorithmically curated timelines tend to jeopardise autonomy in this way. The main problem lies in the way that the deep-learning algorithms that are likely used for timeline curation, such as DNNs, are trained. As mentioned in Section 2, these algorithms are trained to change users’ behaviour to maximise metrics such as time spent on a platform or click-through rate. This training process is blind as to whether the stimuli used to maximise these metrics are reason-tracking or respect users’ own capacity to track reasons. A DNN does not mind whether its outputs are reason-tracking or not – indeed, it has no conception of normative reasons to begin with.
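To make this blindness vivid, consider a deliberately simplified sketch of the kind of training objective such a model optimises (the metric names and weights below are invented; actual objectives are proprietary). Every term tracks an engagement metric; nothing in the objective rewards showing true information or providing users with genuine normative reasons.

```python
# Illustrative only: a toy training reward built from engagement metrics.
# The metric names and weights are invented for the sake of the example.

def training_reward(session):
    return (
        1.0 * session["minutes_on_platform"]
        + 2.0 * session["ad_clicks"]
        + 0.5 * session["likes_shares_comments"]
        # Note what is absent: no term rewards showing true or relevant content,
        # and no term penalises exploiting a user's momentary vulnerability.
    )

print(training_reward({"minutes_on_platform": 45, "ad_clicks": 2, "likes_shares_comments": 6}))  # 52.0
```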

At the same time, the algorithms used in timeline curation may be very effective in selecting stimuli that influence the behaviour of a particular user, either at a particular point in time or repeatedly. This is because they have access to large amounts of fine-grained data that plausibly correlate with the user’s mental properties (Burr and Cristianini 2019) – both general traits and real-time mental states. On the plausible assumption that certain types of mental states make people more vulnerable to forms of influence that are not reason-tracking, this makes undermining users’ reason-responsiveness particularly likely (Calo 2013; Spencer 2020).

This combination of deep-learning models’ strong ability to track users’ mental properties and their inability to grasp normative concepts creates particular risks to the first condition of autonomy. Moreover, given the black-box nature of deep-learning algorithms, it is not possible for humans to actively engage with the way these algorithms generate their recommendations. Therefore, the only way that concerns for reason-responsiveness can be incorporated is to employ human moderators to filter the recommendations of the algorithms in question a posteriori – for instance by removing posts that contain false information. But such moderation is likely insufficient to remove the risk of undermining reason-responsiveness. First, the sheer amount of content on large social media platforms makes human moderation very difficult. Second, given their access to individualised information, social media algorithms may learn to target users’ individual biases (e.g. Alice, in the evening, is likely to respond to certain types of influence that do not provide her with normative reasons). To avoid such targeting, human moderators would need to have knowledge of individual users’ weaknesses, which is both infeasible and problematic in terms of privacy. Thus moderators can, at most, remove general falsehoods, which are only a subset of those types of online influence that undermine reason-responsiveness.

The discussion so far suggests a possible solution: to be more explicit about the ‘reasons’ timeline selections are based on, so that users are in a better position to critically assess these.[17] Indeed, some social media platforms have taken steps in this direction. Facebook has a ‘Why am I seeing this ad?’ feature that explains to users why they are shown certain advertisements. In an example posted by the global director of ads and monetisation privacy at Meta (Facebook’s parent company) (Pavón 2023), an ad for bath bombs is explained by stating (among other things) that the user has interacted with ads about personal care and with pages and posts about haircare on Facebook and visited websites about personal care that shared data with Facebook. Of course, these explanations are very incomplete; they do not, for instance, explain the timing of advertisements. But we could imagine that a user, upon being faced with explanations like this, is better positioned to consider whether the considerations behind the advertisement correspond to normative reasons. Consider again the example of Alice’s being induced to go to a concert. This time, imagine that Alice is shown an explanation along the lines of ‘Your Sharely activity indicates an interest in spending your evenings more actively’, and decides, upon reflection, that she would indeed like to spend her evenings more actively. Here, her behaviour would more clearly be reason-responsive. Would that mean that Sharely’s influence would be acceptable? To answer this question, we must consider the second condition for personal autonomy.

6 How Algorithms May Undermine Personal Coherence

It may seem that influencing someone by making them respond to a reason is always unproblematic. However, Mitchell and Douglas (2024) have recently argued that this is not the case and that problematic reason-giving influence is especially likely in online contexts (although, unlike here, they focus only on persuasion, that is, on intentional reason-giving influence). Mitchell and Douglas claim that some of these cases of reason-giving influence are wrongful because they harm autonomy. The second criterion for personal autonomy, identified in Section 4.2, suggests another way in which reason-giving influence can harm autonomy (different from, though somewhat related to, the ways analysed by Mitchell and Douglas): the reasons for which someone is made to act do not cohere with that person’s broader commitments, projects, etc.

Again, we may differentiate between cases where an agent is placed in conditions that make it hard for her to act for reasons reflecting her broader commitments and cases where an agent’s general capacity to form and follow up on such commitments is eroded. The first type of case may occur when an agent is tempted by a reason that is particularly salient given a transient motivational state she is in. Alvarez (2010, 60–63) observes that there is a close connection between motivating reasons and motives. She characterises a motive as either a desire (e.g. the desire for revenge may be a motive), or an indication of a certain (type of) desire (e.g. ambition may be a motive which is not itself a desire but indicates that an agent desires success). When an agent has a certain motive, then, he has a certain desire, and ‘given that desire, certain facts (or apparent facts) will seem to the agent to be reasons for him to act’ (Alvarez 2010, 62). I wish to slightly broaden Alvarez’s notion of motives to what we may call motivational sets, which include all of an agent’s pro-attitudes (not only her desires) that make that agent more likely to act for certain types of considerations, by making these considerations appear to the agent as (strong) reasons. Since someone’s receptiveness to certain reasons is often dependent on temporary mental states (e.g. moods or emotional states), many motivational sets are transient. This means that certain motivational sets can cause a person to act for reasons she would not otherwise have acted for – potentially reasons that do not cohere with her broader commitments. An influencer who wants a person P to act for a certain reason R can make use of this in two ways:

  1. The influencer can cause P to have a motivational set M that makes P more responsive to reasons of the same type as R. In this case, we can say that the influencer has induced M.

  2. The influencer can present R at exactly the moment when P has motivational set M, which makes P more responsive to reasons of the same type as R. In this case, we can say that the influencer has exploited M.

To be sure, inducing or exploiting a certain motivational set in someone is not always pro tanto wrongful. Noggle offers the following example (in arguing that nonrational persuasion is not necessarily manipulative): ‘Suppose you remind me of starving children in Rwanda and describe their plight in vivid detail in order to get me to feel sad enough to assign (what you take to be) the morally proper relevance to their suffering’ (Noggle 1996, 49). The vivid details, we may assume, are not themselves reasons for Noggle to do anything (e.g. donate to a famine relief charity). Assuming that there is a normative reason for Noggle to donate, it is the fact that the children are starving, but this fact remains the same no matter how it is presented. The influencer thus relies on inducing a motivational set, rather than directly providing a reason. Still, Noggle is surely right that such influence is sometimes innocuous or at least excusable.

However, we can also add further context to Noggle’s example such that the influencer is doing something pro tanto wrongful. Say that Noggle has a limited amount of money to spend on charity and has for a long time considered which cause to spend it on before deciding that he cares most about protecting the natural environment and should therefore donate to the World Wildlife Fund. If his acquaintance is aware of this but still proceeds to remind him of the plight of the Rwandan children, the behaviour appears more problematic (for a discussion of advice that is offered too late, see Mitchell and Douglas 2024). The coherence condition of autonomy explains this: Noggle has decided to prioritise the value of supporting the natural environment, but his acquaintance makes it more likely that he will act for a reason that is not in line with this commitment, thereby hindering his personal coherence.

The case where Alice is made to act on insecurity is similar to this latter case. Since the insecurity that Sharely has induced in Alice makes her particularly responsive to certain types of reasons (namely, those to do with popular activities), this makes it more difficult for her to act in line with her broader commitments (e.g. her commitment to learn French during her evenings). It may even be that the reason Alice has to attend the concert would not exist absent her insecure state of mind. In this case, although Alice has a normative reason, the normative weight of this reason only depends on changes that Sharely has made in her mental state. When she then acts for that reason, she acts for a reason that is not really her own. Once the bout of insecurity has faded away, Alice may feel alienated from her decision.

The problem persists even if Sharely is not itself responsible for Alice’s insecurity. Say that Alice feels insecure for reasons that have nothing to do with her Sharely timeline. Still, Sharely’s algorithm may detect this insecurity through Alice’s clicks, cursor movements, etc. and as a result target the concert advertisement to her. (This possibility is not purely hypothetical, as we will see in a moment.) It may then still be the case that she acts for a reason that is less coherent with her overall commitments than some competing reason.

Again, Alice’s autonomy does not hinge on her decision to spend any particular evening one way or another. The problem arises when Alice’s evening plans are repeatedly interfered with by motivational sets pushing her away from her projects or commitments. This is likely to happen if she spends a lot of time on Sharely – like most people who are active on social media. After all, if Sharely’s algorithms have learnt that Alice is responsive to reasons playing into her insecurity during the evening, they are likely to provide such reasons whenever Alice logs in during the evening. Activities chosen for these reasons may therefore crowd out Alice’s original goal of learning French.

Now, it can be questioned whether Sharely (or Telos, its parent company) can be blamed for Alice’s loss of personal coherence, however unfortunate it may be. In the second version of Noggle’s example, the pro tanto wrongness of the behaviour of Noggle’s acquaintance rested on the latter knowing that Noggle had a commitment to donating to the protection of the natural environment. But surely, social media algorithms – or the companies behind them – cannot be expected to know about the personal commitments of their users.

Perhaps not. But social media companies may still be in a position to predict that the way their algorithms work makes it very likely that users’ personal coherence is undermined, without knowing any details of specific users’ commitments. Indeed, two features of deep-learning social media algorithms, such as DNNs, make such effects plausible: 1) their lack of selectiveness in inducing and exploiting motivational sets and 2) their precision in detecting motivational sets.

With regard to 1, note that Noggle’s acquaintance, if he is not manipulative, ought to be selective in inducing and/or exploiting Noggle’s motivational sets. Specifically, he should only do so in order to help Noggle to act for reasons that he (the acquaintance) believes are particularly important – such as the reasons to donate to famine relief in Rwanda. However, social media algorithms have no such restraint. Their reward functions cause them to continuously affect users’ motivational sets, and to exploit detected motivational sets whenever this is likely to maximise one of the training metrics.
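To make this lack of selectiveness concrete, the following minimal sketch shows the kind of objective such a system might optimise. It is purely illustrative: the function name, the weighting, and the assumption that the reward is a simple combination of session time and ad clicks are expository devices, not a description of Sharely’s, Telos’, or any real platform’s code.

```python
# Hypothetical sketch of an engagement-style reward signal. All names and the
# weighting below are illustrative assumptions, not any real platform's objective.

AD_CLICK_WEIGHT = 600.0  # assumed trade-off: one ad click counts like ten minutes on-site


def engagement_reward(seconds_on_platform: float, ad_clicks: int) -> float:
    """Score a session by time spent and ads clicked, and by nothing else.

    Note what is absent: nothing in this objective refers to the user's own
    commitments or plans. Any detected motivational set is therefore worth
    exploiting whenever doing so raises this score - the 'lack of
    selectiveness' discussed in the text.
    """
    return seconds_on_platform + AD_CLICK_WEIGHT * ad_clicks


if __name__ == "__main__":
    # A session in which an exploited insecurity produces an ad click scores
    # higher than a longer session that produces no clicks.
    print(engagement_reward(seconds_on_platform=900, ad_clicks=1))   # 1500.0
    print(engagement_reward(seconds_on_platform=1200, ad_clicks=0))  # 1200.0
```

Whatever the precise form of the objective, the relevant point is that it contains no term that tracks whether the reasons made salient to the user cohere with her own projects.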

With regard to 2, algorithms have access to many real-time behavioural signals that would elude most human influencers, such as cursor movements, scrolling behaviours, etc., and these signals plausibly correlate with motivational states (Burr and Cristianini 2019). As an example, consider an internal Facebook document that was leaked to The Australian in 2017. The document states that by monitoring the online activity of teenagers (some as young as 14) in real time, Facebook ‘can work out when young people feel “stressed”, “defeated”, “overwhelmed”, “anxious”, “nervous”, “stupid”, “silly”, “useless”, and a “failure”’, and when they need ‘a confidence boost’ (Davidson 2017). Although Facebook denied that this information was used to target advertisements (Levin 2017), the apparent availability of such methods, combined with Facebook’s profit incentive, gives cause for concern. What is more, given that users’ emotions at least in part depend on their timelines, social media algorithms may, at least in principle, learn to induce these emotions, then detect whether and when exactly they are successful in doing so, and subsequently provide advertisements that play into these emotions.
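The pattern just described can be pictured schematically as a detect-and-target loop. The sketch below is again purely illustrative: the signals, the crude mood classifier, and the content and ad choices are hypothetical stand-ins for what would, in a real system, be learned functions, and nothing here reproduces Facebook’s or any other platform’s actual pipeline.

```python
# Schematic illustration of the induce-detect-exploit pattern discussed above.
# Every name here (BehaviouralSignals, infer_mood, pick_posts, pick_ad) is a
# hypothetical stand-in; real systems would learn these mappings from data.

from dataclasses import dataclass


@dataclass
class BehaviouralSignals:
    scroll_speed: float   # e.g. pixels per second
    dwell_time_s: float   # seconds spent on the last post
    late_night: bool      # whether the session started late in the evening


def infer_mood(signals: BehaviouralSignals) -> str:
    """Crude stand-in for a learned classifier from behavioural signals to mood."""
    if signals.late_night and signals.dwell_time_s > 20 and signals.scroll_speed < 100:
        return "insecure"
    return "neutral"


def pick_posts(mood: str) -> str:
    """Induce: when no exploitable mood is detected, favour social-comparison content."""
    return "friends_at_late_night_events" if mood == "neutral" else "more_of_the_same"


def pick_ad(mood: str) -> str:
    """Exploit: once the mood is detected, serve an ad that plays into it."""
    return "concert_tickets_this_weekend" if mood == "insecure" else "generic_ad"


if __name__ == "__main__":
    signals = BehaviouralSignals(scroll_speed=80.0, dwell_time_s=25.0, late_night=True)
    mood = infer_mood(signals)
    print(mood, pick_posts(mood), pick_ad(mood))
    # -> insecure more_of_the_same concert_tickets_this_weekend
```

The sketch is only meant to show the structure of the worry: nothing in such a loop checks whether the reason it makes salient for the user coheres with her broader commitments.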

In short, the nature of social media algorithms creates a risk that users are, whenever they are on social media (which, for many people, is a significant part of their lives), pushed to act for a selective subset of the normative reasons they could reasonably act for (provided they are made to act for normative reasons at all). It would be a fortunate coincidence if these reasons lined up with their personal projects and commitments. In particular, social media appear to make reasons related to popularity or social status particularly salient for many users. It has been found that Facebook users tend to compare themselves primarily to other users whom they perceive as having more positive characteristics than themselves, for example, as being more popular or successful, which reduces their self-esteem (Vogel et al. 2014). Plausibly, this induces in some users a desire to be more popular, causing them to undertake action to align their lifestyle more with what is perceived as popular. Now, the fact that an option is popular may be a good reason to pursue it – for instance, if a TV series is very popular, it may be reasonable to assume that its quality is high and therefore to watch it. But the problem is that if a social media platform leads users to make popularity-based decisions too frequently, they may lose a sense of their lifestyle choices as responsive to their personal commitments (unless doing what is popular is an enduring personal commitment). In this way, a series of choices that are individually reason-responsive may lead to a lifestyle pattern that a user ends up feeling alienated from.

This discussion also suggests ways in which social media users’ competence to maintain personal coherence is undermined (i.e. the top right cell of Table 1). Meyers (1989, 83–84) notes that one of the competences required to formulate and act in line with coherent life plans is the capacity to stand by one’s plans in the face of opposition from others (unless of course one decides to take this opposition to heart upon reflection). This requires a sense of self-worth and self-esteem as an independent decision-maker, which, Meyers argues, is often undermined by socialisation which pressures individuals, especially women, to defer to socially approved projects (Meyers 1989, 167–171; see also Hill 1991, chap. 2; Mackenzie 2014, 35–39). Research suggests that social media tend to reduce self-esteem not only as a transient emotional state (state self-esteem), but also as an enduring characteristic (trait self-esteem) (Vogel et al. 2014). Findings about the connection between social media use and self-esteem are somewhat inconclusive, however, possibly because the relation is strongly mediated by individual differences (Cingel et al. 2022; Saiphoo et al. 2020; Valkenburg et al. 2021). Still, it is likely that at least some social media users’ self-esteem is negatively impacted. Such users would plausibly be less able to fulfil their personal projects in the face of external pressure.

More radically, continued exposure to social media may imperil the formation of coherent life plans and self-conceptions altogether. Our preferences and commitments not only influence our actions but are in turn influenced by them. If Alice continuously finds herself unable to follow up on her commitment to learn French, she may start to question this commitment itself, and eventually her self-conception as a person curious about languages and cultures. In this context, Smith (2020) argues that social media users’ authentic digital identities are under threat from algorithmically created ‘corporate identities’ over which they have no control, and which thus undermine their autonomy. Similarly, Bonicalzi et al. (2023) argue that recommender systems, such as those used by social media, ‘participate in the construction of users’ identity in ways that may or may not be responsive to their view of themselves or that may be detrimental to their healthy and satisfactory development’ (828), and as such threaten personal autonomy. I cannot here go more deeply into the question of whether algorithmic online influence may undermine users’ life plans and identities as such, since this would require a discussion of identity theory that is beyond the scope of this paper. I merely note that a coherentist view of autonomy underscores the importance of the formation and maintenance of a coherent identity and refer to Smith’s and Bonicalzi et al.’s work for further discussion.

7 Conclusions

Starting from the intuition that some forms of online influence exercised by social media platforms leave users insufficiently able to make their own choices, I have considered the commonly made claim that these platforms manipulate their users. The conclusion was that such claims are difficult to substantiate because of their reliance on intentions or other mental concepts, and that it is better to shift to the autonomy risk view, which focuses directly on the predictable harms to autonomy that social media pose. I then sketched the outlines of an account of which cases of algorithmic influence (via social media) wrongly undermine users’ personal autonomy. In summary, it was concluded that a person’s autonomy can be wrongly undermined by a curation algorithm if (or insofar as) one of four situations obtains (and can be predicted to obtain with high probability). First, the algorithm may provide the user with considerations that are not normative reasons but are easily mistaken for reasons. Second, the algorithm may undermine the user’s competences required for recognising and acting for normative reasons. Third, the algorithm may make the user act for reasons that do not cohere with her broader commitments or life plans. Fourth, the algorithm may undermine the user’s competences required for forming and maintaining broader commitments and life plans. Due to the nature of deep-learning curation algorithms, it is likely that these algorithms regularly cause all four situations to obtain. If this is true, the companies deploying such algorithms put their users’ autonomy at undue risk and are therefore morally culpable.

Being the author of a coherent life is difficult – it requires the concentration and mental energy to dedicate oneself to a subset of all reasons one could possibly act for in the face of other reasons and other stimuli pulling one in different directions. The introduction of social media into many areas of our lives risks undermining these mental resources by continuously exposing us to nonreasons or to reasons that may appear particularly salient during certain motivational episodes (often induced by those same social media) but do not in the end reflect who we really are or want to be. The protection of ourselves as autonomous creatures requires that we get this risk more clearly in view so that we may act against it.


Corresponding author: Joris Graff, Utrecht University, Philosophy and Religious Studies, Utrecht, Netherlands, E-mail:

Acknowledgements

I thank Lucie White, Joel Anderson, Jan Broersen, and Dominik Klein for comments on and/or discussions of earlier versions of this paper. I also thank the organisers and participants of the Rethinking Ethics – Reimagining Technology conference, which took place on October 2–4 2024 at the University of Twente, Enschede, where a version of this paper was presented and received helpful comments. Many thanks also to the participants of the AI Ethics Colloquium at Utrecht University, where an earlier version of the paper was read and discussed. Finally, I thank an anonymous reviewer for their encouraging comments and suggestions.

References

Aichner, T., M. Grünfelder, O. Maurer, and D. Jegeni. 2021. “Twenty-Five Years of Social Media: a Review of Social Media Applications and Definitions from 1994 to 2019.” Cyberpsychology, Behavior, and Social Networking 24 (4): 215–22. https://doi.org/10.1089/cyber.2020.0134.

Altman, E., P. Kumar, S. Venkatramanan, and A. Kumar. 2013. “Competition over Timeline in Social Networks.” In ASONAM 13: Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, Niagara Falls, Ontario, 1352–7. New York: Association for Computing Machinery. https://doi.org/10.1145/2492517.2500243.

Alvarez, M. 2010. Kinds of Reasons: An Essay in the Philosophy of Action. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199550005.001.0001.

Anscombe, G. E. M. 1957. Intention. Oxford: Basil Blackwell.

Audi, R. 1986. “Acting for Reasons.” Philosophical Review 95 (4): 511–46. https://doi.org/10.2307/2185049.

Barnhill, A. 2022. “How Philosophy Might Contribute to the Practical Ethics of Online Manipulation.” In The Philosophy of Online Manipulation, edited by F. Jongepier, and M. Klenk, 49–71. New York/London: Routledge. https://doi.org/10.4324/9781003205425-4.

Baron, M. 2003. “Manipulativeness.” Proceedings and Addresses of the American Philosophical Association 77 (2): 37–54. https://doi.org/10.2307/3219740.

Baron, M. 2014. “The Mens Rea and Moral Status of Manipulation.” In Manipulation: Theory and Practice, edited by C. Coons, and M. Weber, 98–120. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199338207.003.0005.

Benson, P. 1991. “Autonomy and Oppressive Socialization.” Social Theory and Practice 17 (3): 385–408. https://doi.org/10.5840/soctheorpract199117319.

Bonicalzi, S., M. De Caro, and B. Giovanola. 2023. “Artificial Intelligence and Autonomy: on the Ethical Dimension of Recommender Systems.” Topoi 42 (3): 819–32. https://doi.org/10.1007/s11245-023-09922-5.

Bratman, M. E. 1993. “Shared Intention.” Ethics 104 (1): 97–113. https://doi.org/10.1086/293577.

Bratman, M. E. 2005. “Planning Agency, Autonomous Agency.” In Personal Autonomy: New Essays on Personal Autonomy and its Role in Contemporary Moral Philosophy, edited by J. S. Taylor, 33–57. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511614194.002.

Burr, C., and N. Cristianini. 2019. “Can Machines Read our Minds?” Minds and Machines 29 (3): 461–94. https://doi.org/10.1007/s11023-019-09497-4.

Cadwalladr, C., and E. Graham-Harrison. 2018. “Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach.” The Guardian. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election (accessed March 28, 2024).

Calo, R. 2013. “Digital Market Manipulation.” George Washington Law Review 82: 995–1051. https://doi.org/10.2139/ssrn.2309703.

Christman, J. 1991. “Autonomy and Personal History.” Canadian Journal of Philosophy 21 (1): 1–24. https://doi.org/10.1080/00455091.1991.10717234.

Cingel, D. P., M. C. Carter, and H.-V. Krause. 2022. “Social Media and Self-Esteem.” Current Opinion in Psychology 45: 101304. https://doi.org/10.1016/j.copsyc.2022.101304.

Dancy, J. 2000. Practical Reality. Oxford: Oxford University Press.

Davidson, D. 2017. “Facebook Targets ‘Insecure’ Young People to Sell Ads.” The Australian. https://www.theaustralian.com.au/business/media/facebook-targets-insecureyoung-people-to-sell-ads/news-story/a89949ad016eee7d7a61c3c30c909fa6/ (accessed March 30, 2024).

Davis, J. L. 2017. “Curation: a Theoretical Treatment.” Information, Communication & Society 20 (5): 770–83. https://doi.org/10.1080/1369118X.2016.1203972.

Dworkin, G. 1981. “The Concept of Autonomy.” Grazer Philosophische Studien 12 (1): 203–13. https://doi.org/10.1163/18756735-90000122.

Ekstrom, L. W. 1993. “A Coherence Theory of Autonomy.” Philosophy and Phenomenological Research 53 (3): 599–616. https://doi.org/10.2307/2108082.

Ekstrom, L. W. 2005. “Autonomy and Personal Integration.” In Personal Autonomy: New Essays on Personal Autonomy and its Role in Contemporary Moral Philosophy, edited by J. S. Taylor, 33–57. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511614194.007.

Faden, R. R., and T. L. Beauchamp. 1986. A History and Theory of Informed Consent. Oxford: Oxford University Press.

Feinder, L. 2019. “Facebook Fails to Convince Lawmakers It Needs to Track Your Location at all Times.” CNBC. https://www.cnbc.com/2019/12/17/facebook-responds-to-senators-questions-on-location-tracking-policy.html (accessed March 19, 2024).

Frankfurt, H. G. 1971. “Freedom of the Will and the Concept of a Person.” The Journal of Philosophy 68 (1): 5–20. https://doi.org/10.2307/2024717.

Friedman, M. 1986. “Autonomy and the Split-Level Self.” The Southern Journal of Philosophy 24 (1): 19–35. https://doi.org/10.1111/j.2041-6962.1986.tb00434.x.

Friedman, M. 2003. Autonomy, Gender, Politics. Oxford: Oxford University Press. https://doi.org/10.1093/0195138503.001.0001.

Ganjoo, S. 2018. “Facebook Confirms that It Tracks How You Move Mouse on the Computer Screen.” India Today. https://www.indiatoday.in/technology/news/story/facebook-confirms-that-it-tracks-how-you-move-mouse-on-the-computer-screen-1258189-2018-06-12 (accessed March 30, 2024).

Goodin, R. E. 1980. Manipulatory Politics. New Haven: Yale University Press.

Gorin, M. 2014. “Towards a Theory of Interpersonal Manipulation.” In Manipulation: Theory and Practice, edited by C. Coons, and M. Weber, 73–97. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199338207.003.0004.

Gorin, M. 2022. “Gamification, Manipulation, and Domination.” In The Philosophy of Online Manipulation, edited by F. Jongepier, and M. Klenk, 199–215. New York/London: Routledge. https://doi.org/10.4324/9781003205425-12.

Griffin, A. 2015. “Facebook News Feed Algorithm to Track How Long Users Spend Reading Stories.” The Independent. https://www.independent.co.uk/tech/facebook-news-feed-algorithm-to-track-how-long-users-spend-reading-stories-10320715.html (accessed March 19, 2024).

Hill, T. E. 1991. Autonomy and Self-respect. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511609237.

Hooker, B., and B. Streumer. 2004. “Procedural and Substantive Practical Rationality.” In The Oxford Handbook of Rationality, edited by A. R. Mele, and P. Rawling, 57–74. Oxford: Oxford University Press. https://doi.org/10.1093/0195145399.003.0004.

Hurley, S. L. 1989. Natural Reasons: Personality and Polity. New York: Oxford University Press, USA.

Jongepier, F., and M. Klenk, eds. 2022. The Philosophy of Online Manipulation. New York/London: Routledge. https://doi.org/10.4324/9781003205425.

Keeling, G., and C. Burr. 2022. “Digital Manipulation and Mental Integrity.” In The Philosophy of Online Manipulation, edited by F. Jongepier, and M. Klenk, 253–71. New York/London: Routledge. https://doi.org/10.4324/9781003205425-15.

Khan, S., and I. Bhatt. 2018. “Curation.” In The International Encyclopedia of Media Literacy, edited by R. Hobbs, and P. Mihailidis. Hoboken (NJ): John Wiley & Sons. https://doi.org/10.1002/9781118978238.ieml0047.

Kilic, B., and S. Crabbe-Field. 2021. “You Should Be Worried About How Much Info WhatsApp Shares with Facebook.” The Guardian. https://www.theguardian.com/commentisfree/2021/may/14/you-should-be-worried-about-how-much-info-whatsapp-shares-with-facebook (accessed April 01, 2025).

Kim, S. A. 2017. “Social Media Algorithms: Why You See What You See.” Georgetown Law Technology Review 2 (1): 147–54.

Klenk, M. 2021. Interpersonal Manipulation. SSRN. https://doi.org/10.2139/ssrn.3859178.

Klenk, M. 2022. “(Online) Manipulation: Sometimes Hidden, Always Careless.” Review of Social Economy 80 (1): 85–105. https://doi.org/10.1080/00346764.2021.1894350.

Kracht, T., L. Sotto, and B. Sooy. 2022. “Facebook Pivots from Facial Recognition System Following Biometric Privacy Suit.” Reuters. https://www.reuters.com/legal/legalindustry/facebook-pivots-facial-recognition-system-following-biometric-privacy-suit-2022-01-26/ (accessed March 19, 2024).

Levin, S. 2017. “Facebook Told Advertisers It Can Identify Teens Feeling ‘Insecure’ and ‘Worthless’.” The Guardian. https://www.theguardian.com/technology/2017/may/01/facebook-advertising-data-insecure-teens (accessed March 30, 2024).

Mackenzie, C. 2014. “Three Dimensions of Autonomy: a Relational Analysis.” In Autonomy, Oppression, and Gender, edited by A. Veltman, and M. Piper, 15–41. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199969104.003.0002.

Mackenzie, C., and N. Stoljar, eds. 2000. Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780195123333.001.0001.

Manne, K. 2014. “Non-Machiavellian Manipulation and the Opacity of Motive.” In Manipulation: Theory and Practice, edited by C. Coons, and M. Weber, 221–46. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199338207.003.0011.

Meyers, D. T. 1989. Self, Society, and Personal Choice. New York: Columbia University Press.

Mills, C. 1995. “Politics and Manipulation.” Social Theory and Practice 21 (1): 97–112. https://doi.org/10.5840/soctheorpract199521120.

Mitchell, T., and T. Douglas. 2024. “Wrongful Rational Persuasion Online.” Philosophy & Technology 37 (1): 35. https://doi.org/10.1007/s13347-024-00725-z.

Narayanan, A. 2023. Understanding Social Media Recommendation Algorithms. New York: Knights First Amendment Institute. https://knightcolumbia.org/content/understanding-social-mediarecommendation-algorithms (accessed September 04, 2025).

Noggle, R. 1996. “Manipulative Actions: a Conceptual and Moral Analysis.” American Philosophical Quarterly 33 (1): 43–55. https://www.jstor.org/stable/20009846 (accessed September 04, 2025).

Noggle, R. 2025. Manipulation: Its Nature, Mechanisms, and Moral Status. Oxford: Oxford University Press. https://doi.org/10.1093/9780198924920.001.0001.

Norlock, K. 2021. “Free and Always Will Be? on Social Media Participation as It Undermines Individual Autonomy.” Canadian Journal of Practical Philosophy 5 (1). https://doi.org/10.22329/cjpp.v5i1.8195.

Nys, T., and B. Engelen. 2022. “Commercial Online Choice Architecture: When Roads are Paved with Bad Intentions.” In The Philosophy of Online Manipulation, edited by F. Jongepier, and M. Klenk, 135–55. New York/London: Routledge. https://doi.org/10.4324/9781003205425-9.

Oshana, M. 2006. Personal Autonomy in Society. Aldershot: Ashgate.

Parfit, D. 2011. On What Matters, Vol. 1. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:osobl/9780199572809.003.0001.

Pavón, P. 2023. Increasing our Ads Transparency. https://about.fb.com/news/2023/02/increasing-our-ads-transparency/ (accessed April 06, 2025).

Pepp, J., R. Sterken, M. McKeever, and E. Michaelson. 2022. “Manipulative Machines.” In The Philosophy of Online Manipulation, edited by F. Jongepier, and M. Klenk, 91–107. New York/London: Routledge. https://doi.org/10.4324/9781003205425-6.

Raz, J. 1986. The Morality of Freedom. Oxford: Clarendon Press.

Sahebi, S., and P. Formosa. 2022. “Social Media and Its Negative Impacts on Autonomy.” Philosophy & Technology 35 (3): 70. https://doi.org/10.1007/s13347-022-00567-7.

Saiphoo, A. N., L. Dahoah Halevi, and Z. Vahedi. 2020. “Social Networking Site Use and Self-Esteem: A Meta-analytic Review.” Personality and Individual Differences 153: 109639. https://doi.org/10.1016/j.paid.2019.109639.

Searle, J. 1990. “Collective Intentions and Actions.” In Intentions in Communication, edited by P. R. Cohen, J. Morgan, and M. Pollack, 401–15. Cambridge (MA): MIT Press. https://doi.org/10.7551/mitpress/3839.003.0021.

Shanmugasundaram, M., and A. Tamilarasu. 2023. “The Impact of Digital Technology, Social Media, and Artificial Intelligence on Cognitive Functions: A Review.” Frontiers in Cognition 2: 1203077. https://doi.org/10.3389/fcogn.2023.1203077.

Smith, C. H. 2020. “Corporatised Identities ≠ Digital Identities: Algorithmic Filtering on Social Media and the Commercialisation of Presentations of Self.” In Ethics of Digital Well-being: A Multidisciplinary Approach, edited by L. Floridi, and C. Burr, Vol. 140, 55–80. Philosophical Studies Series. London: Springer. https://doi.org/10.1007/978-3-030-50585-1-3.

Spencer, S. B. 2020. “The Problem of Online Manipulation.” University of Illinois Law Review 2020 (3): 959–1005. https://doi.org/10.2139/ssrn.3341653.

Stoljar, N. 2000. “Autonomy and the Feminist Intuition.” In Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self, edited by C. Mackenzie, and N. Stoljar, 94–111. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780195123333.003.0005.

Susser, D., B. Roessler, and H. Nissenbaum. 2019. “Online Manipulation: Hidden Influences in a Digital World.” Georgetown Law Technology Review 4 (1). https://doi.org/10.2139/ssrn.3306006.

Tuomela, R. 1991. “We Will Do It: An Analysis of Group-Intentions.” Philosophy and Phenomenological Research 51 (2): 249–77. https://doi.org/10.2307/2108127.

Tuomela, R., and K. Miller. 1988. “We-Intentions.” Philosophical Studies 53: 115–37. https://doi.org/10.1007/BF00353512.

Valkenburg, P., I. Beyens, J. L. Pouwels, I. I. van Driel, and L. Keijsers. 2021. “Social Media Use and Adolescents’ Self-Esteem: Heading for a Person-Specific Media Effects Paradigm.” Journal of Communication 71 (1): 56–78. https://doi.org/10.1093/joc/jqaa039.

Van de Poel, I., L. Royakkers, and S. D. Zwart. 2015. Moral Responsibility and the Problem of Many Hands. New York: Routledge. https://doi.org/10.4324/9781315734217.

Vogel, E. A., J. P. Rose, L. R. Roberts, and K. Eckles. 2014. “Social Comparison, Social Media, and Self-Esteem.” Psychology of Popular Media Culture 3 (4): 206–22. https://doi.org/10.1037/ppm0000047.

Ware, A. 1981. “The Concept of Manipulation: Its Relation to Democracy and Power.” British Journal of Political Science 11 (2): 163–81. https://doi.org/10.1017/S0007123400002556.

Williams, B. 1981. “Internal and External Reasons.” In Moral Luck: Philosophical Papers 1973–1980, 101–13. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139165860.009.

Received: 2025-05-02
Accepted: 2025-10-28
Published Online: 2025-11-19

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
