Our primary research paradigm is eye-tracking while listening. This method assesses comprehension in both literate and illiterate populations (e.g., your typical 5-year-old) and yields fine-grained measures of interpretations as they occur. Beyond simply providing a more sensitive index of language ability, this approach – when paired with savvy experimental design – reveals the architectural properties that support a learner’s capacity to understand language. We use two eye-tracking methods to collect our data. The most basic is the Poor Man's Eyetracker, a wooden platform with stimuli in each of its four corners. We place a video camera in the center of the platform, record participants' eye movements, and then code those movements using a computer program. The other method is the Desktop Eyetracker, which uses software that codes eye movements automatically. In studies using the Desktop Eyetracker, the stimuli are displayed on the screen of an adjacent testing computer.
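As a toy illustration only (not the lab's actual coding software, whose details are not described here), coding looks toward the four corners of a display can be thought of as classifying each gaze point by its quadrant relative to the center of the platform or screen. The function name, coordinate convention (origin at top-left), and labels below are assumptions for the sketch:

```python
def code_quadrant(x, y, width, height):
    """Classify a gaze point (x, y) into one of four corner regions.

    Assumes image coordinates: (0, 0) is the top-left of the frame,
    so smaller y values are 'upper'. Points falling exactly on a
    midline are coded as 'center' (ambiguous looks).
    """
    cx, cy = width / 2, height / 2
    if x == cx or y == cy:
        return "center"
    horiz = "left" if x < cx else "right"
    vert = "upper" if y < cy else "lower"
    return f"{vert}-{horiz}"

# Example: frame-by-frame gaze estimates from a 640x480 video
frames = [(100, 100), (500, 400), (500, 100)]
codes = [code_quadrant(x, y, 640, 480) for (x, y) in frames]
# codes is ["upper-left", "lower-right", "upper-right"]
```

In practice, per-frame codes like these are aggregated into proportions of looks to each stimulus over time, which is what yields the fine-grained interpretation measures described above.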
We are always looking for motivated and dependable undergraduates to join our research team. You can get involved in several ways. First, during the school year, students can register for 2 or 3 credits of the HESP 499 Independent Study course (6 to 9 hours per week). To make this experience worthwhile, we ask students to commit to a minimum of two semesters in the lab (one to learn, another to teach). Second, during the summer, we run the LCL Emerging Scholars Research Internship. This valuable experience runs 40 hours a week for 8 weeks during the summer. Finally, sophomores who are interested in undertaking their own research project can apply to the HESP Honors Program. Qualified candidates work closely with a faculty member to propose, conduct, and defend a research study during their junior and senior years. If you are interested in joining our research team, fill out our application form and send it with your resume to email@example.com.
Speech often comes at us very quickly. During a typical conversation, listeners may hear more than two words every second, yet they manage to integrate every new word they hear into an evolving sentence representation. Evidence suggests that this process happens in real time: we don't wait until speakers have completed their utterance to figure out what they're saying. With every new word we hear, we continuously update our representation of the structure and meaning of the sentence. Our lab is interested in how young children manage this process, and how it interacts with domain-general cognitive control, the ability to bias thoughts and actions toward a task-relevant goal. Our current research suggests that engaging 5-year-old children's cognitive-control system (by having them do a difficult inhibition task) changes how they subsequently interpret sentences. In some cases, doing a difficult inhibition task can even help children interpret difficult sentences, like passives.
Ovans, Z., Novick, J., & Huang, Y. (November, 2018). Rely on what’s reliable: Effects of cognitive-control engagement on children’s sentence comprehension. Poster presented at the 2018 Psychonomic Society Annual Meeting. New Orleans, LA.
Ovans, Z., Novick, J., & Huang, Y. (March, 2018). Better to be reliable than early: Cognitive-control effects on developmental parsing. Poster presented at the 31st Annual CUNY Conference on Human Sentence Processing. Davis, CA.
Huang, Y., Hsu, N., Gerard, J., Kowalski, A., & Novick, J. (November, 2016). Cognitive-control effects on the kindergarten path: Separating correlation from causation. Paper presented at the 41st Boston University Conference on Language Development. Boston, MA.
Within language development, divergences based on socioeconomic status (SES) are evident during the second year of life and are well established by the time a child enters school. Thus, understanding SES effects on early language skills is critical for reducing achievement gaps in school readiness. Current approaches often rely on coarse-grained measures of language abilities (e.g., overall vocabulary size, number of clauses), which provide limited insights into how SES-related differences came about in the first place. Our current research takes a more fine-grained approach, focusing on the demands associated with a single construction (passives) and examining its interpretation in populations that differ greatly in their input quantity (3- to 6-year-olds from lower- and higher-SES families). This work suggests that input quantity affects children’s real-time sensitivity to informative linguistic cues within spoken utterances, impacting their ability to effectively recruit these cues to reanalyze initial misinterpretations.
Huang, Y. & Hollister, E. (2019). Developmental parsing and linguistic knowledge: Reexamining the role of cognitive control in the kindergarten path effect. Journal of Experimental Child Psychology, 184, 210-219.
Huang, Y., Leech, K. & Rowe, M. R. (2017). Exploring socioeconomic differences in syntactic development through the lens of real-time processing. Cognition, 159, 61-75.
Leech, K., Rowe, M., & Huang, Y. (2016). Variations in the recruitment of syntactic knowledge contribute to SES differences in syntactic development. Journal of Child Language, 44, 995-1009.
Huang, Y., Zheng, X., Meng, X., & Snedeker, J. (2013). Assignment of grammatical roles in the online processing of Mandarin passive sentences. Journal of Memory and Language, 69, 589-606.
During communication, speakers often recruit prosody to evoke contrast. For example, they may accent nouns (e.g., “No, I want the PENCIL”) to distinguish referents from different categories (e.g., a pen) or accent adjectives (e.g., “Give me the ORANGE horse”) to distinguish referents from the same category (e.g., a red horse). Critically, the comprehension of prosody has clinical relevance for cochlear-implant users, for whom the primary cue (pitch) is severely diminished but the secondary cues (intensity, duration) remain intact. Our research investigated whether the ability to exploit these secondary cues varies with the computational demands of establishing contrast sets. This work suggests that listeners’ ability to exploit prosodic cues depends not only on the number of cues available in the signal but also on (1) the demands associated with mapping these cues to meaning and (2) the amount of experience listeners have with making these mappings.
Huang, Y., Catalano, A., Newman, R., & Goupell, M. (2017). Using prosody to infer discourse prominence in cochlear-implant users and normal-hearing listeners. Cognition, 166, 184-200.
Huang, Y., Catalano, A., Newman, R., & Goupell, M. (March, 2015). Using prosody to infer discourse status in normal-hearing and cochlear-implant listeners. Paper presented at the 28th Annual CUNY Conference on Human Sentence Processing. Los Angeles, CA.
Catalano, A., Huang, Y., Goupell, M., & Newman, R. (November, 2014). The use of prosody to infer discourse status in degraded speech. Poster presented at the 2014 ASHA Convention. Orlando, FL.