15th ANNUAL EARLY HEARING DETECTION & INTERVENTION MEETING
March 13-15, 2016 • San Diego, CA


3/15/2016  |   3:00 PM - 3:30 PM   |  Topical Session 7   |  Royal Palm 5/6   |  3 - Language Acquisition and Development

Development of young children’s skill in real-time processing of ASL

The ability to interpret language rapidly from moment to moment as the signal unfolds in time is critical for developing language proficiency. Research on spoken language learning by very young children has used eye movements during real-time sentence processing as a window into their emerging comprehension abilities. But for children learning a visual language like American Sign Language (ASL), eye movements and gaze are used to process both the linguistic signal and the relevant information in the visual world to which the linguistic signal refers. Can gaze patterns in ASL learners also provide an index of their lexical processing skills? In this study, we developed the first measures of children’s real-time ASL comprehension abilities, investigating links between these skills and children’s age, vocabulary, and hearing status. Native ASL-learning children (16-53 mos, n=29; 16 deaf and 13 hearing) and fluent adult signers (n=19) participated in a task of real-time ASL comprehension. Children’s comprehension skills improved with age, moving toward the efficiency of adult signers. Importantly, children’s processing skills were significantly correlated with vocabulary size, showing that the ability to establish reference in real time is linked to language learning. Finally, we found that deaf and hearing ASL learners showed qualitatively similar patterns of looking behavior, suggesting that visual language processing skills are driven by experience with a visual language, not by deafness. These new discoveries contribute to the literature highlighting parallels between signed and spoken language development when children are exposed to native sign input. Moreover, this new eye-tracking procedure will provide a valuable method for researchers and educators to track developmental trajectories of early language learning in children acquiring signed languages like ASL.

  • Summarize how lexical processing efficiency supports language development.
  • Explain how eye-gaze measures can be used to assess language abilities in early-identified deaf infants.
  • Discuss evidence that the development of visual language processing follows a similar trajectory as spoken language processing.

Presentation:
This presentation has not yet been uploaded or the speaker has opted not to make the presentation available online.

Handouts:
Handouts are not available.

CART:
CART transcripts are not yet available; they will be posted shortly after the conference.


Presenters/Authors



Kyle MacDonald (Primary Presenter, Author), Stanford University, kyle.macdonald@stanford.edu;
Kyle MacDonald is a Ph.D. candidate in the Department of Psychology at Stanford University. His research focuses on the intersection of early language learning, social cognition, and language processing. He has worked on projects investigating the relationship between early language skills and language input and has helped develop new linguistic processing efficiency measures for children learning American Sign Language.

ASHA DISCLOSURE:

Financial - No relevant financial relationships exist.

Nonfinancial - No relevant nonfinancial relationships exist.


Todd LaMarr (Author), Center for Mind and Brain, tclamarr@ucdavis.edu;
Research Assistant

ASHA DISCLOSURE:

Financial -

Nonfinancial -


Virginia Marchman (Author), Stanford University, marchman@stanford.edu;
Virginia Marchman is a developmental psychologist at Stanford University. Her main areas of research are language development, language disorders, and early childhood development. Her specific interests include individual differences in typically developing and late-talking children, and lexical and grammatical development in monolingual and bilingual learners. She has worked extensively with the MacArthur-Bates Communicative Development Inventories (CDI), developed the CDI Scoring program, and currently serves on the CDI Advisory Board.

ASHA DISCLOSURE:

Financial -

Nonfinancial -


David Corina (Author), University of California, Davis, dpcorina@ucdavis.edu;
David Corina, Ph.D., is a professor in the Departments of Linguistics and Psychology at the University of California, Davis. He is the director of the Cognitive Neurolinguistics Laboratory at the Center for Mind and Brain. His work explores the neural representation of signed and spoken languages.

ASHA DISCLOSURE:

Financial - No relevant financial relationships exist.

Nonfinancial - No relevant nonfinancial relationships exist.


Anne Fernald (Author), Stanford University, afernald@stanford.edu;
Anne Fernald is the Josephine Knotts Knowles Professor in Human Biology at Stanford University. She specializes in children's language development, investigating the development of speed and efficiency in children's early comprehension in relation to their emerging lexical and grammatical competence.

ASHA DISCLOSURE:

Financial -

Nonfinancial -