From 2009 to 2018, I was a researcher in linguistics, specializing in:

  • phonetics (the study of the perception and production of speech sounds),
  • psycholinguistics (the study of real-time language processing, and how language and cognition affect one another), and
  • neurolinguistics (the study of the structure and function of the brain as it affects language).

This page is an overview of some of my projects during that time. You can find a full list of my published work here. A handful of these projects have example code hosted on GitHub.


Speech biomarkers of mental illness

Collaborators: Vijay Mittal, Matt Goldrick, Joseph Keshet

Can speech be used as a diagnostic tool in clinical psychology? Psychotic disorders like schizophrenia impact motor control, affecting posture and head and limb movement. There’s also some evidence for motor disruptions in young adults who are at risk for these disorders, but haven’t yet been diagnosed. Because speaking depends on a high degree of motor control, it’s plausible that these motor disruptions also impact speech. If so, the voice may serve as a non-invasive, early-warning tool to detect psychosis risk in young adults.


Acquiring speech sounds in a new language

Your native language(s) has a big impact on how you perceive and pronounce speech sounds. In some cases, this can make it hard to acquire sounds in a new language, especially if you start learning as an adult. In my dissertation, I explored some different ways to teach adult learners challenging sounds in a new language, focusing both on auditory training (helping you hear the differences between new sounds) and articulatory training (helping you position your tongue, lips, and larynx to pronounce them).


Automatic detection of cognitive difficulty

Collaborators: Matt Goldrick, Rhonda (McClain) Mudry, Joseph Keshet, Yossi Adi

Do cognitive challenges (e.g. speaking a second language, being an older speaker) leave detectable signs in speech? This project attempts to answer that question by developing and applying automatic processing algorithms to detect subtle variations in speech. These subtle hints in pronunciation may indicate that a speaker is facing a more demanding cognitive task than usual in finding the right word or putting a sentence together.


Color perception and categorization

Collaborators: Terry Regier, Yang Xu, Joe Austerweil, Tom Griffiths

How accurate are your perception and memory of physical stimuli? When it comes to color perception, your experience is sometimes (but not always!) affected by the language you speak. Our work in this domain suggests that you are most influenced by the categories named by your native language (“basic” color words like red, blue, and green) when cognitive demands are high. In those circumstances, we see speakers of different languages recalling and classifying the same color in different ways.

Cibelli, E.*, Xu, Y.*, Austerweil, J. L., Griffiths, T. L., & Regier, T. (2016). The Sapir-Whorf Hypothesis and Probabilistic Inference: Evidence from the Domain of Color. PLoS One, 11(7), e0158725.
(*Co-first authors.)


The neural pathways of word recognition

Collaborators: Keith Johnson, Eddie Chang, Matt Leonard

Measuring neural activity during language processing is a challenge - in part because it happens so quickly and across wide networks of the brain. A technique called electrocorticography (ECoG) allows a rare look into the simultaneous spatial and temporal dimensions of language processing. We used this approach to trace the neural pathways used to process real words and word-like nonsense forms (e.g. “tesolivy”, “piteretion”) millisecond by millisecond, and millimeter by millimeter.

Cibelli, E.S., Leonard, M.K., Johnson, K., & Chang, E.F. (2015). The influence of lexical statistics on temporal lobe cortical dynamics during spoken word listening. Brain and Language 147, 66-75.


Speech and aging

Collaborators: Susanne Gahl, Kat Hall, Ronald Sprouse

We know that the voice changes throughout childhood, and again late in life. But what happens during the longest span of life - early and middle adulthood? This question is a challenge to study, because it’s rare to have recordings of a person’s voice over many decades. To tackle this challenge, we measured speech from the Up! movies - a documentary series that has followed the same group of people at seven-year intervals throughout their lives. This corpus is freely available for language and speech researchers to use.

Gahl, S., Cibelli, E., Hall, K., and Sprouse, R. (2014). The "Up" corpus: A corpus of speech samples across adulthood. Corpus Linguistics and Linguistic Theory 10(2), 315-328.