The purpose of this project is twofold. First, it aims to adapt the Classification Image technique, originally developed in the visual domain, to speech-in-noise perception. This behavioral psycholinguistic method relies on reverse correlation: using a statistical model, we link the time-frequency configuration of each specific noise instance to the phonetic confusion it causes, on a trial-by-trial basis. This allows us to estimate how heavily each time-frequency region is weighted in the listener's decision. The second objective of the project is to apply this new psycholinguistic technique to a wide range of tasks, phonetic contrasts, and groups of listeners.
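The reverse-correlation logic can be illustrated with a toy simulation (Python, not part of the fastACI toolbox; the number of bins, the weights, and the noise levels are arbitrary choices for illustration). A simulated listener's binary responses are driven by the noise in two time-frequency bins, and the difference between the mean noise fields of the two response categories recovers those bins:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins = 5000, 20   # hypothetical: 20 time-frequency bins, 5000 trials

# Hypothetical "true" perceptual weights: only two bins drive the decision
true_w = np.zeros(n_bins)
true_w[3], true_w[7] = 1.0, -1.0

# Noise energy in each time-frequency bin, drawn independently on each trial
noise = rng.normal(size=(n_trials, n_bins))

# Simulated listener: logistic decision on the weighted noise plus internal noise
drive = noise @ true_w + rng.normal(scale=0.5, size=n_trials)
resp = (rng.random(n_trials) < 1 / (1 + np.exp(-drive))).astype(int)

# Reverse correlation: difference of the mean noise fields for the two responses
aci = noise[resp == 1].mean(axis=0) - noise[resp == 0].mean(axis=0)
print(np.argmax(aci), np.argmin(aci))  # should recover the informative bins 3 and 7
```

The published studies fit a statistical model (a penalized GLM) rather than this raw difference of means, but the underlying trial-by-trial logic is the same.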
The MATLAB toolbox we are developing is available as a GitHub repository: https://github.com/aosses-tue/fastACI. It can be used to replicate previous auditory revcorr experiments or to design new ones.
With this toolbox you can run the listening experiments used in Varnet et al. 2013, Varnet et al. 2015, Osses & Varnet 2021, Varnet & Lorenzi 2022, and Osses & Varnet 2022 (see the "References" section). You can also reproduce some of the figures contained in these references.
Citation key | fastACI experiment name | Type of background noise | Target sounds
---|---|---|---
Varnet et al. 2013 | speechACI_varnet2013 | white | /aba/-/ada/, female speaker
Varnet et al. 2015 | speechACI_varnet2015 | white | /alda/-/alga/-/arda/-/arga/, male speaker
Osses & Varnet 2021 | speechACI_varnet2013 | speech-shaped noise (SSN) | /aba/-/ada/, female speaker
Varnet & Lorenzi 2022 | modulationACI | white | modulated or unmodulated tones
Osses & Varnet subm | speechACI_Logatome | white, bump, MPS | /aba/-/ada/, male speaker from the OLLO database
Administrative details
Funding:
NSCo Doctoral School, L. Varnet’s thesis scholarship (2012-2015)
Agence Nationale de la Recherche, fast-ACI project (2021-2023)
Selected publications and presentations:
English
Varnet, L., Knoblauch, K., Meunier, F., Hoen, M. (2013). Using auditory classification images for the identification of fine acoustic cues used in speech perception. Frontiers in Human Neuroscience, 7:865. doi: 10.3389/fnhum.2013.00865. (article)
Varnet, L., Knoblauch, K., Serniclaes, W., Meunier, F., Hoen, M. (2015). A Psychophysical Imaging Method Evidencing Auditory Cue Extraction during Speech Perception: A Group Analysis of Auditory Classification Images. PLoS ONE, 10(3):e0118009. doi: 10.1371/journal.pone.0118009. (article)
Osses Vecchi, A. & Varnet, L. (2021). Consonant-in-noise discrimination using an auditory model with different speech-based decision devices. DAGA proceedings (article)
Osses Vecchi, A. & Varnet, L. (subm). A microscopic investigation of the effect of random envelope fluctuations on phoneme-in-noise perception (preprint)
Presentation to the Hearing Institute (2021, Paris): "New methodologies for studying listening strategies in phoneme categorization tasks" (slides)
Presentation to the Laboratoire de Psychologie et NeuroCognition (2023, Grenoble): "Using reverse correlation to study speech perception" (slides)
Presentation to ARO 2022: "Auditory Classification Images: A Psychophysical Paradigm to Explore Listening Strategies in Phoneme Perception" (slides, video)
French
Varnet, L. (2015). Identification des indices acoustiques utilisés dans la compréhension de la parole dégradée [Identification of the acoustic cues used in the comprehension of degraded speech]. Université Claude Bernard Lyon 1. (thesis)
Presentation at the Fête de la Science (2020, ENS Paris): "Observer l'esprit humain sans neuroimagerie" [Observing the human mind without neuroimaging] (video)
On this blog:
– L'image de classification auditive, partie 1 : Le cerveau comme boîte noire [The auditory classification image, part 1: The brain as a black box]
– L'Image de Classification Auditive, partie 2 : À la recherche des indices acoustiques de la parole [The Auditory Classification Image, part 2: In search of the acoustic cues of speech]
– A visual compendium of auditory revcorr studies