The purpose of this project is two-fold. First, it aims to adapt the classification image technique, originally developed in the visual domain, to speech-in-noise perception. This behavioral psycholinguistic method relies on the concept of reverse correlation: using a statistical model, we can link the time-frequency configuration of each specific noise instance to the phonetic confusion it causes, on a trial-by-trial basis. This allows us to estimate the perceptual weight of the different time-frequency regions in the listener's decision. The second objective of the project is to apply this new psycholinguistic technique to a wide range of tasks, phonetic contrasts, and groups of listeners.
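The reverse correlation idea can be illustrated with a minimal simulation. The sketch below is conceptual Python, independent of the Matlab toolbox; the grid size, trial count, and the single-cell listener template are all illustrative assumptions. A simulated listener decides based on one time-frequency cell of the noise, and the classification image (here, the simple difference of mean noise fields between the two response categories) recovers that cell:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5000 trials, each with a noise field on an
# 8 x 10 time-frequency grid (all sizes here are illustrative).
n_trials, shape = 5000, (8, 10)

# The simulated listener bases its decision on a single cell.
template = np.zeros(shape)
template[3, 4] = 1.0

noises = rng.normal(size=(n_trials, *shape))

# Simulated decision: template-weighted noise plus internal noise.
evidence = (noises * template).sum(axis=(1, 2)) \
         + rng.normal(scale=0.5, size=n_trials)
responses = evidence > 0

# Classification image: mean noise on "yes" trials minus mean
# noise on "no" trials, computed cell by cell.
ci = noises[responses].mean(axis=0) - noises[~responses].mean(axis=0)

# The peak weight falls on the cell the simulated listener used.
peak = tuple(int(i) for i in np.unravel_index(np.abs(ci).argmax(),
                                              ci.shape))
print(peak)  # → (3, 4)
```

In practice, the toolbox fits a statistical model (a GLM) rather than this simple difference of averages, which improves efficiency when trial counts are limited.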
The Matlab toolbox that we are developing is available as a GitHub repository: https://github.com/aosses-tue/fastACI. It can be used to replicate previous auditory revcorr experiments or to design new ones.
Administrative details
Funding:
NSCo Doctoral School, L. Varnet’s thesis scholarship (2012-2015)
Agence Nationale de la Recherche, fast-ACI project (2021-2023)
Selected publications and presentations:
English
Varnet, L., Knoblauch, K., Meunier, F., Hoen, M. (2013). Using auditory classification images for the identification of fine acoustic cues used in speech perception. Frontiers in Human Neuroscience, 7:865. doi: 10.3389/fnhum.2013.00865. (article)
Varnet, L., Knoblauch, K., Serniclaes, W., Meunier, F., Hoen, M. (2015). A Psychophysical Imaging Method Evidencing Auditory Cue Extraction during Speech Perception: A Group Analysis of Auditory Classification Images. PLoS ONE, 10(3):e0118009. doi: 10.1371/journal.pone.0118009. (article)
Osses Vecchi, A. & Varnet, L. (2021). Consonant-in-noise discrimination using an auditory model with different speech-based decision devices. DAGA proceedings (article)
Presentation at the Speech Science Forum (2018, University College London): "New methodologies for studying listening strategies in phoneme categorization tasks" (slides)
Presentation at the Hearing Institute (2021, Paris): slides
Presentation at ARO 2022: "Auditory Classification Images: A Psychophysical Paradigm to Explore Listening Strategies in Phoneme Perception" (slides, video)
French
Varnet, L. (2015). Identification des indices acoustiques utilisés dans la compréhension de la parole dégradée. Université Claude Bernard Lyon 1. (thesis)
Presentation at the Fête de la Science (2020, ENS Paris): « Observer l’esprit humain sans neuroimagerie » (video)
On this blog:
– L’image de classification auditive, partie 1 : Le cerveau comme boîte noire
– L’Image de Classification Auditive, partie 2 : À la recherche des indices acoustiques de la parole
– A visual compendium of auditory revcorr studies