The Modulation Filterbank (MFB) model is a simple functional model of the human auditory system inspired by the work of Torsten Dau (Perception Model, PEMO; Dau, Kollmeier & Kohlrausch, 1997) and Stefan Ewert (Envelope Power Spectrum Model; Ewert & Dau, 2000). These computational models aim to simulate the processing of temporal and spectral cues by human listeners.
The MFB model decomposes incoming sounds into multiple frequency bands, mimicking the cochlea's filtering process. It then extracts the envelope of each band and further decomposes it into modulation channels, capturing both slow and fast temporal fluctuations. This approach allows the model to represent sounds in a three-dimensional space (time samples × frequency channels × modulation channels), providing a detailed account of how the peripheral auditory system encodes complex acoustic signals.
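The decomposition described above can be sketched in a few lines of Python. This is a minimal illustrative sketch only: the Butterworth filters, band edges, and Hilbert envelope used here are simplifying assumptions, not the gammatone filters or the published king2019 implementation.

```python
import numpy as np
from scipy.signal import butter, lfilter, hilbert

fs = 16000  # sampling rate (Hz), chosen for illustration


def bandpass(sig, lo, hi, fs, order=2):
    """Simple Butterworth band-pass (stand-in for a cochlear/modulation filter)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return lfilter(b, a, sig)


def lowpass(sig, cutoff, fs, order=2):
    """Low-pass filter for the lowest modulation channel."""
    b, a = butter(order, cutoff / (fs / 2))
    return lfilter(b, a, sig)


def mfb_decompose(sig, fs, audio_bands, mod_bands):
    """Return a (time samples x frequency channels x modulation channels) array."""
    out = np.empty((len(sig), len(audio_bands), len(mod_bands)))
    for i, (lo, hi) in enumerate(audio_bands):
        band = bandpass(sig, lo, hi, fs)       # 1. cochlear-style band split
        env = np.abs(hilbert(band))            # 2. envelope extraction
        for j, (mlo, mhi) in enumerate(mod_bands):
            if mlo == 0:                       # 3. modulation filterbank
                out[:, i, j] = lowpass(env, mhi, fs)
            else:
                out[:, i, j] = bandpass(env, mlo, mhi, fs)
    return out


# Example input: a 1-kHz tone, amplitude-modulated at 4 Hz
t = np.arange(int(0.5 * fs)) / fs
x = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)

audio_bands = [(500, 1500), (1500, 3000)]  # two audio-frequency channels
mod_bands = [(0, 2), (2, 8), (8, 32)]      # three modulation channels (Hz)
rep = mfb_decompose(x, fs, audio_bands, mod_bands)
print(rep.shape)  # (8000, 2, 3): time x frequency x modulation
```

With this toy input, most of the modulation energy in the 500–1500 Hz channel lands in the 2–8 Hz modulation band, as expected for a 4-Hz amplitude modulation.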
One of the key strengths of the MFB model is its ability to predict human performance in detecting amplitude modulation (AM) and frequency modulation (FM) under various conditions. For instance, we have used the model in tasks involving AM detection, FM detection, temporal integration, and modulation masking (see the list of publications below). Beyond behavioral predictions, the MFB model has also been used to interpret neurophysiological data recorded from animal models, such as guinea pigs and gerbils, during auditory tasks. By adjusting the model to match the specific auditory processing characteristics of these animals, we have been able to predict neural responses at various stages of the auditory pathway.
The MFB is now available (together with 50 other models) in the new release of the Auditory Modeling Toolbox (AMT 1.0), an open-access toolbox for Matlab and Octave. The model can be found under the name king2019 (see the documentation and demo).
Administrative details
Collaborators: Christian Lorenzi, Alejandro Osses, Andrew King, Nicole Miller