One of my duties as a hearing scientist is trying to better understand the mechanisms of hearing. Auditory models are essential tools in this regard, as they allow us to formulate and validate theories about the relationship between psychoacoustical data (observations) and the underlying mechanisms (physiology). Auditory models are also of interest for audio signal processing: the "internal" representation of sound signals available at their output can be used in applications to account for auditory perception in the signal chain. A major drawback of auditory models in this context, though, is their rather high computational load.
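To give a rough idea of what such an internal representation can look like, here is a minimal Python sketch of a generic auditory front end: an ERB-spaced gammatone filterbank followed by half-wave rectification and low-pass smoothing as a crude stand-in for hair-cell envelope extraction. This is an illustrative toy, not any specific published model, and all parameter values (number of channels, frequency range, smoothing cutoff) are assumptions chosen for the example:

```python
import numpy as np
from scipy.signal import gammatone, lfilter, butter

def erb_space(f_low, f_high, n):
    """Center frequencies equally spaced on the ERB-number scale
    (Glasberg & Moore, 1990)."""
    erb = lambda f: 21.4 * np.log10(4.37e-3 * f + 1.0)
    erb_inv = lambda e: (10.0 ** (e / 21.4) - 1.0) / 4.37e-3
    return erb_inv(np.linspace(erb(f_low), erb(f_high), n))

def internal_representation(x, fs, n_channels=32, f_low=80.0, f_high=6000.0):
    """Toy 'internal representation': gammatone filterbank, then
    half-wave rectification and envelope smoothing per channel."""
    fcs = erb_space(f_low, f_high, n_channels)
    b_lp, a_lp = butter(2, 1000.0 / (fs / 2))    # 1 kHz envelope low-pass
    rep = np.empty((n_channels, len(x)))
    for i, fc in enumerate(fcs):
        b, a = gammatone(fc, 'iir', fs=fs)       # 4th-order IIR gammatone
        y = lfilter(b, a, x)                     # basilar-membrane filtering
        rep[i] = lfilter(b_lp, a_lp, np.maximum(y, 0.0))  # rectify + smooth
    return fcs, rep

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 440 * t)              # 1 s, 440 Hz test tone
    fcs, rep = internal_representation(x, fs)
    print(rep.shape)                             # (32, 16000)
```

The output is a channels-by-time matrix, which already hints at the computational load mentioned above: the representation is far larger than the input signal.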
My involvement in auditory modeling consists in working with existing auditory models rather than developing my own. In particular, I test the ability of state-of-the-art models to predict our data on time-frequency masking and, where necessary, add or modify model parameters or stages so as to better account for the data. Besides that, I compare the internal representations computed by these models with our perceptual time-frequency representations of sounds. The basic idea is to assess the advantages and shortcomings of each approach (i.e., auditory model vs. perceptually motivated time-frequency transform) for signal processing in terms of invertibility (i.e., the possibility to synthesize a signal from the representation; see the round-trip sketch below), resolution, redundancy, size of the parameter set, and computational efficiency. I am also a contributor to the Auditory Modeling Toolbox.
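The invertibility criterion can be illustrated with a plain STFT, used here only as a simple stand-in for a perceptually motivated transform: analysis followed by synthesis recovers the signal up to numerical precision, which is exactly the round-trip property at stake. The window and hop parameters below are arbitrary example values:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16000
x = np.random.randn(fs)                      # 1 s of noise as a test signal
f, t, X = stft(x, fs=fs, nperseg=512)        # analysis: TF representation
_, x_rec = istft(X, fs=fs, nperseg=512)      # synthesis: back to a signal
print(np.max(np.abs(x - x_rec[:len(x)])))    # ~1e-16: perfect reconstruction
```

Many auditory-model representations lack such an exact inverse because of nonlinear stages (e.g., the rectification in the sketch above), which is one of the trade-offs this comparison is meant to quantify.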