ORIGINAL ARTICLE
SIZE DISCRIMINATION OF TRANSIENT SOUNDS: PERCEPTION AND MODELLING
 
Institute of Sound and Vibration Research, Highfield, Southampton, United Kingdom
 
 
Publication date: 2013-09-30
 
 
Corresponding author
Stefan Bleeck, Institute of Sound and Vibration Research, University Road, Highfield, Southampton SO17 1BJ, United Kingdom; e-mail: bleeck@gmail.com
 
 
J Hear Sci 2013;3(3):32-44
 
ABSTRACT
Humans are able to get an impression of the size of an object by hearing it resonate. While this ability is well described for periodic speech sounds, here we investigate the ability to discriminate the size of non-periodic transient impact sounds. Three experiments were performed on normal-hearing listeners (n=19) to investigate the importance of the spectral cue in different frequency regions. Recordings of pulse-resonance sounds made by a metal ball hitting polystyrene spheres of five different sizes were used in the experiments. The recordings were manipulated in order to show that the same cues used in speaker-size discrimination are also used for transient signals. Results show that the most prominent resonances are the most important cue, but that frequencies above 8 kHz also contribute. The results are explained by a physiologically inspired model of size discrimination that is based on the Auditory Image Model and whose key component is the Mellin transform. The model can predict which of two objects is bigger. We conclude that cues similar to those used for speaker-size discrimination are important for transient sounds.
 
 