Abstract:
|
[EN] This paper proposes improvements to the Aures tonality metric, which can be used for estimating the frequency masking of complex sounds. The perception of tonality has been extensively studied for simple sounds, such as pure tones and narrowband noise signals, but there are no solid conclusions for complex sounds. Previously, Aures' method has mostly been used in the psychoacoustic analysis of noise signals. The modifications presented here are a finer spectral resolution, a lowered tonal threshold, and a different exponent in one of its weighting functions. These may appear to be minor changes with respect to the original Aures (OA) method, but they have proven to be perceptually significant. The improved Aures (IA) method has been validated by a subjective test using three different multitone signals in the presence of a narrowband noise, which yielded the subjectively perceived masking thresholds. Results show that the IA method presents an average error of 0.8 dB when predicting the subjective masking thresholds obtained in the test, while the average errors of the OA and a baseline spectral flatness method exceed 5 dB. In addition, a second subjective test has been carried out to assess the perceptual equalization of a music signal using the proposed IA and spectral flatness methods. The second test confirms that the IA method is preferred. Therefore, the improved Aures method is proposed as a reliable tonality metric for complex sounds, such as multitone signals and music.
|
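Note: the abstract refers to a baseline spectral flatness method. As a point of reference only, the sketch below illustrates the standard spectral flatness measure (geometric mean over arithmetic mean of the power spectrum); the frame length, windowing, and example signals are illustrative assumptions, not the configuration used in the paper.

    import numpy as np

    def spectral_flatness_db(frame):
        """Spectral flatness of one signal frame, in dB.

        Geometric mean over arithmetic mean of the power spectrum:
        near 0 dB for noise-like spectra, strongly negative for tonal ones.
        """
        w = np.hanning(len(frame))                       # analysis window (assumed Hann)
        spectrum = np.abs(np.fft.rfft(frame * w)) ** 2   # power spectrum
        spectrum = np.maximum(spectrum, 1e-12)           # guard against log(0)
        geo_mean = np.exp(np.mean(np.log(spectrum)))
        arith_mean = np.mean(spectrum)
        return 10.0 * np.log10(geo_mean / arith_mean)

    # Example: a 1 kHz tone in weak noise scores far below 0 dB,
    # while white noise alone scores much closer to 0 dB.
    fs = 48000
    t = np.arange(4096) / fs
    tone = np.sin(2 * np.pi * 1000.0 * t) + 0.01 * np.random.randn(t.size)
    noise = np.random.randn(t.size)
    print(spectral_flatness_db(tone), spectral_flatness_db(noise))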
Acknowledgements:
|
This work has been partially supported by the GVA Regional Government through PROMETEO/2019/109, the Spanish Government through BES-2016-077899 and RED2018-102668-T, and the European Union together with the Spanish Government through RTI2018-098085-BC41 (MCIU/AEI/FEDER). This research is also part of the activities of the "Nordic Sound and Music Computing Network" (NordicSMC), NordForsk project No. 86892. Juan Estreder's work has been carried out partly during his research stay at the Acoustics Lab of Aalto University.
|