Modified DCTNet for audio signals classification
Abstract
In this paper, we investigate DCTNet for audio signal classification. Its output feature is related to Cohen's class of time-frequency distributions. We introduce the use of an adaptive DCTNet (A-DCTNet) for audio signal feature extraction. The A-DCTNet applies the idea of the constant-Q transform, with the center frequencies of its filterbank geometrically spaced. The A-DCTNet adapts to different acoustic scales, and it captures low-frequency acoustic information, to which human auditory perception is sensitive, better than features such as Mel-frequency spectral coefficients (MFSC). We use the features extracted by the A-DCTNet as input to classifiers. Experimental results show that the A-DCTNet combined with recurrent neural networks (RNNs) achieves a state-of-the-art bird song classification rate and improves artist identification accuracy on music data. These results demonstrate the A-DCTNet's applicability to signal processing problems.
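As a concrete illustration of the constant-Q idea described in the abstract, the minimal Python sketch below computes geometrically spaced filterbank center frequencies. This is not code from the paper; the function name `constant_q_center_frequencies` and the particular frequency range and bins-per-octave values are illustrative assumptions.

```python
import numpy as np

def constant_q_center_frequencies(f_min, f_max, bins_per_octave):
    """Constant-Q-style geometric spacing: f_k = f_min * 2**(k / bins_per_octave),
    for all k such that f_k <= f_max."""
    n_bins = int(np.floor(bins_per_octave * np.log2(f_max / f_min))) + 1
    k = np.arange(n_bins)
    return f_min * 2.0 ** (k / bins_per_octave)

# Example: 12 bins per octave from 27.5 Hz (piano A0) to 8 kHz.
freqs = constant_q_center_frequencies(27.5, 8000.0, 12)
print(freqs[:5])  # ~[27.5, 29.14, 30.87, 32.70, 34.65] Hz
```

Note that geometric spacing allocates proportionally more filters per hertz at low frequencies, which is consistent with the abstract's claim that the A-DCTNet better captures low-frequency acoustic information.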
- Publication: Acoustical Society of America Journal
- Pub Date: October 2016
- DOI: 10.1121/1.4970932
- arXiv: arXiv:1612.04028
- Bibcode: 2016ASAJ..140.3405X
- Keywords: Computer Science - Sound
- E-Print: International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, United States, March 2017