A Study on Broadcast Networks for Music Genre Classification
Abstract
Due to the increased demand for music streaming and recommender services and recent developments in music information retrieval frameworks, Music Genre Classification (MGC) has attracted the community's attention. However, convolutional approaches are known to lack the ability to efficiently encode and localize temporal features. In this paper, we study broadcast-based neural networks, aiming to improve localization and generalizability under a small parameter budget (about 180k), and investigate twelve variants of broadcast networks, discussing the effects of block configuration, pooling method, activation function, normalization mechanism, label smoothing, channel interdependency, LSTM block inclusion, and variants of inception schemes. Our computational experiments on relevant datasets such as GTZAN, Extended Ballroom, HOMBURG, and Free Music Archive (FMA) show state-of-the-art classification accuracies in Music Genre Classification. Our approach offers insights into, and the potential to enable, compact and generalizable broadcast networks for music and audio classification.
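Among the techniques the abstract lists, label smoothing admits a compact illustration. The sketch below is a generic formulation of label smoothing, not the paper's specific implementation; the function name and the epsilon value are illustrative assumptions.

```python
# Hedged sketch of label smoothing: instead of a one-hot target, each class
# receives a small share epsilon of the probability mass, which discourages
# overconfident predictions. Generic technique, not the paper's exact setup.

def smooth_labels(target_index, num_classes, epsilon=0.1):
    """Return a smoothed target distribution over `num_classes` genres."""
    off_value = epsilon / num_classes          # mass spread uniformly
    on_value = 1.0 - epsilon + off_value       # remaining mass on the true class
    return [on_value if i == target_index else off_value
            for i in range(num_classes)]

# Example: 10 genres (as in GTZAN), true class index 3.
dist = smooth_labels(3, 10)
```

With epsilon = 0.1 and 10 classes, the true class receives 0.91 and each other class 0.01, so the distribution still sums to one.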
- Publication:
- arXiv e-prints
- Pub Date:
- August 2022
- DOI:
- 10.48550/arXiv.2208.12086
- arXiv:
- arXiv:2208.12086
- Bibcode:
- 2022arXiv220812086H
- Keywords:
- Computer Science - Sound;
- Computer Science - Artificial Intelligence;
- Computer Science - Multimedia;
- Electrical Engineering and Systems Science - Audio and Speech Processing;
- Electrical Engineering and Systems Science - Signal Processing
- E-Print:
- Accepted for oral presentation at the World Congress on Computational Intelligence (WCCI 2022) - International Joint Conference on Neural Networks (IJCNN 2022)