Open set classification of sound event
Abstract
Sound is one of the primary forms of sensory information that we use to perceive our surroundings. Typically, a sound event is an audio clip sequence produced by an action; the action can be a rhythm pattern, a music genre, people speaking for a few seconds, etc. Sound event classification addresses the problem of determining what kind of audio clip a given audio sequence is. Nowadays, it is commonly solved with the following pipeline: audio pre-processing → perceptual feature extraction → classification algorithm. In this paper, we extend the traditional sound event classification algorithm to identify unknown sound events using deep learning. A compact cluster structure for known classes in the feature space helps recognize unknown classes by leaving ample room in the embedded feature space to locate unknown samples. Based on this concept, we applied center loss and supervised contrastive loss to optimize the model. The center loss minimizes the intra-class distance by pulling each embedded feature toward its cluster center, while the contrastive loss pushes the features of different classes apart. In addition, we explored the performance of self-supervised learning in detecting unknown sound events. The experimental results demonstrate that our proposed open-set sound event classification algorithm and self-supervised learning approach achieve consistent performance improvements across various datasets.
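The abstract does not give the exact formulation, but the center-loss idea it describes follows the standard definition (a learnable center per known class, with features pulled toward their class center). The sketch below is illustrative only, assuming a PyTorch-style setup; the class name, tensor shapes, and loss weights are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Standard center loss: pulls each embedded feature toward a learnable
    per-class center, shrinking intra-class spread for known classes."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center per known class in the embedding space.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # features: (batch, feat_dim) embeddings; labels: (batch,) class indices.
        batch_centers = self.centers[labels]  # center of each sample's class
        return 0.5 * ((features - batch_centers) ** 2).sum(dim=1).mean()

# Illustrative usage (shapes are assumptions, not from the paper):
center_loss = CenterLoss(num_classes=10, feat_dim=128)
feats = torch.randn(32, 128)              # embeddings from some backbone
labels = torch.randint(0, 10, (32,))
loss = center_loss(feats, labels)
# In training, this term would typically be combined with cross-entropy and a
# supervised contrastive term, e.g. total = ce + w1 * center + w2 * supcon;
# the weighting scheme is an assumption, as the abstract does not specify it.
```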
- Publication:
- Scientific Reports
- Pub Date:
- January 2024
- DOI:
- 10.1038/s41598-023-50639-7
- Bibcode:
- 2024NatSR..14.1282Y