Structured Dropout for Weak Label and Multi-Instance Learning and Its Application to Score-Informed Source Separation
Abstract
Many success stories involving deep neural networks are instances of supervised learning, where available labels power gradient-based learning methods. Creating such labels, however, can be expensive, and thus there is increasing interest in weak labels, which only provide coarse information with uncertainty regarding time, location or value. Using such labels often leads to considerable challenges for the learning process. Current methods for weak-label training often employ standard supervised approaches that additionally reassign or prune labels during the learning process. The information gain, however, is often limited, as these methods mainly boost the importance of labels for which the network already yields reasonable results. We propose treating weak-label training as an unsupervised problem and using the labels to guide representation learning and induce structure. To this end, we propose two autoencoder extensions: class activity penalties and structured dropout. We demonstrate the capabilities of our approach in the context of score-informed source separation of music.
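The abstract does not include code; the following is a minimal PyTorch sketch of how the two described autoencoder extensions could be wired up. The class and parameter names (`LabelMaskedAutoencoder`, `units_per_class`, `class_activity_penalty`) and the specific architecture are illustrative assumptions, not the authors' implementation: hidden units are grouped per class, and the weak (e.g. score-derived) activity labels either hard-mask the groups of inactive classes (structured dropout) or softly penalize their activity (class activity penalty).

```python
# Minimal sketch (assumed names/architecture, not the paper's code): an autoencoder
# whose latent units are grouped by class, with weak labels used to structure the code.
import torch
import torch.nn as nn

class LabelMaskedAutoencoder(nn.Module):
    def __init__(self, n_features, n_classes, units_per_class=4):
        super().__init__()
        n_hidden = n_classes * units_per_class
        self.units_per_class = units_per_class
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x, class_activity=None):
        # x: (batch, n_features) spectrogram frames
        # class_activity: (batch, n_classes) binary weak labels
        # (e.g. score-derived pitch activity); 1 = possibly active, 0 = inactive.
        h = self.encoder(x)
        if class_activity is not None:
            # Structured dropout: expand the per-class mask to the hidden units
            # assigned to each class and zero the units of inactive classes.
            mask = class_activity.repeat_interleave(self.units_per_class, dim=1)
            h = h * mask
        return self.decoder(h), h

def class_activity_penalty(h, class_activity, units_per_class, weight=0.1):
    # Softer alternative (sketch): penalize latent activity in hidden units
    # whose class the weak labels mark as inactive, instead of zeroing them.
    mask = class_activity.repeat_interleave(units_per_class, dim=1)
    return weight * (h * (1.0 - mask)).abs().mean()

# Usage: reconstruct magnitude spectrogram frames while the label-derived mask
# constrains which latent groups may explain each frame.
model = LabelMaskedAutoencoder(n_features=513, n_classes=88)
x = torch.rand(32, 513)                          # batch of spectrogram frames
labels = torch.randint(0, 2, (32, 88)).float()   # weak (score-derived) pitch labels

# Structured-dropout variant: hard mask on the latent code.
recon, _ = model(x, labels)
loss_dropout = nn.functional.mse_loss(recon, x)

# Class-activity-penalty variant: no masking, but penalize "forbidden" activity.
recon_soft, h_soft = model(x)
loss_penalty = nn.functional.mse_loss(recon_soft, x) \
    + class_activity_penalty(h_soft, labels, units_per_class=4)
```

Under these assumptions, the hard mask restricts which latent components may explain each frame, mirroring how a score restricts which pitches can be active, while the penalty variant imposes the same structure as a soft regularizer rather than a hard constraint.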
- Publication: arXiv e-prints
- Pub Date: September 2016
- DOI: 10.48550/arXiv.1609.04557
- arXiv: arXiv:1609.04557
- Bibcode: 2016arXiv160904557E
- Keywords: Computer Science - Machine Learning; Computer Science - Sound; I.2.6; H.5.5
- E-Print: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), New Orleans, USA, pp. 2277-2281, 2017