A Unified Model For Voice and Accent Conversion In Speech and Singing using Self-Supervised Learning and Feature Extraction
Abstract
This paper presents a new voice conversion model capable of transforming both speaking and singing voices. It addresses key challenges in current systems, such as conveying emotion, handling pronunciation and accent changes, and reproducing non-verbal sounds. A standout feature of the model is its ability to perform accent conversion on hybrid voice samples that encompass both speech and singing, changing the speaker's accent while preserving the original content and prosody. The proposed model uses an encoder-decoder architecture: a HuBERT-based encoder processes the acoustic and linguistic content of the speech, while a HiFi-GAN decoder generates audio matching the target speaker's voice. The model incorporates fundamental frequency (f0) features and singer embeddings so that pitch and tone accuracy and vocal identity are preserved during transformation. This approach improves how naturally and flexibly voice style can be transformed, showing strong potential for applications in voice dubbing, content creation, and technologies such as Text-to-Speech (TTS) and Interactive Voice Response (IVR) systems.
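The abstract only outlines the architecture, so the following is a minimal sketch of how such a pipeline might be wired together in PyTorch. HuBERT supplies the content features and librosa's pyin stands in for the f0 extractor; `HiFiGANDecoder` is a hypothetical placeholder (the real HiFi-GAN upsampling stack and the paper's exact conditioning scheme are not given in the abstract), and the simple concatenation of content, f0, and speaker embedding is an assumption, not the paper's confirmed design.

```python
# Minimal sketch of a HuBERT -> (f0 + speaker embedding) -> vocoder pipeline.
# `HiFiGANDecoder` is a hypothetical stand-in, NOT the paper's decoder;
# feature dimensions and the concatenation scheme are assumptions.
import torch
import torch.nn as nn
import librosa
from transformers import HubertModel

class HiFiGANDecoder(nn.Module):
    """Placeholder for a HiFi-GAN-style vocoder mapping frame-level
    features back to a waveform (one transposed conv instead of the
    real multi-stage upsampling stack)."""
    def __init__(self, in_dim: int, hop: int = 320):
        super().__init__()
        self.net = nn.ConvTranspose1d(in_dim, 1, kernel_size=hop * 2,
                                      stride=hop, padding=hop // 2)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, frames, dim) -> waveform: (batch, samples)
        return self.net(feats.transpose(1, 2)).squeeze(1)

def convert(wav, sr, speaker_emb, hubert, decoder):
    # 1. Content encoding with frozen, self-supervised HuBERT features.
    inputs = torch.from_numpy(wav).float().unsqueeze(0)
    with torch.no_grad():
        content = hubert(inputs).last_hidden_state  # (1, frames, 768)

    # 2. Fundamental-frequency track, resampled to HuBERT's frame rate.
    f0, _, _ = librosa.pyin(wav, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = torch.nan_to_num(torch.from_numpy(f0).float())  # unvoiced -> 0
    f0 = nn.functional.interpolate(f0[None, None, :],
                                   size=content.shape[1]).transpose(1, 2)

    # 3. Broadcast the target singer/speaker embedding over all frames
    #    and concatenate with content and f0 (assumed conditioning).
    spk = speaker_emb[None, None, :].expand(-1, content.shape[1], -1)
    feats = torch.cat([content, f0, spk], dim=-1)

    # 4. Decode to a waveform in the target voice.
    return decoder(feats)

hubert = HubertModel.from_pretrained("facebook/hubert-base-ls960").eval()
decoder = HiFiGANDecoder(in_dim=768 + 1 + 256)  # content + f0 + embedding
```

With a 16 kHz float NumPy waveform `wav` and a 256-dimensional target embedding, `convert(wav, 16000, emb, hubert, decoder)` returns a waveform tensor, though only the untrained placeholder decoder's output in this sketch.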
- Publication:
- arXiv e-prints
- Pub Date:
- December 2024
- DOI:
- 10.48550/arXiv.2412.08312
- arXiv:
- arXiv:2412.08312
- Bibcode:
- 2024arXiv241208312C
- Keywords:
- Computer Science - Sound;
- Computer Science - Machine Learning;
- Computer Science - Multimedia;
- Electrical Engineering and Systems Science - Audio and Speech Processing;
- 68T50;
- I.2.7
- E-Print:
- 7 pages, 5 figures, 2 tables