University of Hertfordshire

Singing voice separation based on non-vocal independent component subtraction and amplitude discrimination

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Standard

Singing voice separation based on non-vocal independent component subtraction and amplitude discrimination. / Sofianos, Stratis; Ariyaeeinia, Aladdin; Polfreman, Richard.

In: Proceedings of the 13th International Conference on Digital Audio Effects (DAFx 2010). Institute of Electronic Music and Acoustics, 2010. p. 221-224.

Harvard

Sofianos, S, Ariyaeeinia, A & Polfreman, R 2010, Singing voice separation based on non-vocal independent component subtraction and amplitude discrimination. in Proceedings of the 13th International Conference on Digital Audio Effects (DAFx 2010). Institute of Electronic Music and Acoustics, pp. 221-224.

APA

Sofianos, S., Ariyaeeinia, A., & Polfreman, R. (2010). Singing voice separation based on non-vocal independent component subtraction and amplitude discrimination. In Proceedings of the 13th International Conference on Digital Audio Effects (DAFx 2010) (pp. 221-224). Institute of Electronic Music and Acoustics.

Vancouver

Sofianos S, Ariyaeeinia A, Polfreman R. Singing voice separation based on non-vocal independent component subtraction and amplitude discrimination. In Proceedings of the 13th International Conference on Digital Audio Effects (DAFx 2010). Institute of Electronic Music and Acoustics. 2010. p. 221-224

Author

Sofianos, Stratis ; Ariyaeeinia, Aladdin ; Polfreman, Richard. / Singing voice separation based on non-vocal independent component subtraction and amplitude discrimination. In: Proceedings of the 13th International Conference on Digital Audio Effects (DAFx 2010). Institute of Electronic Music and Acoustics, 2010. pp. 221-224

Bibtex

@inproceedings{9b20e254a4854fde8a0b66b7134ed174,
title = "Singing voice separation based on non-vocal independent component subtraction and amplitude discrimination",
abstract = "Many applications of Music Information Retrieval can benefit from effective isolation of the music sources. Earlier work by the authors led to the development of a system that is based on Azimuth Discrimination and Resynthesis (ADRess) and can extract the singing voice from reverberant stereophonic mixtures. We propose an extension to our previous method that is not based on ADRess and exploits both channels of the stereo mix more effectively. For the evaluation of the system we use a dataset that contains songs convolved during mastering as well as the mixing process (i.e. “real-world” conditions). The metrics for objective evaluation are based on bss_eval.",
author = "Stratis Sofianos and Aladdin Ariyaeeinia and Richard Polfreman",
note = "Copyright Institute of Electronic Music and Acoustics",
year = "2010",
language = "English",
isbn = "978-3-2000-1940-9",
pages = "221--224",
booktitle = "In: Proceedings of the 13th International Conference on Digital Audio Effects, (DAFx 2010)",
publisher = "Institute of Electronic Music and Acoustics",

}

RIS

TY - GEN

T1 - Singing voice separation based on non-vocal independent component subtraction and amplitude discrimination

AU - Sofianos, Stratis

AU - Ariyaeeinia, Aladdin

AU - Polfreman, Richard

N1 - Copyright Institute of Electronic Music and Acoustics

PY - 2010

Y1 - 2010

N2 - Many applications of Music Information Retrieval can benefit from effective isolation of the music sources. Earlier work by the authors led to the development of a system that is based on Azimuth Discrimination and Resynthesis (ADRess) and can extract the singing voice from reverberant stereophonic mixtures. We propose an extension to our previous method that is not based on ADRess and exploits both channels of the stereo mix more effectively. For the evaluation of the system we use a dataset that contains songs convolved during mastering as well as the mixing process (i.e. “real-world” conditions). The metrics for objective evaluation are based on bss_eval.

AB - Many applications of Music Information Retrieval can benefit from effective isolation of the music sources. Earlier work by the authors led to the development of a system that is based on Azimuth Discrimination and Resynthesis (ADRess) and can extract the singing voice from reverberant stereophonic mixtures. We propose an extension to our previous method that is not based on ADRess and exploits both channels of the stereo mix more effectively. For the evaluation of the system we use a dataset that contains songs convolved during mastering as well as the mixing process (i.e. “real-world” conditions). The metrics for objective evaluation are based on bss_eval.

M3 - Conference contribution

SN - 978-3-2000-1940-9

SP - 221

EP - 224

BT - Proceedings of the 13th International Conference on Digital Audio Effects (DAFx 2010)

PB - Institute of Electronic Music and Acoustics

ER -
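
The abstract only outlines the approach (independent component analysis across the two stereo channels, subtraction of the non-vocal component, then amplitude discrimination) and notes that objective evaluation is based on bss_eval. As a rough illustration of that general idea, and not of the authors' published algorithm, the Python sketch below uses scikit-learn's FastICA and mir_eval's bss_eval implementation; the file names, the component-selection heuristic, and the amplitude threshold are all assumptions made purely for the example.

import numpy as np
from scipy.io import wavfile
from sklearn.decomposition import FastICA
import mir_eval

# Load a stereo mixture; "mixture_stereo.wav" is a hypothetical file name.
rate, mix = wavfile.read("mixture_stereo.wav")        # mix: (n_samples, 2)
mix = mix.astype(np.float64)

# Treat the two stereo channels as observed mixtures and estimate two
# independent components with FastICA.
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(mix)                   # (n_samples, 2)

# Placeholder heuristic (not from the paper): assume the higher-energy
# component is the non-vocal, accompaniment-dominated one.
non_vocal = components[:, np.argmax(np.sum(components ** 2, axis=0))]

# Subtract the non-vocal component from each channel, using a per-channel
# least-squares gain, to leave a rough vocal residual.
residual = np.empty_like(mix)
for ch in range(2):
    gain = np.dot(mix[:, ch], non_vocal) / np.dot(non_vocal, non_vocal)
    residual[:, ch] = mix[:, ch] - gain * non_vocal

# Crude amplitude discrimination: keep only samples where the residual is
# strong relative to the mixture (the 0.1 threshold is an arbitrary guess).
vocal_est = np.where(np.abs(residual) > 0.1 * np.abs(mix), residual, 0.0)
vocal_est = vocal_est.mean(axis=1)                    # mono vocal estimate

# Objective scoring with bss_eval (via mir_eval), as the abstract mentions,
# against a hypothetical ground-truth vocal track.
_, vocal_ref = wavfile.read("vocal_reference.wav")
vocal_ref = vocal_ref.astype(np.float64)
if vocal_ref.ndim == 2:
    vocal_ref = vocal_ref.mean(axis=1)
n = min(len(vocal_ref), len(vocal_est))
sdr, sir, sar, _ = mir_eval.separation.bss_eval_sources(
    vocal_ref[None, :n], vocal_est[None, :n])
print("SDR: %.2f dB" % sdr[0])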