Depression speaks: Automatic discrimination between depressed and non-depressed speakers based on nonverbal speech features

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This article proposes an automatic approach, based on nonverbal speech features, for discriminating between depressed and non-depressed speakers. The experiments were performed over one of the largest corpora collected for such a task in the literature (62 patients diagnosed with depression and 54 healthy control subjects), especially when it comes to data where the depressed speakers have been diagnosed as such by professional psychiatrists. The results show that the discrimination can be performed with an accuracy of over 75%, and the error analysis shows that the chances of correct classification do not change according to gender, the depression-related pathology diagnosed by the psychiatrists, or the length of pharmacological treatment (if any). Furthermore, for every depressed subject, the corpus includes a control subject matched for age, education level and gender. This ensures that the approach actually discriminates between depressed and non-depressed speakers and does not simply capture differences resulting from other factors.
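The abstract does not detail the classification pipeline, but a minimal sketch of this kind of speaker-level discrimination task might look as follows. The features here are synthetic, and the choice of a linear SVM with leave-one-speaker-out cross-validation is an illustrative assumption, not the authors' actual method; only the corpus sizes (62 depressed, 54 control) come from the paper.

```python
# Hypothetical sketch of depressed vs. non-depressed speaker classification.
# Synthetic feature vectors stand in for real nonverbal speech features;
# the SVM and evaluation protocol are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)

n_depressed, n_control, n_feats = 62, 54, 10  # corpus sizes from the paper
X = np.vstack([
    rng.normal(0.5, 1.0, size=(n_depressed, n_feats)),   # synthetic "depressed"
    rng.normal(-0.5, 1.0, size=(n_control, n_feats)),    # synthetic "control"
])
y = np.array([1] * n_depressed + [0] * n_control)
groups = np.arange(len(y))  # one group per speaker -> leave-one-speaker-out

correct = 0
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    scaler = StandardScaler().fit(X[train_idx])          # fit on training fold only
    clf = SVC(kernel="linear").fit(scaler.transform(X[train_idx]), y[train_idx])
    pred = clf.predict(scaler.transform(X[test_idx]))
    correct += int(pred[0] == y[test_idx][0])

accuracy = correct / len(y)
print(f"leave-one-speaker-out accuracy: {accuracy:.2f}")
```

Leave-one-speaker-out evaluation ensures the classifier is always tested on a speaker it has never seen, which matters because the goal is speaker-independent detection of depression rather than recognition of individual voices.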
Original language: English
Title of host publication: International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
Publication status: Published - 3 Sept 2018
