Depression Detection in Naturalistic Environmental Conditions


T. R. Dineshkumar
U. Savitha
T. Haritha Vellam
R. Krishika
A. Juneha Jabeen
Aravind Ramesh


Keywords: depression, naturalistic environments


In light of depression's massive and growing burden on modern society, researchers are investigating ways to detect it early using automated, scalable, and non-invasive methods. In naturalistic environments, however, speech-based methods are still needed to capture articulatory information effectively. This article presents a multimodal depression prediction model that learns intermodality and intramodality affinities by integrating information from the audio, video, and text modalities. A multilevel attention mechanism selects the most important components of each modality for decision-making, reinforcing the overall learning. Our study builds various regression models from audio, video, and text inputs, together with landmark-duration and historic n-gram features; on the DAIC-WOZ and SH2 datasets, these features performed well both separately and in combination.
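The modality-level fusion described above can be illustrated with a minimal sketch: each modality (audio, video, text) contributes a feature vector, attention scores are normalized with a softmax, and the fused representation is the attention-weighted sum. This is only a toy NumPy illustration under assumed shapes and a stand-in scoring projection, not the authors' implementation.

```python
import numpy as np


def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()


def attention_fuse(modality_feats, scoring_vecs):
    """Fuse per-modality feature vectors with modality-level attention.

    modality_feats: dict name -> 1-D feature vector (all length d)
    scoring_vecs:   dict name -> 1-D vector standing in for a learned
                    attention projection (hypothetical, for illustration)
    Returns (fused vector of length d, dict of attention weights).
    """
    names = sorted(modality_feats)
    # One scalar relevance score per modality.
    scores = np.array([modality_feats[n] @ scoring_vecs[n] for n in names])
    alpha = softmax(scores)  # attention weights sum to 1
    fused = sum(a * modality_feats[n] for a, n in zip(alpha, names))
    return fused, dict(zip(names, alpha))


# Toy example: 4-dimensional embeddings for the three modalities.
rng = np.random.default_rng(0)
feats = {m: rng.normal(size=4) for m in ("audio", "video", "text")}
scoring = {m: rng.normal(size=4) for m in ("audio", "video", "text")}
fused, alpha = attention_fuse(feats, scoring)
```

In the paper's multilevel setting, the same weighting idea would also apply within each modality (intramodality attention) before this cross-modality step; the sketch shows only the final fusion.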


