Emotion Recognition based on Tracking Facial Keypoints

  • Lee, Yong-Hwan (Dept. of Digital Contents, Wonkwang University)
  • Kim, Heung-Jun (Dept. of Computer Science and Engineering, Gyeongnam National University of Science and Technology)
  • Received: 2019.03.19
  • Accepted: 2019.03.25
  • Published: 2019.03.31

Abstract

Understanding and classifying human emotion are important tasks in human-machine communication systems. This paper proposes a novel emotion recognition method that understands and classifies human emotion by extracting facial keypoints with an Active Appearance Model (AAM) and applying a proposed classification model to the facial features. The appearance model captures the variation in facial expression, which the proposed classification model evaluates as the user's expression changes. The proposed method classifies four basic emotions (normal, happy, sad and angry). To evaluate its performance, we measure the success ratio on common datasets, achieving a best accuracy of 93% and an average of 82.2% in facial emotion recognition. The results show that the proposed method performs well on emotion recognition compared to existing schemes.
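The pipeline described above — extract facial keypoints, derive expression features from them, then classify into the four basic emotions — can be sketched as follows. This is a minimal illustration, not the paper's method: the landmark layout, the geometric features, and the per-emotion centroids are all hypothetical stand-ins for an AAM fit and a trained classification model.

```python
import math

# Hypothetical 2D keypoint layout (image coordinates, y grows downward),
# standing in for landmarks an AAM fit would produce.
EMOTIONS = ("normal", "happy", "sad", "angry")

def geometric_features(lm):
    """Map a dict of 2D keypoints to a small, scale-normalized feature vector."""
    # Inter-ocular distance normalizes out face size.
    scale = math.dist(lm["eye_l"], lm["eye_r"])
    # How far the mouth is open.
    mouth_open = math.dist(lm["mouth_top"], lm["mouth_bottom"]) / scale
    # Positive when the mouth corners sit above the mouth center (a smile).
    center_y = (lm["mouth_top"][1] + lm["mouth_bottom"][1]) / 2
    corner_lift = (center_y - (lm["mouth_left"][1] + lm["mouth_right"][1]) / 2) / scale
    # Gap between inner brow points; shrinks when the brows are furrowed.
    brow_gap = math.dist(lm["brow_inner_l"], lm["brow_inner_r"]) / scale
    return (mouth_open, corner_lift, brow_gap)

# Illustrative per-emotion feature centroids; in practice these would be
# learned from labeled training data.
CENTROIDS = {
    "normal": (0.15, 0.00, 0.45),
    "happy":  (0.30, 0.15, 0.45),
    "sad":    (0.10, -0.12, 0.40),
    "angry":  (0.20, -0.05, 0.25),
}

def classify(lm):
    """Nearest-centroid classification over the geometric features."""
    f = geometric_features(lm)
    return min(CENTROIDS, key=lambda e: math.dist(f, CENTROIDS[e]))

# Example: a face with raised mouth corners and an open mouth.
smile = {
    "eye_l": (30, 40), "eye_r": (70, 40),
    "brow_inner_l": (42, 30), "brow_inner_r": (58, 30),
    "mouth_left": (35, 70), "mouth_right": (65, 70),
    "mouth_top": (50, 68), "mouth_bottom": (50, 80),
}
print(classify(smile))  # -> happy
```

A real system would replace the hand-picked centroids with a model fitted on datasets such as CK+, and the eight keypoints with the denser landmark set an AAM tracks.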

Keywords
