• Title/Summary/Keyword: facial expressions


Generation of Robot Facial Gestures based on Facial Actions and Animation Principles

  • Park, Jeong Woo; Kim, Woo Hyun; Lee, Won Hyong; Lee, Hui Sung; Chung, Myung Jin
    • Journal of Institute of Control, Robotics and Systems, v.20 no.5, pp.495-502, 2014
  • This paper proposes a method for generating diverse robot facial expressions and facial gestures to support long-term HRI. First, nine basic dynamics for robot facial expressions are derived from the dynamics of human facial expressions and from animation principles, so that even identical emotions can be expressed in diverse ways. In the second stage, facial actions are added to express facial gestures such as sniffling or wailing loudly for sadness, and laughing aloud or smiling for happiness. To evaluate the effectiveness of the approach, the facial expressions of the developed robot were compared with and without the proposed method. Survey results showed that the proposed method helps the robot generate more realistic facial expressions.
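
The two-stage idea, a base emotion trajectory shaped by an animation-principle timing curve, with a facial action overlaid on top, can be pictured with a minimal sketch. This is an illustration, not the paper's implementation; the 3-DOF face and all numeric values are hypothetical.

```python
import numpy as np

def ease_in_out(t):
    """Slow-in/slow-out timing (smoothstep), one of the animation principles."""
    return 3 * t**2 - 2 * t**3

def expression_trajectory(target_pose, n_frames=60):
    """Move from the neutral pose to a target expression with eased timing."""
    t = np.linspace(0.0, 1.0, n_frames)
    return ease_in_out(t)[:, None] * target_pose

def add_facial_action(base, action_offsets, start, length):
    """Overlay a facial action (e.g., a sniffle) additively on a base
    expression, faded in and out with a sine window."""
    gesture = base.copy()
    end = min(start + length, len(gesture))
    window = np.sin(np.linspace(0.0, np.pi, end - start))[:, None]
    gesture[start:end] += window * action_offsets
    return gesture

# Hypothetical 3-DOF face (brow, eyelid, mouth corner) and offset values.
sadness = expression_trajectory(np.array([-0.6, -0.3, -0.8]))
gesture = add_facial_action(sadness, np.array([0.0, -0.4, 0.2]), 30, 15)
```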

A Design and Implementation of 3D Facial Expressions Production System based on Muscle Model

  • Lee, Hyae-Jung; Joung, Suck-Tae
    • Journal of the Korea Institute of Information and Communication Engineering, v.16 no.5, pp.932-938, 2012
  • Facial expression is significant in mutual communication: it can convey countless inner feelings better than the many languages humans use. This paper proposes a muscle-model-based 3D facial expression generation system for producing easy and natural facial expressions. Based on Waters' muscle model, it adds and uses the muscles needed to produce natural expressions. Among the many elements involved in producing expressions, it focuses on the core features of a face, such as the eyebrows, eyes, nose, mouth, and cheeks, and uses facial muscles and muscle vectors to group anatomically connected muscles. By simplifying and reconstructing AUs, the basic units of facial expression change, it generates easy and natural facial expressions.
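
As a rough illustration of how a Waters-style linear muscle deforms nearby mesh vertices, the sketch below pulls points toward a muscle attachment with a cosine falloff. It simplifies Waters' model (which also weights by angular distance from the muscle axis); the brow patch and muscle endpoints are made up for the example.

```python
import numpy as np

def apply_linear_muscle(verts, attach, insert, contraction, zone):
    """Simplified Waters-style linear muscle: vertices within the influence
    zone around the insertion point are pulled toward the attachment point,
    with a cosine falloff so the deformation blends out smoothly."""
    out = verts.copy()
    for i, v in enumerate(verts):
        d = np.linalg.norm(v - insert)
        if d < zone:
            w = np.cos(d / zone * np.pi / 2)   # 1 at the insertion, 0 at edge
            pull = (attach - v) / (np.linalg.norm(attach - v) + 1e-9)
            out[i] = v + contraction * w * pull
    return out

# Hypothetical brow vertices and a frontalis-like muscle raising them.
brow = np.array([[0.0, 1.0, 0.1], [0.2, 1.0, 0.1], [-0.2, 1.0, 0.1]])
raised = apply_linear_muscle(brow,
                             attach=np.array([0.0, 1.6, 0.0]),
                             insert=np.array([0.0, 1.0, 0.1]),
                             contraction=0.3, zone=0.5)
```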

Japanese Political Interviews: The Integration of Conversation Analysis and Facial Expression Analysis

  • Kinoshita, Ken
    • Asian Journal for Public Opinion Research, v.8 no.3, pp.180-196, 2020
  • This paper considers Japanese political interviews in order to integrate conversation analysis and facial expression analysis. The behaviors of political leaders are disclosed by analyzing questions and responses using the turn-taking system of conversation analysis. Additionally, audiences who cannot fully understand verbal expressions alone can understand the psychology of political leaders by analyzing their facial expressions. Integrated analyses promote understanding of the types of facial and verbal expressions politicians use and their effect on public opinion. Politicians have unique techniques to convince people; if people do not know these techniques and ways of expression, they become confused, and politics may fall into populism as a result. To avoid this, a complete understanding of verbal and non-verbal behaviors is needed. This paper presents two analyses. The first is a qualitative analysis of Prime Minister Shinzō Abe, showing that discrepancies occur between his words and his happy facial expressions; it also indicates that Abe shows disgusted facial expressions when faced with the same question from an interviewer. The second is a quantitative multiple regression analysis in which the dependent variables are six facial expressions (happy, sad, angry, surprised, scared, and disgusted) and the independent variable is whether the politician faces a threat to face. Political interviews that directly inform audiences are used as a tool by politicians and play an important role in molding public opinion. The audience watches political interviews, and these mold support for the party, contributing to the decision of which party to support in a coming election.
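
The quantitative part can be pictured as six separate regressions, one per facial expression score, on a face-threat indicator. A toy sketch with placeholder random data (the actual study uses measured facial expression scores, which are not reproduced here):

```python
import numpy as np

# Placeholder data: stand-ins for the study's variables, not its dataset.
rng = np.random.default_rng(0)
n = 200
face_threat = rng.integers(0, 2, n)            # 1 = face-threatening moment
X = np.column_stack([np.ones(n), face_threat]) # intercept + predictor

for name in ["happy", "sad", "angry", "surprised", "scared", "disgusted"]:
    y = rng.random(n)                          # stand-in expression intensity
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"{name}: intercept={beta[0]:.3f}, threat effect={beta[1]:.3f}")
```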

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok; Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.8, pp.3473-3487, 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two kinds of example models, key visemes and key expressions, are used for lip synchronization and facial expressions, respectively. The key visemes represent the lip shapes of phonemes such as vowels and consonants, while the key expressions represent the basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on the phonetic transcript, a sequence of speech animation is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During synthesis, an importance-based scheme combines lip synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation, with high accuracy (over 90%) in speech recognition.
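
A minimal sketch of the three ingredients, viseme interpolation along the transcript, weighted blending of key expressions, and an importance-based merge, is given below. The toy poses, times, and weights are hypothetical, and the paper's scattered-data interpolation is reduced to fixed blend weights.

```python
import numpy as np

def lip_pose(key_visemes, key_times, t):
    """Linearly interpolate the two key visemes bracketing time t
    (t past the last key time extrapolates; clamp upstream if needed)."""
    i = int(np.clip(np.searchsorted(key_times, t), 1, len(key_times) - 1))
    a = (t - key_times[i - 1]) / (key_times[i] - key_times[i - 1])
    return (1 - a) * key_visemes[i - 1] + a * key_visemes[i]

def blend_expressions(key_exprs, weights):
    """Blend key expression poses; in the paper the weights come from
    scattered-data interpolation over an emotion parameter vector."""
    w = np.asarray(weights, dtype=float)
    return (w / w.sum()) @ key_exprs

def combine(lips, expr, importance):
    """Importance-based merge: components near the mouth (importance ~ 1)
    follow the viseme track, the rest follow the emotional expression."""
    return importance * lips + (1 - importance) * expr

# Toy 2-component "poses"; values and dimensions are hypothetical.
visemes = np.array([[0.0, 0.0], [1.0, 0.2], [0.4, 0.9]])
times = np.array([0.0, 0.1, 0.25])            # from the phonetic transcript
expr = blend_expressions(np.array([[0.0, 0.0], [0.3, 0.8]]), [0.2, 0.8])
frame = combine(lip_pose(visemes, times, 0.15), expr, np.array([1.0, 0.3]))
```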

Understanding the Importance of Presenting Facial Expressions of an Avatar in Virtual Reality

  • Kim, Kyulee; Joh, Hwayeon; Kim, Yeojin; Park, Sohyeon; Oh, Uran
    • International Journal of Advanced Smart Convergence, v.11 no.4, pp.120-128, 2022
  • While online social interactions have become more prevalent with the increased popularity of Metaverse platforms, the effects of facial expressions in virtual reality (VR), which are known to play a key role in social contexts, have received little study. To understand the importance of presenting the facial expressions of a virtual avatar under different contexts, we conducted a user study with 24 participants who were asked to have a conversation and play a charades game with an avatar with and without facial expressions. The results show that participants tend to gaze at the face region for the majority of the time when having a conversation, or when trying to guess emotion-related keywords while playing charades, regardless of the presence of facial expressions. Yet we confirmed that participants prefer to see facial expressions in virtual reality, as in real-world scenarios, since these help them better understand the context and have more immersive and focused experiences.

The Effects of Chatbot Anthropomorphism and Self-disclosure on Mobile Fashion Consumers' Intention to Use Chatbot Services

  • Kim, Minji; Park, Jiyeon; Lee, MiYoung
    • Journal of Fashion Business, v.25 no.6, pp.119-130, 2021
  • This study investigated the effects of a chatbot's level of anthropomorphism (closeness to the human form) and its self-disclosure (conveying emotion through facial expressions and chat messages) on users' intention to accept the service. A 2 (anthropomorphism: high vs. low) × 2 (self-disclosure through facial expressions: high vs. low) × 2 (self-disclosure through conversation: high vs. low) between-subject factorial design was employed. An online survey was conducted, and a total of 234 questionnaires were used in the analysis. The results showed that consumers intended to use the chatbot service more when the chatbot disclosed emotions through facial expressions than when it disclosed fewer facial expressions. There was a statistically significant interaction effect, indicating that the relationship between the chatbot's self-disclosure through facial expression and consumers' intention to use the service differs depending on the degree of anthropomorphism. For "robot chatbots" with low anthropomorphism, there was no difference in intention to use the service by level of self-disclosure through facial expression. When the "human-like chatbot" with high anthropomorphism disclosed itself more through facial expressions, consumers' intention to use the service increased much more than when it disclosed fewer facial expressions. The findings suggest that a chatbot's self-disclosure plays an important role in the formation of consumer perception.

Recognition of Facial Expressions Using Muscle-Based Feature Models

  • 김동수; 남기환; 한준희; 박호식; 차영석; 최현수; 배철수; 권오홍; 나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 1999.11a, pp.416-419, 1999
  • We present a technique for recognizing facial expressions from image sequences. The technique uses muscle-based feature models to track facial features. Since the feature models are constructed with a small number of parameters and are deformable only within a limited range and directions, the search space for each feature can be limited. The technique estimates muscular contraction degrees to classify six principal facial expressions. The contraction vectors are obtained from the deformations of the facial muscle models. Similarities are defined between those vectors and representative vectors of the principal expressions and are used to determine the facial expression.
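
The final classification step, comparing an observed contraction vector against representative vectors, can be sketched with a simple similarity measure. Cosine similarity, the muscle-group dimensions, and all values here are assumptions for illustration; the paper defines its own similarity.

```python
import numpy as np

def classify_expression(contraction, prototypes):
    """Return the principal expression whose representative contraction
    vector is closest (cosine similarity) to the observed one."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(prototypes, key=lambda name: cos(contraction, prototypes[name]))

# Hypothetical representative vectors over four muscle groups.
prototypes = {
    "happiness": np.array([0.8, 0.1, 0.6, 0.0]),
    "sadness":   np.array([0.1, 0.7, 0.0, 0.5]),
    "surprise":  np.array([0.6, 0.0, 0.1, 0.9]),
}
print(classify_expression(np.array([0.7, 0.2, 0.5, 0.1]), prototypes))
```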


Facial Expression Recognition with Fuzzy C-Means Clustering Algorithm and Neural Network Based on Gabor Wavelets

  • Shin, Youngsuk; Chung, Chansup; Lee, Yillbyung
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2000.04a, pp.126-132, 2000
  • This paper presents facial expression recognition based on Gabor wavelets, using a fuzzy C-means (FCM) clustering algorithm and a neural network. Features of facial expressions are extracted in two steps. In the first step, a Gabor wavelet representation provides edge extraction of major face components, using the average value of the image's 2-D Gabor wavelet coefficient histogram. In the next step, we extract sparse features of facial expressions from the extracted edge information using the FCM clustering algorithm. The facial expression recognition results are compared with dimensional values of internal states derived from semantic ratings of emotion-related words. The dimensional model can recognize not only the six facial expressions related to Ekman's basic emotions but also expressions of various internal states.
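
A sketch of the two feature-extraction steps, assuming scikit-image for the Gabor filtering and a minimal hand-rolled fuzzy C-means (the paper's exact filter bank, thresholds, and the neural-network classifier are omitted):

```python
import numpy as np
from skimage.filters import gabor

def gabor_edges(image, freqs=(0.1, 0.2),
                thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Average Gabor magnitude responses over a small filter bank;
    thresholding by the mean coefficient gives a rough edge map."""
    mags = [np.hypot(*gabor(image, frequency=f, theta=th))
            for f in freqs for th in thetas]
    mag = np.mean(mags, axis=0)
    return mag > mag.mean()

def fuzzy_cmeans(X, c, m=2.0, iters=100):
    """Minimal fuzzy C-means: soft memberships U and cluster centers V."""
    rng = np.random.default_rng(0)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]          # update centers
        D = np.linalg.norm(X[:, None] - V[None], axis=2) + 1e-9
        U = 1.0 / (D ** (2 / (m - 1)))                    # membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, V

img = np.random.rand(64, 64)                  # stand-in for a face image
pts = np.argwhere(gabor_edges(img))[:200].astype(float)
U, V = fuzzy_cmeans(pts, c=5)                 # sparse features from edge points
```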


Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association, v.7 no.2, pp.117-124, 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we define a representation of facial expression states based on the facial motion data. By distributing the facial expressions in an intuitive space using the LLE algorithm, animations can be created and expressions controlled in real time from this expression space through a user interface. Approximately 2400 facial expression frames are used to generate the expression space. By navigating the expression space projected onto a 2-dimensional plane, animators can create animations or control the expressions of 3-dimensional avatars in real time by selecting a series of expressions. To distribute the roughly 2400 expression frames in an intuitive space, the state of each expression must be represented; for this, we use a distance matrix that holds the distances between pairs of feature points on the face. The LLE algorithm is then used to visualize these data on a 2-dimensional plane. Animators control facial expressions and create animations through the system's user interface. This paper evaluates the results of the experiment.
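
The pipeline, a pairwise-distance state per frame followed by LLE down to a 2-D plane the animator navigates, can be sketched as below. Random stand-in data replaces the motion capture; scikit-learn's LocallyLinearEmbedding is assumed, and the neighbor count is a guess.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def frame_features(landmarks):
    """Represent one frame by the pairwise distances between feature
    points, as in the paper's distance-matrix state description."""
    dists = np.linalg.norm(landmarks[:, None, :] - landmarks[None, :, :],
                           axis=2)
    iu = np.triu_indices(len(landmarks), k=1)
    return dists[iu]                          # flattened upper triangle

# Hypothetical capture: 2400 frames of 20 tracked facial points in 3-D.
rng = np.random.default_rng(0)
frames = rng.random((2400, 20, 3))
X = np.array([frame_features(f) for f in frames])

lle = LocallyLinearEmbedding(n_components=2, n_neighbors=12)
space2d = lle.fit_transform(X)                # 2-D expression space for the UI
```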

Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data

  • Kim, Sung-Ho
    • Journal of Korea Multimedia Society, v.7 no.10, pp.1478-1484, 2004
  • This paper presents a method that controls the facial expression of a 3D avatar in real time by having the user select a sequence of facial expressions in an expression space. The space is created from about 2400 frames of facial expressions. To represent the state of each expression, we use a distance matrix that holds the distances between pairs of feature points on the face; the set of distance matrices forms the space of expressions. The facial expression of the 3D avatar is controlled in real time as the user navigates the space. To support this, we visualize the expression space in 2D using a Principal Component Analysis (PCA) projection. To see how effective the system is, we had users control the facial expressions of a 3D avatar with it, and this paper evaluates the results.
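
This earlier system is the PCA counterpart of the LLE approach above: the same distance-based frame features, projected linearly instead. A self-contained sketch with stand-in data (190 = C(20, 2) pairwise distances per frame is an assumed feature size):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((2400, 190))       # one distance vector per frame (stand-in)
plane = PCA(n_components=2).fit_transform(X)  # 2-D plane the user navigates
```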
