• Title/Abstract/Keyword: facial expressions

321 search results

Facial Actions 과 애니메이션 원리에 기반한 로봇의 얼굴 제스처 생성 (Generation of Robot Facial Gestures based on Facial Actions and Animation Principles)

  • 박정우;김우현;이원형;이희승;정명진
    • 제어로봇시스템학회논문지 / Vol. 20 No. 5 / pp.495-502 / 2014
  • This paper proposes a method to generate diverse robot facial expressions and facial gestures to support long-term HRI. First, nine basic dynamics for robot facial expressions are determined based on the dynamics of human facial expressions and the principles of animation, so that even identical emotions can be expressed with diverse dynamics. In the second stage, facial actions are added to express facial gestures, such as sniffling or wailing loudly for sadness and laughing aloud or smiling for happiness. To evaluate the effectiveness of our approach, we compared the facial expressions of the developed robot with and without the proposed method. The survey results showed that the proposed method helps robots generate more realistic facial expressions.
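
A minimal sketch of the two-stage idea, assuming each expression is an intensity curve over time shaped by animation-style easing; the curve shapes, the sniffle action, and all parameters are illustrative assumptions, not the paper's actual nine dynamics:

```python
import numpy as np

def expression_curve(t, onset, offset, ease=2.0):
    """Intensity ramps up until `onset`, holds, then decays after `offset`.
    `ease` loosely mimics the slow-in/slow-out animation principle."""
    rise = np.clip(t / onset, 0.0, 1.0) ** ease
    fall = np.clip((t[-1] - t) / (t[-1] - offset), 0.0, 1.0) ** ease
    return np.minimum(rise, fall)

t = np.linspace(0.0, 3.0, 300)                              # 3 s of animation
sad = expression_curve(t, onset=1.2, offset=2.0)            # slow onset for sadness
sniffle = 0.3 * np.abs(np.sin(8 * np.pi * t)) * (t > 1.0)   # periodic facial action
gesture = np.clip(sad + sniffle, 0.0, 1.0)                  # stage 2: overlay the action
```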

근육 모델 기반 3D 얼굴 표정 생성 시스템 설계 및 구현 (A Design and Implementation of 3D Facial Expressions Production System based on Muscle Model)

  • 이혜정;정석태
    • 한국정보통신학회논문지 / Vol. 16 No. 5 / pp.932-938 / 2012
  • Facial expressions carry important meaning in mutual communication: they are a unique means of conveying the countless inner emotions of human beings, beyond the many languages humans use. This paper proposes a muscle-model-based 3D facial expression production system for generating facial expressions easily and naturally. To produce expressions on a 3D face model, muscles required for natural expression generation are added to Waters' muscle model. Centering on the feature elements that are key to expression generation, such as the eyebrows, eyes, nose, mouth, and cheeks, facial muscles and muscle vectors are used to group anatomically connected facial muscle movements, which simplifies and reconstructs the AUs (action units), the basic units of facial expression change, so that facial expressions can be generated easily and naturally.
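
The paper builds on Waters' linear muscle model; a rough sketch of such a muscle is below. The cone test and cosine falloff follow the commonly cited form of Waters (1987), but the constants and simplifications here are assumptions, not the authors' implementation:

```python
import numpy as np

def apply_linear_muscle(verts, head, tail, contraction, Rs, Rf, fan_cos=0.5):
    """Pull mesh vertices toward the muscle head (bone attachment).
    verts: (N, 3) array; head/tail: attachment and insertion points."""
    axis = (tail - head) / np.linalg.norm(tail - head)
    out = verts.copy()
    for i, p in enumerate(verts):
        d = p - head
        r = np.linalg.norm(d)
        if r == 0.0 or r > Rf:
            continue                               # outside the zone of influence
        cos_a = np.dot(d / r, axis)
        if cos_a < fan_cos:
            continue                               # outside the influence cone
        radial = np.cos((r - Rs) / (Rf - Rs) * np.pi / 2) if r > Rs else 1.0
        out[i] = p - contraction * cos_a * radial * (d / r)  # move toward head
    return out

verts = np.random.rand(100, 3)                     # placeholder face mesh
pulled = apply_linear_muscle(verts, head=np.zeros(3), tail=np.array([1.0, 0.0, 0.0]),
                             contraction=0.2, Rs=0.4, Rf=1.0)
```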

Japanese Political Interviews: The Integration of Conversation Analysis and Facial Expression Analysis

  • Kinoshita, Ken
    • Asian Journal for Public Opinion Research / Vol. 8 No. 3 / pp.180-196 / 2020
  • This paper considers Japanese political interviews, integrating conversation analysis and facial expression analysis. The behaviors of political leaders are disclosed by analyzing questions and responses using the turn-taking system of conversation analysis. In addition, audiences who cannot read the psychology of political leaders from verbal expressions alone can understand it through analysis of their facial expressions. Integrated analysis promotes understanding of the types of facial and verbal expressions politicians use and their effect on public opinion. Politicians have unique techniques to convince people; if people do not know these techniques and the ways expressions are used, they will become confused, and politics may fall into populism as a result. To avoid this, a complete understanding of verbal and non-verbal behaviors is needed. This paper presents two analyses. The first is a qualitative analysis of Prime Minister Shinzō Abe, showing that discrepancies occur between his words and his happy facial expressions; in particular, Abe shows disgusted facial expressions when faced with the same question from an interviewer. The second is a quantitative multiple regression analysis in which the dependent variables are six facial expressions (happy, sad, angry, surprised, scared, and disgusted) and the independent variable is whether the politician faces a threat to face. Political interviews that directly inform audiences are used as a tool by politicians and play an important role in molding public opinion: the audience watches political interviews, these mold support for the party, and watching them contributes to the decision to support that party in a coming election.
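
A hedged sketch of the second, quantitative analysis: one regression per expression score, with a binary face-threat indicator as the predictor. The file and column names are illustrative assumptions, not the paper's data:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("interview_frames.csv")   # hypothetical coded interview data
expressions = ["happy", "sad", "angry", "surprised", "scared", "disgusted"]

for expr in expressions:
    fit = smf.ols(f"{expr} ~ face_threat", data=df).fit()   # one model per emotion
    print(expr, fit.params["face_threat"], fit.pvalues["face_threat"])
```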

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok;Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14 No. 8 / pp.3473-3487 / 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two kinds of example models, key visemes and key expressions, are used for lip synchronization and facial expressions, respectively. The key visemes represent the lip shapes of phonemes such as vowels and consonants, while the key expressions represent the basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on the phonetic transcript, a speech animation sequence is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by scattered data interpolation. During synthesis, an importance-based scheme combines lip synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation, with high accuracy (over 90%) in speech recognition.
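
A sketch of the synthesis loop, assuming each key viseme or expression is a vector of blendshape weights. The toy vectors, the linear ramp between visemes, and the constant blend factor are illustrative stand-ins for the paper's scattered-data interpolation and importance-based scheme:

```python
import numpy as np

key_visemes = {"AA": np.array([0.8, 0.1, 0.0]),   # toy blendshape vectors
               "M": np.array([0.0, 0.9, 0.0]),
               "rest": np.zeros(3)}
key_expressions = {"neutral": np.zeros(3), "happy": np.array([0.2, 0.0, 0.9])}

def synthesize(transcript, emotion, alpha=0.5, fps=120):
    """transcript: list of (phoneme, duration_s) from a TTS phonetic transcript."""
    frames, prev = [], key_visemes["rest"]
    for phoneme, dur in transcript:
        target = key_visemes[phoneme]
        n = max(1, int(dur * fps))
        for i in range(n):                         # ramp between key visemes
            lips = prev + (target - prev) * (i + 1) / n
            # constant alpha stands in for the importance-based combination
            frames.append((1 - alpha) * lips + alpha * key_expressions[emotion])
        prev = target
    return np.array(frames)

anim = synthesize([("M", 0.1), ("AA", 0.2), ("rest", 0.1)], "happy")
```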

Understanding the Importance of Presenting Facial Expressions of an Avatar in Virtual Reality

  • Kim, Kyulee;Joh, Hwayeon;Kim, Yeojin;Park, Sohyeon;Oh, Uran
    • International journal of advanced smart convergence / Vol. 11 No. 4 / pp.120-128 / 2022
  • While online social interactions have become more prevalent with the growing popularity of Metaverse platforms, the effects of facial expressions in virtual reality (VR), which are known to play a key role in social contexts, have been little studied. To understand the importance of presenting the facial expressions of a virtual avatar under different contexts, we conducted a user study with 24 participants who were asked to have a conversation and play a charades game with an avatar with and without facial expressions. The results show that participants tend to gaze at the face region for the majority of the time when having a conversation, or when trying to guess emotion-related keywords while playing charades, regardless of the presence of facial expressions. Yet, we confirmed that participants prefer to see facial expressions in virtual reality, as in real-world scenarios, because they help them better understand the context and have more immersive and focused experiences.

The Effects of Chatbot Anthropomorphism and Self-disclosure on Mobile Fashion Consumers' Intention to Use Chatbot Services

  • Kim, Minji;Park, Jiyeon;Lee, MiYoung
    • 패션비즈니스 / Vol. 25 No. 6 / pp.119-130 / 2021
  • This study investigated the effects of a chatbot's level of anthropomorphism (closeness to human form) and its self-disclosure (conveying emotion through facial expressions and chat messages) on users' intention to accept the service. A 2 (anthropomorphism: high vs. low) × 2 (self-disclosure through facial expressions: high vs. low) × 2 (self-disclosure through conversation: high vs. low) between-subjects factorial design was employed. An online survey was conducted, and a total of 234 questionnaires were used in the analysis. The results showed that consumers' intention to use the chatbot service was higher when emotions were disclosed through facial expressions than when fewer facial expressions were disclosed. There was a statistically significant interaction effect, indicating that the relationship between the chatbot's self-disclosure through facial expressions and consumers' intention to use the service differs with the extent of anthropomorphism. For robot chatbots with a low level of anthropomorphism, intention to use the service did not differ with the level of self-disclosure through facial expressions. When a human-like chatbot with a high level of anthropomorphism disclosed itself more through facial expressions, consumers' intention to use the service increased much more than when it disclosed fewer facial expressions. The findings suggest that chatbots' self-disclosure plays an important role in the formation of consumer perception.
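
A minimal sketch of testing the reported interaction with a three-way factorial ANOVA, assuming a tidy survey table with 0/1 condition codes; the file and variable names are illustrative, not the study's actual instrument:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("chatbot_survey.csv")   # hypothetical: 234 coded responses
model = ols("intention ~ C(anthro) * C(face_disclosure) * C(chat_disclosure)",
            data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # inspect the anthro x face_disclosure term
```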

근육기반의 특징모델을 이용한 얼굴표정인식에 관한 연구 (Recognition of Facial Expressions Using Muscle-based Feature Models)

  • 김동수;남기환;한준희;박호식;차영석;최현수;배철수;권오홍;나상동
    • 한국정보통신학회:학술대회논문집 / 한국해양정보통신학회 1999 Fall Conference / pp.416-419 / 1999
  • A muscle-based feature model is used to track facial features. The feature model consists of a small number of parameters and of deformations limited in range and direction, so the search space for each feature point can be constrained. In this paper, muscle contraction intensities are estimated for six categories of principal facial expressions. The contraction vectors are obtained from the deformations of the facial muscle model; a similarity is defined between these vectors and the vectors representing the principal expressions and is used to measure facial expressions.
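
A sketch of the measurement step: compare an estimated muscle-contraction vector against six representative expression vectors, here by cosine similarity. The random prototypes and the 18-muscle dimensionality are placeholders, not the paper's learned vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
prototypes = {e: rng.random(18) for e in                  # placeholder prototypes
              ["happy", "sad", "angry", "surprised", "scared", "disgusted"]}

def classify(contraction):
    """Return the expression whose prototype is most similar to the input."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(prototypes, key=lambda e: cos(contraction, prototypes[e]))

print(classify(rng.random(18)))
```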


Facial Expression Recognition with Fuzzy C-Means Clustering Algorithm and Neural Network Based on Gabor Wavelets

  • Youngsuk Shin;Chansup Chung;Lee, Yillbyung
    • 한국감성과학회:학술대회논문집 / Proceedings of the 2000 Spring Conference of KOSES and International Sensibility Ergonomics Symposium / pp.126-132 / 2000
  • This paper presents a facial expression recognition method based on Gabor wavelets that uses a fuzzy C-means (FCM) clustering algorithm and a neural network. Features of facial expressions are extracted in two steps. In the first step, the Gabor wavelet representation provides edge extraction of major face components using the average value of the image's 2-D Gabor wavelet coefficient histogram. In the next step, sparse features of facial expressions are extracted from the edge information using the FCM clustering algorithm. The results of facial expression recognition are compared with dimensional values of internal states derived from semantic ratings of words related to emotion. The dimensional model can recognize not only the six facial expressions related to Ekman's basic emotions, but also expressions of various internal states.
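
A hedged reconstruction of the two feature steps with off-the-shelf tools: skimage's Gabor filter for edge responses and scikit-fuzzy's cmeans for the clustering. The test image, frequency, and cluster count are illustrative, not the paper's settings:

```python
import numpy as np
import skfuzzy as fuzz
from skimage import data, filters

face = data.camera().astype(float)                 # placeholder grayscale image
real, imag = filters.gabor(face, frequency=0.2)    # 2-D Gabor wavelet response
edges = np.hypot(real, imag)                       # edge magnitude of components

# FCM expects (n_features, n_samples); cluster pixel responses into sparse features
pixels = edges.reshape(1, -1)
cntr, u, _, _, _, _, fpc = fuzz.cluster.cmeans(pixels, c=3, m=2.0,
                                               error=1e-4, maxiter=100)
sparse_labels = np.argmax(u, axis=0).reshape(edges.shape)
```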


LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어 (Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding)

  • 김성호
    • 한국콘텐츠학회논문지 / Vol. 7 No. 2 / pp.117-124 / 2007
  • This paper describes a methodology for real-time facial expression control and expression animation that reuses facial motion capture data. Its key elements are a representation of expression states that can define facial expressions, a method of distributing the expressions in a suitable space by applying the LLE algorithm to that representation, and a user-interface technique for generating expression animation and controlling expressions in real time within this space. The space is constructed from about 2,400 facial expression frames; as the animator freely navigates it, the expression frames located along the navigation path are selected in sequence to produce an animation or to control the expression. To distribute the roughly 2,400 expression frames in an intuitive space, the expression state must be represented from each frame; for this, a distance-matrix vector consisting of the distances between every pair of markers is used. The data are placed in the intuitive space by applying the LLE algorithm to the set of expression-state vectors and distributing them uniformly on a 2D plane. The animator then uses the interface to generate expression animations or control expressions in real time, and the results are evaluated.
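
A sketch of the projection step with standard libraries, following the paper's distance-matrix state representation; the random frames and the marker and neighbor counts are illustrative:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.manifold import LocallyLinearEmbedding

frames = np.random.rand(2400, 30, 3)               # placeholder mocap frames
states = np.array([pdist(f) for f in frames])      # all pairwise marker distances

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
space2d = lle.fit_transform(states)                # 2-D expression space to navigate
```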

모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어 (Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data)

  • 김성호
    • 한국멀티미디어학회논문지 / Vol. 7 No. 10 / pp.1478-1484 / 2004
  • This paper describes a technique for controlling the facial expressions of a 3D avatar in real time by letting the user select a sequence of expressions from a facial expression space. The system constructs the expression space from about 2,400 facial expression frames. The state representation of an expression is a distance matrix of the mutual distances between facial feature points, and the set of these distance matrices forms the expression space. The user controls the avatar's facial expression in real time by navigating this space; to support this, the space is visualized in two dimensions by PCA projection. To assess the system's effectiveness, users were asked to control the facial expressions of a 3D avatar with it, and this paper evaluates the results.
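
The same state representation projected with PCA instead of LLE; a minimal sketch with placeholder data, mirroring the paper's 2-D visualization step:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

frames = np.random.rand(2400, 30, 3)                 # placeholder feature points
states = np.array([pdist(f) for f in frames])        # distance-matrix state vectors
space2d = PCA(n_components=2).fit_transform(states)  # navigable 2-D expression space
```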
