• Title/Summary/Keyword: Text to Motion

Design and Implementation of the Primitive Motion API for Kinetic Typography (키네틱 타이포그래피를 위한 기본모션 API 설계 및 개발)

  • Cho, YoonAh;Woo, Sung-Ho;Lim, Soon-Bum
    • Journal of Korea Multimedia Society / v.18 no.6 / pp.763-771 / 2015
  • Kinetic typography animates static text and enables the delivery of opinion and emotion, but adding motion to existing static text currently requires professional software or a complex coding process. In this paper, we propose a primitive motion API that makes it easy to configure kinetic typography by adding motion to static text. To this end, we analyzed the movement of text, defined the underlying levels of movement, and designed the primitive motion API to express kinetic typography promptly. Furthermore, we verified the performance of the primitive motion API through usability testing. Using the primitive motion API to implement kinetic typography can replace the tedious coding process and the use of professional software, enabling anyone to apply kinetic typography in a variety of applications.
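
The abstract describes the API only at a high level. As a minimal Python sketch of what a primitive-motion interface of this kind could look like, assuming hypothetical names (`KineticText`, `Motion`, `add`, `play`) that are not taken from the paper:

```python
# Hypothetical sketch of a primitive-motion API for kinetic typography.
# All class and method names are illustrative assumptions, not the
# paper's actual interface.
from dataclasses import dataclass, field

@dataclass
class Motion:
    name: str          # primitive motion, e.g. "fade", "slide", "rotate"
    duration: float    # seconds
    params: dict = field(default_factory=dict)

@dataclass
class KineticText:
    text: str
    motions: list = field(default_factory=list)

    def add(self, motion: Motion) -> "KineticText":
        """Attach a primitive motion; calls can be chained."""
        self.motions.append(motion)
        return self

    def play(self) -> None:
        """Stand-in for the engine's renderer: print the motion schedule."""
        t = 0.0
        for m in self.motions:
            print(f"{t:5.2f}s  {self.text!r} -> {m.name}({m.params})")
            t += m.duration

# One function call per primitive motion, instead of hand-written animation code.
KineticText("Hello").add(Motion("fade", 1.0, {"from": 0, "to": 1})) \
                    .add(Motion("slide", 0.5, {"dx": 100})) \
                    .play()
```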

Overlay Text Graphic Region Extraction for Video Quality Enhancement Application (비디오 품질 향상 응용을 위한 오버레이 텍스트 그래픽 영역 검출)

  • Lee, Sanghee;Park, Hansung;Ahn, Jungil;On, Youngsang;Jo, Kanghyun
    • Journal of Broadcast Engineering / v.18 no.4 / pp.559-571 / 2013
  • This paper presents several problems that arise when 2D video with superimposed overlay text is converted to 3D stereoscopic video. To resolve them, it proposes a scenario in which the original video is divided into two parts, one containing only the overlay text graphic region and the other containing the video with holes, which are then processed separately. This paper focuses on detecting and extracting the overlay text graphic region, the first step in the proposed scenario. To decide whether a frame contains overlay text, a corner density map based on the Harris corner detector is used. The overlay text region is then extracted using a hybrid method combining the color and motion information of the region. The experiments show the results of the overlay text region detection and extraction process on video sequences from several genres.
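
The frame-level decision step above (a corner density map built from the Harris detector) can be sketched with OpenCV; the block size and thresholds below are illustrative assumptions, not the paper's values:

```python
# Sketch: decide whether a frame likely contains overlay text by measuring
# the local density of Harris corners (rendered text produces dense corners).
import cv2
import numpy as np

def has_overlay_text(frame_bgr, block=32, corner_thresh=0.01, density_thresh=0.15):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corners = harris > corner_thresh * harris.max()   # binary corner mask

    h, w = corners.shape
    for y in range(0, h - block, block):
        for x in range(0, w - block, block):
            density = corners[y:y+block, x:x+block].mean()
            if density > density_thresh:              # dense corner block
                return True                           # likely overlay text
    return False
```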

Extension of Kinetic Typography System Considering Text Components (요소를 고려한 키네틱 타이포그래피 시스템의 확장)

  • Jung, Seung-Ah;Lee, Dasom;Lim, Soon-Bum
    • Journal of Korea Multimedia Society / v.20 no.11 / pp.1828-1841 / 2017
  • In previous research, we proposed a kinetic typography font engine that can easily add motion to text with function calls only. However, since it is aimed at constructing movements for a whole sentence, producing various kinetic typography motions at the word or letter level remains inconvenient. We propose a Kinetic Typography Extended Motion API (Application Programming Interface) that extends the existing Kinetic Motion API. The extended kinetic typography font engine simplifies the process of making kinetic typography at the word and letter level, including the kinetic typography motion library provided as functions. In addition, a kinetic typography authoring interface is provided to facilitate the construction of the motion library for the various applications that can apply kinetic typography.
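
As a rough sketch of the granularity extension the abstract describes, splitting sentence-level motion into word and letter units, with all function names (`units`, `apply_motion`) being hypothetical:

```python
# Sketch of extending a sentence-level motion API to word and letter units.
# The granularity levels and function names are assumptions for illustration.
SENTENCE, WORD, LETTER = "sentence", "word", "letter"

def units(text: str, level: str) -> list:
    """Split text into the animation units for the chosen level."""
    if level == SENTENCE:
        return [text]
    if level == WORD:
        return text.split()
    return [ch for ch in text if not ch.isspace()]   # LETTER

def apply_motion(text: str, motion: str, level: str, stagger: float = 0.1):
    """Schedule the same primitive motion per unit, staggered in time."""
    for i, unit in enumerate(units(text, level)):
        print(f"t={i * stagger:4.2f}s  {motion}({unit!r})")

apply_motion("Text to Motion", "bounce", WORD)   # one call, word-level motion
```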

Characterization of Motion Interpolation in 120Hz Systems

  • Shin, Byung-Hyuk;Kim, Kyung-Woo;Park, Min-Kyu;Berkeley, Brian H.
    • Korean Information Display Society: Conference Proceedings / 2008.10a / pp.681-684 / 2008
  • Motion interpolation has been adopted and has spread widely in the market because it is effective in reducing motion blur, a weakness caused by the slow response time of liquid crystal and hold-type displays. 120Hz driving using interpolated frames achieves better moving picture quality with less motion blur and less motion judder. However, errors in the interpolated frames can cause visual artifacts such as static text breakup, halos, and occlusions. This paper focuses on categorizing the characteristics of these visual artifacts and on reducing side effects by using information from the original frames in special cases.
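
A generic motion-compensated interpolation step, not the vendor's algorithm, can be sketched with OpenCV optical flow; flow errors over zero-motion regions are exactly what produce the static-text breakup mentioned above:

```python
# Sketch of motion-compensated frame interpolation for 120Hz driving:
# estimate dense optical flow between two 60Hz frames and warp halfway.
# A generic illustration with a naive backward warp, not a production method.
import cv2
import numpy as np

def interpolate_midframe(prev_bgr, next_bgr):
    prev_g = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_g = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    # Dense flow from prev -> next (Farneback).
    flow = cv2.calcOpticalFlowFarneback(prev_g, next_g, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_g.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Sample the previous frame half a motion vector back (approximation:
    # flow is evaluated at the destination pixel, not the true source).
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    mid = cv2.remap(prev_bgr, map_x, map_y, cv2.INTER_LINEAR)
    return mid   # insert between prev and next to double the frame rate
```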

Ship Number Recognition Method Based on An improved CRNN Model

  • Wenqi Xu;Yuesheng Liu;Ziyang Zhong;Yang Chen;Jinfeng Xia;Yunjie Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.740-753 / 2023
  • Text recognition in natural scene images is a challenging problem in computer vision. Accurate identification of ship number characters can effectively improve the level of ship traffic management. However, due to motion blur and text occlusion, the accuracy of ship number recognition struggles to meet practical requirements. To solve these problems, this paper proposes a dual-branch network based on the CRNN recognition network that couples image restoration and character recognition. A CycleGAN module is used for the blur restoration branch, and a Pix2pix module is used for the character occlusion branch; the two are coupled to reduce the impact of image blur and occlusion. The recovered image is fed into the text recognition branch to improve recognition accuracy. Extensive experiments show that the model is robust and easy to train. Experiments on the CTW dataset and real ship images illustrate that our method obtains more accurate results.
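
The recognition branch follows the standard CRNN layout (convolutional features, a bidirectional LSTM over the width axis, CTC-style logits). A minimal PyTorch sketch with illustrative layer sizes, omitting the paper's two restoration modules:

```python
# Minimal CRNN recognition-branch sketch (conv features -> BiLSTM -> CTC logits).
# Layer sizes are illustrative; the paper's dual-branch restoration modules
# (CycleGAN for blur, Pix2pix for occlusion) would run before this network.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, num_classes, img_h=32):
        super().__init__()
        self.cnn = nn.Sequential(                       # input: (B,1,32,W)
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                       # keep width resolution
        )
        self.rnn = nn.LSTM(256 * (img_h // 8), 256,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, num_classes)           # CTC blank included

    def forward(self, x):                               # x: (B,1,32,W)
        f = self.cnn(x)                                 # (B,256,4,W/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 3, 1, 2).reshape(b, w, c * h)  # width as time axis
        out, _ = self.rnn(seq)
        return self.fc(out)                             # (B,W/4,num_classes)

logits = CRNN(num_classes=37)(torch.randn(2, 1, 32, 128))  # e.g. 0-9, A-Z, blank
```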

Spatiotemporal Removal of Text in Image Sequences (비디오 영상에서 시공간적 문자영역 제거방법)

  • Lee, Chang-Woo;Kang, Hyun;Jung, Kee-Chul;Kim, Hang-Joon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.2 / pp.113-130 / 2004
  • Most multimedia data contain text to emphasize the meaning of the data, to present additional explanations about the situation, or to translate different languages. However, this text makes it difficult to reuse the images and distorts not only the original images but also their meanings. Accordingly, this paper proposes a support vector machine (SVM) and spatiotemporal restoration based approach for automatic text detection and removal in video sequences. Given two consecutive frames, text regions in the current frame are first detected by an SVM-based texture classifier. Second, two stages are performed to restore the regions occluded by the detected text: temporal restoration using consecutive frames and spatial restoration within the current frame. Using text motion and background difference, an input video sequence is classified and a different temporal restoration scheme is applied to each class. Such a combination of temporal and spatial restoration shows great potential for automatic detection and removal of objects of interest in various kinds of video sequences, and is applicable to many applications such as translation of captions and replacement of indirect advertisements in videos.
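
The two-part pipeline can be sketched as follows, with an SVM block classifier and OpenCV inpainting standing in for the paper's spatial restoration; the texture features are assumptions, and the classifier is taken as already trained:

```python
# Sketch of the two-part pipeline: an SVM texture classifier flags text blocks,
# then cv2.inpaint stands in for the paper's spatial restoration step.
import cv2
import numpy as np
from sklearn import svm

def block_features(gray, y, x, s=16):
    """Simple texture features for one block (assumed, not the paper's)."""
    patch = gray[y:y+s, x:x+s].astype(np.float32)
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    return [patch.mean(), patch.std(), np.abs(gx).mean(), np.abs(gy).mean()]

def remove_text(frame_bgr, clf: svm.SVC, s=16):
    """clf is a pre-trained text/non-text block classifier."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mask = np.zeros(gray.shape, np.uint8)
    for y in range(0, gray.shape[0] - s, s):
        for x in range(0, gray.shape[1] - s, s):
            if clf.predict([block_features(gray, y, x, s)])[0] == 1:
                mask[y:y+s, x:x+s] = 255          # block classified as text
    # Spatial restoration: fill the masked text area from its surroundings.
    return cv2.inpaint(frame_bgr, mask, 3, cv2.INPAINT_TELEA)
```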

An Analysis on the Design of Motion-Sensing Game Role Selection GUI (체감형 게임에서 캐릭터 선택 GUI 디자인 분석)

  • Huang, HaiBiao;Zheng, LingJing;Ryu, Seuc-HO
    • Journal of Digital Convergence / v.18 no.7 / pp.383-387 / 2020
  • This paper studies the GUI design of character selection in motion-sensing games. A game's GUI transfers information in human-computer interaction. The purpose of player-oriented GUI design is to optimize human-computer interaction, make operation more user-friendly, reduce the user's cognitive burden, and better meet the user's operational needs. Using examples, this paper compares and analyzes three games in terms of text, color, and form. In existing motion-sensing games, the character selection GUI features high recognizability and strong visibility. Finally, suggestions for the GUI design of motion-sensing games are given, in the hope of providing reference material for the character selection GUI design of future motion-sensing games.

A study on the Interactive Expression of Human Emotions in Typography

  • Lim, Sooyeon
    • International Journal of Advanced Culture Technology / v.10 no.1 / pp.122-130 / 2022
  • In modern times, text has become an image, and typography, a combination of image and text, is a style easily encountered in everyday life. It is developing not only for the purpose of meaningful communication but also, as a medium with an aesthetic format, to bring joy and beauty to our lives. Through case analysis, this study shows that typography is a tool for expressing human emotions and investigates the characteristics that change along with the media. In particular, the interactive communication tools and methods used by interactive typography to express viewers' emotions are described in detail. We created interactive typography using the inputted text, music selected by the viewer, and the viewer's movement. By applying it in an exhibition, we confirmed that interactive typography can function as an effective communication medium, exhibiting both the iconicity of letter signs and their cognitive function when combined with the audience's intentional motion.

A Model to Automatically Generate Non-verbal Expression Information for Korean Utterance Sentence (한국어 발화 문장에 대한 비언어 표현 정보를 자동으로 생성하는 모델)

  • Jaeyoon Kim;Jinyea Jang;San Kim;Minyoung Jung;Hyunwook Kang;Saim Shin
    • Annual Conference on Human and Language Technology / 2023.10a / pp.91-94 / 2023
  • To develop an AI agent capable of natural interaction, non-verbal expressions must be considered in addition to verbal expressions. This paper introduces research on generating motion, a non-verbal expression, from Korean utterance sentences. We built a dataset from YouTube videos and conducted experiments with a model implemented using T2M-GPT, an existing text-to-motion model, and the language encoder of VL-KE-T5, which was trained jointly on heterogeneous modality data. The experimental results show that the motion representations generated for Korean utterance text achieved an FID score of 0.11, demonstrating the feasibility of generating non-verbal expression information from Korean utterance information.
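
The FID score quoted above is the standard Fréchet distance between Gaussian fits of real and generated feature sets; the computation itself is model-independent (the motion feature extractor is omitted here):

```python
# Standard Frechet distance between two feature sets, as used for the
# FID score quoted above. The motion feature extractor is model-specific
# and not shown.
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):        # numerical noise from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```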

Development of Motion Recognition Platform Using Smart-Phone Tracking and Color Communication (스마트 폰 추적 및 색상 통신을 이용한 동작인식 플랫폼 개발)

  • Oh, Byung-Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.5 / pp.143-150 / 2017
  • In this paper, we propose a novel motion recognition platform using smart-phone tracking and color communication. The interface requires only a camera and a personal smart-phone, rather than expensive equipment, to provide motion control. The platform recognizes the user's gestures by tracking the 3D distance and rotation angle of the smart-phone, which essentially acts as a motion controller in the user's hand. A color-coded communication method using RGB color combinations is also included in the interface. Users can conveniently send or receive text data through this function, and the data can be transferred continuously even while the user is performing gestures. We present the implementation of viable contents based on the proposed motion recognition platform.
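
The color-coded channel can be sketched as packing each byte of text into one displayed RGB color and decoding it on the camera side; the 2-3-3 bit split below is an assumption, not the paper's actual scheme:

```python
# Sketch of color-coded data transfer: each byte of text is shown as an RGB
# color frame on the phone screen and read back by the camera.
def byte_to_rgb(b: int) -> tuple:
    """Pack one byte into coarse RGB levels (2 bits red, 3 green, 3 blue)."""
    r = (b >> 6) & 0b11
    g = (b >> 3) & 0b111
    bl = b & 0b111
    return (r * 85, g * 36, bl * 36)    # spread levels over 0-255

def rgb_to_byte(rgb: tuple) -> int:
    """Camera-side decode: quantize back to the coarse levels."""
    r = round(rgb[0] / 85) & 0b11
    g = round(rgb[1] / 36) & 0b111
    bl = round(rgb[2] / 36) & 0b111
    return (r << 6) | (g << 3) | bl

frames = [byte_to_rgb(b) for b in "hi".encode()]      # colors to display
decoded = bytes(rgb_to_byte(c) for c in frames)       # camera-side decode
assert decoded == b"hi"
```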