• Title/Summary/Keyword: Multi-modal


Brain Computer Interfacing: A Multi-Modal Perspective

  • Fazli, Siamac;Lee, Seong-Whan
    • Journal of Computing Science and Engineering / v.7 no.2 / pp.132-138 / 2013
  • Multi-modal techniques have received increasing interest in the neuroscientific and brain computer interface (BCI) communities in recent times. Two aspects of multi-modal imaging for BCI will be reviewed. First, the use of recordings of multiple subjects to help find subject-independent BCI classifiers is considered. Then, multi-modal neuroimaging methods involving combined electroencephalogram and near-infrared spectroscopy measurements are discussed, which can help achieve enhanced and robust BCI performance.
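
The EEG-NIRS combination mentioned in this abstract can be illustrated with a simple late-fusion sketch: train one classifier per modality and average their class probabilities. This is a minimal, generic illustration, not the specific methods surveyed in the paper; the feature arrays and dimensions below are placeholders.

    # Minimal sketch: late fusion of EEG and NIRS classifiers (illustrative only,
    # not the specific methods surveyed in the paper).
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    # Placeholder features: one row per trial, columns are modality-specific features.
    eeg_features  = rng.normal(size=(200, 16))   # e.g., band-power features
    nirs_features = rng.normal(size=(200, 8))    # e.g., HbO/HbR averages
    labels        = rng.integers(0, 2, size=200) # binary BCI task labels

    # Train one classifier per modality, then average their class probabilities.
    eeg_clf  = LinearDiscriminantAnalysis().fit(eeg_features, labels)
    nirs_clf = LinearDiscriminantAnalysis().fit(nirs_features, labels)

    p_fused = 0.5 * eeg_clf.predict_proba(eeg_features) + \
              0.5 * nirs_clf.predict_proba(nirs_features)
    fused_prediction = p_fused.argmax(axis=1)
    print("training accuracy of fused classifier:", (fused_prediction == labels).mean())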

A Research of User Experience on Multi-Modal Interactive Digital Art

  • Qianqian Jiang;Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.80-85 / 2024
  • The concept of single-modal digital art originated in the 20th century and has evolved through three key stages. Over time, digital art has transformed into multi-modal interaction, representing a new era in art forms. Based on multi-modal theory, this paper explores the characteristics of interactive digital art as an innovative art form and its impact on user experience. Through an analysis of practical applications of multi-modal interactive digital art, this study summarizes the impact of creative models of digital art on the physical and mental aspects of user experience. In creating audio-visual-based art, multi-modal digital art should seamlessly incorporate sensory elements and leverage computer image processing technology. Focusing on user perception, emotional expression, and cultural communication, it strives to establish an immersive environment with user experience at its core. Future research, particularly with emerging technologies like Artificial Intelligence (AI) and Virtual Reality (VR), should not merely prioritize technology but aim for meaningful interaction. Through multi-modal interaction, digital art is poised to continually innovate, offering new possibilities and expanding the realm of interactive digital art.

Multi-Modal Wearable Sensor Integration for Daily Activity Pattern Analysis with Gated Multi-Modal Neural Networks (Gated Multi-Modal Neural Networks를 이용한 다중 웨어러블 센서 결합 방법 및 일상 행동 패턴 분석)

  • On, Kyoung-Woon;Kim, Eun-Sol;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.23 no.2 / pp.104-109 / 2017
  • We propose a new machine learning algorithm that analyzes users' daily activity patterns from multi-modal wearable sensor data. The proposed model learns and extracts activity patterns from wearable device input in real time. Inspired by the way humans integrate multiple sensory cues, we construct gated multi-modal neural networks that selectively integrate wearable sensor inputs using gate modules. For the experiments, sensory data were collected with multiple wearable devices in restaurant settings. Experimental results first show that the proposed model performs well in terms of prediction accuracy. We then discuss the possibility of automatically constructing a knowledge schema by analyzing activation patterns in the middle layer of the proposed model.
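
A gated fusion of two sensor streams, as described above, can be sketched in a few lines of numpy: each modality is encoded separately and a sigmoid gate decides how much each contributes. This is a generic illustration under assumed dimensions, not the paper's exact architecture or training procedure.

    # Minimal numpy sketch of a gated multi-modal fusion unit (illustrative; the
    # paper's exact architecture and training procedure are not reproduced here).
    import numpy as np

    def gated_fusion(x_a, x_b, W_a, W_b, W_z):
        """Fuse two sensor feature vectors with a learned gate.

        h_a, h_b : per-modality hidden representations
        z        : gate in (0, 1) deciding how much each modality contributes
        """
        h_a = np.tanh(W_a @ x_a)
        h_b = np.tanh(W_b @ x_b)
        z = 1.0 / (1.0 + np.exp(-(W_z @ np.concatenate([x_a, x_b]))))  # sigmoid gate
        return z * h_a + (1.0 - z) * h_b

    # Toy dimensions: two wearable sensors with 6- and 4-dimensional features,
    # fused into an 8-dimensional representation.
    rng = np.random.default_rng(0)
    W_a = rng.normal(size=(8, 6))
    W_b = rng.normal(size=(8, 4))
    W_z = rng.normal(size=(8, 10))

    fused = gated_fusion(rng.normal(size=6), rng.normal(size=4), W_a, W_b, W_z)
    print(fused.shape)  # (8,)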

A Multi-Modal Complex Motion Authoring Tool for Creating Robot Contents

  • Seok, Kwang-Ho;Kim, Yoon-Sang
    • Journal of Korea Multimedia Society / v.13 no.6 / pp.924-932 / 2010
  • This paper proposes a multi-modal complex motion authoring tool for creating robot contents. The proposed tool is user-friendly and allows general users without much knowledge about robots, including children, women, and the elderly, to easily edit and modify robot contents. Furthermore, the tool uses multi-modal data including graphic motion, voice, and music to simulate user-created robot contents in a 3D virtual environment. This allows the user not only to view the authoring process in real time but also to transmit the final authored contents to control the robot. The validity of the proposed tool was examined through simulations using the authored multi-modal complex motion robot contents as well as experiments with actual robot motions.
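
As a rough illustration of the kind of multi-modal content such a tool might produce, the hypothetical data structure below combines motion keyframes, voice clips, and a music track into one authored unit; all field names are assumptions for illustration, not the tool's actual format.

    # Hypothetical sketch of authored multi-modal robot content; the field names
    # are assumptions, not the tool's actual data format.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MotionKeyframe:
        time_s: float                 # when the pose is reached
        joint_angles: List[float]     # target joint angles in degrees

    @dataclass
    class RobotContent:
        motion: List[MotionKeyframe] = field(default_factory=list)
        voice_clips: List[str] = field(default_factory=list)   # e.g., paths to audio files
        music_track: str = ""                                   # background music file

    content = RobotContent(
        motion=[MotionKeyframe(0.0, [0, 0, 0]), MotionKeyframe(1.5, [30, -15, 10])],
        voice_clips=["hello.wav"],
        music_track="intro.mp3",
    )
    print(len(content.motion), "keyframes authored")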

Analysis of Semantic Relations Between Multimodal Medical Images Based on Coronary Anatomy for Acute Myocardial Infarction

  • Park, Yeseul;Lee, Meeyeon;Kim, Myung-Hee;Lee, Jung-Won
    • Journal of Information Processing Systems / v.12 no.1 / pp.129-148 / 2016
  • Acute myocardial infarction (AMI) is one of the three emergency diseases that require urgent diagnosis and treatment within the golden hour. Because of the nature of the disease, it is important to identify the status of the coronary artery in AMI. Therefore, multi-modal medical images, which can effectively show the status of the coronary artery, have been widely used to diagnose AMI. However, legacy systems provide multi-modal medical images as flat, unstructured data and lack semantic information linking the images, which are distributed and stored individually. If the status of the coronary artery could be seen at once by integrating the core information extracted from multi-modal medical images, the time for diagnosis and treatment would be reduced. In this paper, we analyze semantic relations between multi-modal medical images based on coronary anatomy for AMI. First, we selected a coronary arteriogram, coronary angiography, and echocardiography as the representative medical images for AMI and extracted semantic features from each of them. We then analyzed the semantic relations between them and defined a convergence data model for AMI. As a result, we show that the data model can present the core information from multi-modal medical images and enables intuitive diagnosis through a unified view of AMI.
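
The convergence idea can be illustrated with a hypothetical record keyed by coronary segment that gathers findings from each modality into one view; the attribute names below are assumptions for illustration, not the paper's actual schema.

    # Hypothetical sketch of a convergence data model keyed by coronary segment;
    # attribute names are assumptions for illustration, not the paper's schema.
    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class SegmentFindings:
        stenosis_percent: Optional[float] = None   # from coronary angiography
        dominance: Optional[str] = None            # from coronary arteriogram
        wall_motion: Optional[str] = None          # from echocardiography

    # One record per anatomical segment (e.g., LAD, LCX, RCA) gathers the core
    # information extracted from each modality into a single unified view.
    unified_view: Dict[str, SegmentFindings] = {
        "LAD": SegmentFindings(stenosis_percent=90.0, wall_motion="akinetic"),
        "RCA": SegmentFindings(dominance="right", stenosis_percent=30.0),
    }
    print(unified_view["LAD"])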

A Study on the Multi-Modal Browsing System by Integration of Browsers Using Java RMI (자바 RMI를 이용한 브라우저 통합에 의한 멀티-모달 브라우징 시스템에 관한 연구)

  • Jang Joonsik;Yoon Jaeseog;Kim Gukboh
    • Journal of Internet Computing and Services / v.6 no.1 / pp.95-103 / 2005
  • Research on multi-modal systems has recently been pursued widely and actively. Such systems increase the feasibility of human-computer interaction (HCI), provide information through various channels, and are applicable to e-business. If an ideal multi-modal system is realized in the future, users will eventually be able to maximize interactive usability between people and information devices in a hands-free and eyes-free manner. In this paper, a new multi-modal browsing system is proposed that integrates an HTML browser and a voice browser using Java RMI as the communication interface, and an English-English dictionary search application is implemented as an example.
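
The integration pattern described here, two browsers coordinated through a remote interface, can be sketched in Python with xmlrpc as a rough analogy; the paper itself uses Java RMI, and the method name and port below are invented for illustration.

    # Rough analogy of the browser-integration pattern using Python xmlrpc
    # (not the paper's Java RMI implementation): one browser exposes a remote
    # method, the other calls it when the user issues input in its modality.
    from xmlrpc.server import SimpleXMLRPCServer
    import threading
    import xmlrpc.client

    def render_query(term: str) -> str:
        # Stand-in for the HTML browser displaying a dictionary lookup result.
        return f"<html><body>Definition of {term}</body></html>"

    server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True, logRequests=False)
    server.register_function(render_query, "render_query")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The "voice browser" side: a recognized spoken word is forwarded to the
    # HTML browser through the remote interface.
    html_browser = xmlrpc.client.ServerProxy("http://localhost:8000")
    print(html_browser.render_query("serendipity"))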


The Effects of Multi-Modality on the Use of Smart Phones

  • Lee, Gaeun;Kim, Seongmin;Choe, Jaeho;Jung, Eui Seung
    • Journal of the Ergonomics Society of Korea / v.33 no.3 / pp.241-253 / 2014
  • Objective: The objective of this study was to examine multi-modal interaction effects of input-mode switching on the use of smart phones. Background: Multi-modal interaction is considered an efficient alternative for input and output of information in mobile environments. However, current mobile UI (User Interface) systems have various limitations, overlooking the transition between different modes and the usability of combined multi-modal use. Method: A pre-survey determined five representative smart phone tasks according to their functions. The first experiment involved the use of a single mode for five single tasks; the second experiment involved the use of multiple modes for three dual tasks. The dependent variables were user preference and task completion time. The independent variable in the first experiment was the type of mode (i.e., touch, pen, or voice), while the variable in the second experiment was the type of task (i.e., internet searching, subway map, memo, gallery, and application store). Results: In the first experiment, there was no difference between the use of pen and touch input. However, a specific mode type was preferred depending on the functional characteristics of the tasks. In the second experiment, the results showed that user preference depended on the order and combination of modes. Even with mode transitions, users preferred multi-mode combinations that included voice. Conclusion: The order and combination of modes may affect the usability of multi-modes. Therefore, when designing a multi-modal system, frequent transitions between various mobile contents in different modes should be properly considered. Application: The results may be used as a user-centered design guideline for mobile multi-modal UI systems.

A Study on Strengthened Genetic Algorithm for Multi-Modal and Multiobjective Optimization (강화된 유전 알고리듬을 이용한 다극 및 다목적 최적화에 관한 연구)

  • Lee Won-Bo;Park Seong-Jun;Yoon En-Sup
    • Journal of the Korean Institute of Gas / v.1 no.1 / pp.33-40 / 1997
  • An optimization system, APROGA II, based on a genetic algorithm was developed to solve multi-modal and multiobjective problems. First, the Multi-Niche Crowding (MNC) algorithm was used for the multi-modal optimization problem. Second, a new algorithm was suggested for the multiobjective optimization problem: Pareto dominance tournaments and sharing on the non-dominated frontier were applied to handle multiple objectives. APROGA II uses these two algorithms, and the system has three search engines (the previous APROGA search engine, a multi-modal search engine, and a multiobjective search engine). The system can also handle binary and discrete variables. The validity of APROGA II was demonstrated by successfully solving several test functions and case study problems.
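
The Pareto dominance tournament mentioned above can be sketched as follows for a minimization problem: pick a few candidates at random and keep one that is not dominated by the others. This is an illustrative fragment with an invented toy population, not the APROGA II implementation.

    # Minimal sketch of a Pareto dominance tournament for minimization problems;
    # an illustration of the selection idea, not the APROGA II implementation.
    import random

    def dominates(a, b):
        """True if objective vector `a` Pareto-dominates `b` (all <=, at least one <)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def dominance_tournament(population, objectives, k=2):
        """Pick k candidates at random; return one that is not dominated among them."""
        contenders = random.sample(range(len(population)), k)
        winner = contenders[0]
        for c in contenders[1:]:
            if dominates(objectives[c], objectives[winner]):
                winner = c
        return population[winner]

    # Toy population with two objectives to minimize.
    pop = [[0.1, 0.9], [0.4, 0.4], [0.9, 0.1], [0.6, 0.7]]
    objs = [(x[0], x[1]) for x in pop]          # identity objectives for illustration
    parent = dominance_tournament(pop, objs)
    print("selected parent:", parent)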


Multi-Grouped Particle Swarm Strategy for Multi-modal Optimization (Multi-modal 최적화를 위한 다중 그룹 Particle Swarm 전략)

  • Seo, Jang-Ho;Jung, Hyun-Kyo
    • Proceedings of the KIEE Conference / 2005.07b / pp.1026-1028 / 2005
  • This paper proposes a multi-group particle swarm optimization algorithm (MGPSO) for multi-modal optimization based on PSO (Particle Swarm Optimization). Because the proposed algorithm retains the basic characteristics of PSO, it converges faster than existing hybrid-type optimization methods and is simple to construct. The validity of the proposed algorithm is demonstrated on test functions with multiple peaks, and its usefulness is confirmed by applying it to the optimal design of an interior permanent magnet motor.
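
The multi-group idea can be sketched by running several particle swarms in parallel, each keeping its own group best, so that different groups can settle on different peaks of a multi-modal function. The update constants and test function below are assumptions for illustration, not MGPSO's actual formulation.

    # Minimal sketch of a multi-group particle swarm: each group keeps its own
    # group-best so different groups can settle on different peaks.
    # Illustrative only; MGPSO's exact update rules are not reproduced here.
    import numpy as np

    def f(x):                        # multi-modal test function (several peaks)
        return np.sin(5 * x) * np.exp(-0.1 * x ** 2)

    rng = np.random.default_rng(0)
    n_groups, n_particles, iters = 4, 10, 200
    x = rng.uniform(-5, 5, size=(n_groups, n_particles))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), f(x)
    gbest = x[np.arange(n_groups), pbest_val.argmax(axis=1)]   # one best per group

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest[:, None] - x)
        x = np.clip(x + v, -5, 5)
        val = f(x)
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.arange(n_groups), pbest_val.argmax(axis=1)]

    print("per-group peaks found at x =", np.round(gbest, 3))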


A Link-Based Label Correcting Multi-Objective Shortest Paths Algorithm in Multi-Modal Transit Networks (복합대중교통망의 링크표지갱신 다목적 경로탐색)

  • Lee, Mee-Young;Kim, Hyung-Chul;Park, Dong-Joo;Shin, Seong-Il
    • Journal of Korean Society of Transportation / v.26 no.1 / pp.127-135 / 2008
  • Generally, shortest path algorithms adopt a single objective among several attributes such as travel time, travel cost, travel fare, and travel distance. Multi-objective shortest path algorithms, on the other hand, find shortest paths with respect to multiple objectives simultaneously. Until recently, most research on multi-objective shortest paths has dealt only with single-mode transportation networks. Although some papers address multi-objective shortest paths in multi-modal transportation networks, they do not consider transfers at the optimal-solution level; in particular, dynamic programming has not been applied to multi-objective shortest path problems in multi-modal transportation networks. In this study, we propose a multi-objective shortest path algorithm based on dynamic programming to find optimal solutions in multi-modal transportation networks. The algorithm builds on the two-objective node-based label-correcting algorithm proposed by Skriver and Andersen in 2000, and transfers are reflected without network expansion. In addition, non-dominated path and tree sets are used as labels to improve the effectiveness of searching for non-dominated paths, and path-finding attributes are classified into transfer attributes and link-travel attributes in the transit network. Lastly, the calculation process of the proposed algorithm is verified by a computer implementation on a small-scale multi-modal transportation network.
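
The core of such a label-correcting search can be sketched by keeping, at every node, the set of non-dominated (travel time, number of transfers) labels and propagating them until no label changes. The sketch below is node-based and illustrative only; the paper's link-based labels, tree sets, and attribute classification are not reproduced, and the toy network is invented.

    # Minimal sketch of a label-correcting search that keeps all non-dominated
    # (travel time, number of transfers) labels per node. Illustrative only.
    from collections import deque

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b

    def multiobjective_labels(graph, source):
        """graph[u] = list of (v, time, is_transfer); returns node -> non-dominated labels."""
        labels = {source: {(0, 0)}}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v, time, is_transfer in graph.get(u, []):
                for t, k in list(labels[u]):
                    new = (t + time, k + int(is_transfer))
                    cur = labels.setdefault(v, set())
                    if any(dominates(old, new) or old == new for old in cur):
                        continue
                    labels[v] = {old for old in cur if not dominates(new, old)} | {new}
                    queue.append(v)
        return labels

    # Toy multi-modal network: edges marked as transfers count toward the second objective.
    net = {"A": [("B", 10, False), ("C", 4, True)],
           "B": [("D", 3, False)],
           "C": [("D", 5, True)]}
    print(multiobjective_labels(net, "A")["D"])   # non-dominated labels: (13, 0) and (9, 2)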