• Title/Summary/Keyword: MPI

Search results: 506

TIGRIS Grid MPI Service based on WSRF (WSRF기반의 TIGRIS 그리드 MPI 서비스)

  • Kwon, Oh-Kyoung;Hahm, Jae-Gyoon;Lee, Pill-Woo
    • The Journal of the Korea Contents Association / v.8 no.1 / pp.137-142 / 2008
  • In this paper, we describe the TIGRIS Grid MPI Service, a WS-Resource Framework (WSRF)-based service that enables MPI jobs to be executed in Grid environments. It covers heterogeneous compute resources and diverse MPI libraries. The main functionalities are as follows. First, it allows an MPI user to seamlessly launch a job without knowing how to use the specific MPI library. Second, it executes an MPI job on cross-site resources by supporting Grid-enabled MPI libraries such as MPICH-G2. Third, it enables the user to launch a job from source code without compiling it beforehand. The service is implemented on top of the services of the Globus Toolkit. We provide the user interface as a web portal and a CLI (Command Line Interface).
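
As a point of reference for the kind of job such a service manages, the sketch below is a minimal, self-contained MPI program in C. It is a generic illustration of the source a user might submit for the service to compile and launch across resources, not code from the TIGRIS implementation.

```c
/* Minimal MPI job of the kind a Grid MPI service could compile and launch.
 * Generic illustration only; not taken from the TIGRIS implementation. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start the MPI runtime      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes  */
    MPI_Get_processor_name(host, &len);      /* node this rank runs on     */

    printf("rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```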

Design and Implementation of a User-based MPI Checkpointer for Portability (이식성을 고려한 사용자기반 MPI 체크포인터의 설계 및 구현)

  • Ahn Sun-Il;Han Sang-Yong
    • Journal of KIISE: Computer Systems and Theory / v.33 no.1_2 / pp.35-43 / 2006
  • An MPI checkpointer is a tool that provides fault tolerance through checkpointing. Previous research on MPI checkpointers has focused on automatic checkpointing and recovery capabilities but has not considered portability issues. In this paper, we discuss the design and implementation issues considered for portability when developing an MPI checkpointer called STFT. To increase portability, STFT first supports an abstraction interface for a single-process checkpointer. Second, STFT uses a user-based checkpointing method and restricts the places where a user may take a checkpoint. Third, STFT has MPI_Init create network connections to the other MPI processes in a fixed order. With these features, we expect STFT to be easily adaptable to various platforms and MPI implementations, and we confirmed with a prototype implementation that STFT is easily adaptable to LAM and MPICH/P4.
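
To make the idea of user-based checkpointing at restricted places concrete, here is a hedged C sketch in which a checkpoint call is allowed only at iteration boundaries chosen by the programmer. The function ckpt_take() and the checkpoint interval are hypothetical placeholders, not STFT's actual interface.

```c
/* Sketch of user-based checkpointing in an MPI program: the user decides
 * where a checkpoint may be taken (here, only at iteration boundaries).
 * ckpt_take() is a hypothetical stand-in for a checkpointer's API. */
#include <mpi.h>
#include <stdio.h>

static void ckpt_take(int step, double state)
{
    /* A real checkpointer would save the process state here; STFT hides
     * the underlying single-process checkpointer behind an interface. */
    printf("checkpoint at step %d, state=%f\n", step, state);
}

int main(int argc, char *argv[])
{
    int rank, step;
    double local = 0.0, global = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (step = 1; step <= 100; step++) {
        local += rank + step;                        /* stand-in workload  */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);      /* synchronize ranks  */

        if (step % 25 == 0)          /* checkpoint only at chosen places,  */
            ckpt_take(step, global); /* with no communication in flight    */
    }

    MPI_Finalize();
    return 0;
}
```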

TIGRIS Grid MPI Service based on WSRF (WSRF기반의 TIGRIS 그리드 MPI 서비스)

  • Kwon, Oh-Kyoung;Park, Kyung-Lang;Kwon, Oh-Young;Hahm, Jaegyoon;Lee, Pill Woo
    • Proceedings of the Korea Contents Association Conference / 2007.11a / pp.212-216 / 2007
  • In this paper, we describe the TIGRIS Grid MPI Service, a WS-Resource Framework (WSRF)-based service that enables MPI jobs to be executed in Grid environments. It covers heterogeneous compute resources and diverse MPI libraries. The main functionalities are as follows. First, it allows an MPI user to seamlessly launch a job without knowing how to use the specific MPI library. Second, it executes an MPI job on cross-site resources by supporting Grid-enabled MPI libraries such as MPICH-G2. Third, it enables the user to launch a job from source code without compiling it beforehand.


A Log-based Analysis on the Characteristics and Structure of MPI-IO (MPI-IO를 위한 로그 기반 특성 및 구조 분석)

  • Cha, Kwangho
    • Proceedings of the Korea Information Processing Society Conference / 2013.05a / pp.114-116 / 2013
  • MPI, which is used for message-passing parallel programming, provides a file I/O method called MPI-IO that is well suited to ensuring program scalability. To minimize the performance degradation caused by concurrent parallel I/O, MPI-IO internally reorders the data before performing the I/O. In this study, we devised a way to record this internal processing of MPI-IO, collected execution-time logs, and examined the characteristics of MPI-IO based on them.
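
For context, the MPI-IO operations whose internal phases such execution-time logging would observe are typically collective calls like MPI_File_write_at_all, where the library may reorder and aggregate data before issuing file-system requests. The sketch below is a generic collective-write example in C with an illustrative file name and block size, not the paper's instrumented code.

```c
/* Generic MPI-IO collective write: each rank writes its own block of a
 * shared file; the MPI-IO layer may internally reorder and aggregate the
 * data (two-phase collective I/O) before touching the file system. */
#include <mpi.h>
#include <stdio.h>

#define N 1024   /* elements per rank (illustrative size) */

int main(int argc, char *argv[])
{
    int rank, i, buf[N];
    MPI_File fh;
    MPI_Offset offset;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < N; i++)
        buf[i] = rank;                               /* fill with rank id  */

    offset = (MPI_Offset)rank * N * sizeof(int);     /* disjoint regions   */

    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, offset, buf, N, MPI_INT, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}
```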

The difference between the two methods for myocardial performance index in children (소아에서 심근 수행 지수 측정 방법간의 차이)

  • Joung, Jae-Il;Lee, Chang-Hyun;Kim, Jae-Kwang;Park, Kie-Young;Kim, Bong-Sung;Lee, Jung-Ju;Han, Myung-Ki
    • Clinical and Experimental Pediatrics / v.49 no.12 / pp.1324-1328 / 2006
  • Purpose : The object of this study was to determine the difference between two methods for the myocardial performance index (MPI) in children, using conventional and pulsed Doppler echocardiography. Methods : A total of 27 children with anatomically normal hearts were enrolled in the study. All were examined by conventional and pulsed Doppler echocardiography at Gangneung Asan Hospital between December 2005 and February 2006. First, we measured the time interval (a1) between the mitral inflows from the apical 4-chamber view and the ejection time (ET1) from the apical 5-chamber view, and then calculated MPI1, the isovolumic contraction time (ICT1) and the isovolumic relaxation time (IRT1). Second, we measured ICT2, ET2 and IRT2 from the apical 5-chamber view with a Doppler signal placed just below the junction between the mitral and aortic valves in the same cardiac cycle, and then calculated MPI2. We compared MPI1 to MPI2. All MPIs were calculated using the formula MPI=(ICT+IRT)/ET. Results : The mean age was 5.7±2.2 years (M:F=15:12). MPI2 was higher than MPI1: 0.277±0.083 vs. 0.428±0.081 (MPI1 vs. MPI2, P=0.000). Also, ICT2 was higher than ICT1: 56±15 msec vs. 97±18 msec (ICT1 vs. ICT2, P=0.000), and IRT2 was higher than IRT1: 42±8 msec vs. 53±9 msec (IRT1 vs. IRT2, P=0.000). However, ET2 was lower than ET1: 260±16 msec vs. 254±14 msec (ET1 vs. ET2, P=0.01). There was also a positive linear correlation between MPI1 and MPI2. Conclusion : This study showed that there is a difference between MPI1 and MPI2 depending on the estimation method. However, the two MPIs had a positive linear correlation. Judging from our results, the MPI of the new method might be a useful index of global ventricular function in children.
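
For reference, both methods in the study above compute the index from the same formula; a compact restatement (assuming a denotes the interval from cessation to onset of mitral inflow, as described in the abstract) is:

```latex
% Myocardial performance index; a is the interval between mitral inflows,
% so a = ICT + ET + IRT.
\[
  \mathrm{MPI} \;=\; \frac{\mathrm{ICT} + \mathrm{IRT}}{\mathrm{ET}}
              \;=\; \frac{a - \mathrm{ET}}{\mathrm{ET}}
\]
```

Hence MPI1 needs only the mitral-inflow interval a1 and ET1 taken from two views, while MPI2 is built from ICT2, ET2 and IRT2 measured within a single cardiac cycle, which is exactly the methodological difference the study compares.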

A Fault-Tolerant Linear System Solver in a Standard MPI Environment (표준 MPI 환경에서의 무정지형 선형 시스템 해법)

  • Park, Pil-Seong
    • Journal of Internet Computing and Services / v.6 no.6 / pp.23-34 / 2005
  • In a large-scale parallel computation, failures of some nodes or communication links result in wasted computing resources. Several fault-tolerant MPI libraries have been proposed so far, but programs written using such libraries have a portability problem, since fault-tolerance features are not yet supported by the MPI standard. In this paper, we propose an application-level fault-tolerant linear system solver that uses the asynchronous iteration algorithm and only standard MPI functions; it has no portability problem and is made more efficient by adopting a simplified recovery mechanism.
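
Because the solver relies on asynchronous iterations expressed with standard MPI functions only, the C sketch below shows one common pattern for that style: non-blocking probes, receives and sends let each rank keep iterating with the newest neighbor value it has actually received instead of blocking every step. It is a structural sketch under assumed simplifications (one value per rank, a ring of neighbors), not the paper's solver or its recovery mechanism.

```c
/* Structural sketch of an asynchronous iteration built from standard MPI
 * point-to-point calls only: each rank keeps computing with the newest
 * neighbor value it has received so far and never blocks waiting for a
 * fresh one. Illustration only; not the solver from the paper. */
#include <mpi.h>
#include <stdio.h>

#define NSTEPS 50

int main(int argc, char *argv[])
{
    int rank, size, left, right, step, flag, sent = 0, got = 0, expect;
    double x = 1.0, from_left = 0.0, outbox;
    MPI_Request sreq = MPI_REQUEST_NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    right = (rank + 1) % size;
    left  = (rank + size - 1) % size;

    for (step = 0; step < NSTEPS; step++) {
        /* Consume whatever updates from the left neighbor have already
         * arrived and keep only the newest one; never block here. */
        for (;;) {
            MPI_Iprobe(left, 0, MPI_COMM_WORLD, &flag, MPI_STATUS_IGNORE);
            if (!flag)
                break;
            MPI_Recv(&from_left, 1, MPI_DOUBLE, left, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            got++;
        }

        x = 0.5 * (x + from_left);           /* local relaxation step */

        /* Publish the new value only if the previous send is finished,
         * so the send buffer is never overwritten while in flight. */
        MPI_Test(&sreq, &flag, MPI_STATUS_IGNORE);
        if (flag) {
            outbox = x;
            MPI_Isend(&outbox, 1, MPI_DOUBLE, right, 0,
                      MPI_COMM_WORLD, &sreq);
            sent++;
        }
    }

    /* Cleanup: learn how many updates the left neighbor sent, then match
     * all of them so every outstanding send can complete. */
    MPI_Sendrecv(&sent, 1, MPI_INT, right, 1,
                 &expect, 1, MPI_INT, left, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    while (got < expect) {
        MPI_Recv(&from_left, 1, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        got++;
    }
    MPI_Wait(&sreq, MPI_STATUS_IGNORE);

    printf("rank %d: final local value %f\n", rank, x);
    MPI_Finalize();
    return 0;
}
```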


A Study on Efficient Executions of MPI Parallel Programs in Memory-Centric Computer Architecture

  • Lee, Je-Man;Lee, Seung-Chul;Shin, Dongha
    • Journal of the Korea Society of Computer and Information / v.25 no.1 / pp.1-11 / 2020
  • In this paper, we present a technique that executes MPI parallel programs, developed for processor-centric computer architecture, more efficiently on memory-centric computer architecture without program modification. The technique improves performance by replacing the slow, network-based data communication of the MPI library functions with fast data communication through the large shared memory that memory-centric computer architecture provides. The technique is implemented in two programs. The first is a modified MPI library called MC-MPI-LIB that runs MPI parallel programs more efficiently on memory-centric computer architecture while preserving the semantics of the MPI library functions. The second is a simulation program called MC-MPI-SIM that simulates the performance of memory-centric computer architecture on processor-centric computer architecture. We developed and tested the programs in a distributed system environment deployed with Docker-based virtualization. We analyzed the performance of several MPI parallel programs and showed that better performance is achieved on memory-centric computer architecture, with especially high performance for MPI parallel programs with high communication overhead.
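
One standard way to redirect an unmodified application's MPI calls, in the spirit of the library replacement described above, is the PMPI profiling interface. The hedged C sketch below intercepts MPI_Send and leaves the shared-memory fast path as a stub, since the abstract does not detail MC-MPI-LIB's internal mechanism; such a wrapper is simply linked ahead of the regular MPI library.

```c
/* Hedged sketch: intercepting MPI_Send via the standard PMPI profiling
 * interface. A wrapper library linked in front of the MPI library can
 * redirect a call (for example, to a shared-memory fast path) and fall
 * back to the normal implementation, all without touching the program.
 * The fast path below is a stub, not MC-MPI-LIB's actual mechanism. */
#include <mpi.h>

/* Placeholder for a fast transfer through large shared memory.
 * Returns 1 if it handled the message, 0 to fall back to regular MPI. */
static int shared_memory_send(const void *buf, int count,
                              MPI_Datatype type, int dest, int tag)
{
    (void)buf; (void)count; (void)type; (void)dest; (void)tag;
    return 0;   /* stub: always fall back in this sketch */
}

/* The application keeps calling MPI_Send; this wrapper runs instead. */
int MPI_Send(const void *buf, int count, MPI_Datatype type,
             int dest, int tag, MPI_Comm comm)
{
    if (comm == MPI_COMM_WORLD &&
        shared_memory_send(buf, count, type, dest, tag))
        return MPI_SUCCESS;              /* delivered via shared memory */

    return PMPI_Send(buf, count, type, dest, tag, comm);  /* normal path */
}
```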

Implementation DSM system over MPI (MPI상에서 분산 공유메로리(DSM)시스템의 구현)

  • 장우현;이성우;유기영
    • Proceedings of the Korean Information Science Society Conference / 1998.10a / pp.703-705 / 1998
  • In this paper, we implement a distributed shared memory (DSM) system using MPI. We also propose and implement an algorithm suited to network environments, based on a distributed lock algorithm that uses a directed acyclic graph. Since MPI is the standard for message passing on distributed-memory systems, the system can be used on most distributed-memory systems on which MPI is implemented and therefore has high portability.
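
To give a flavor of shared data layered on top of plain MPI message passing, the C sketch below has one rank own an array and answer remote read requests from the others. The request protocol and tags are invented for illustration; a real DSM such as the one described above adds caching, consistency and the distributed locking the paper proposes.

```c
/* Hedged sketch of shared data served over plain MPI messages: rank 0
 * owns an array and answers GET requests from the other ranks. The
 * protocol and tags are made up for illustration only. */
#include <mpi.h>
#include <stdio.h>

#define TAG_GET   1
#define TAG_REPLY 2
#define TAG_STOP  3
#define SHARED_N  16

int main(int argc, char *argv[])
{
    int rank, size, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                        /* owner: serve requests */
        int shared[SHARED_N], idx, stopped = 0;
        MPI_Status st;

        for (i = 0; i < SHARED_N; i++)
            shared[i] = 100 + i;

        while (stopped < size - 1) {
            MPI_Recv(&idx, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) {
                stopped++;
            } else {
                MPI_Send(&shared[idx], 1, MPI_INT, st.MPI_SOURCE,
                         TAG_REPLY, MPI_COMM_WORLD);
            }
        }
    } else {                                /* client: remote read */
        int idx = rank % SHARED_N, value;

        MPI_Send(&idx, 1, MPI_INT, 0, TAG_GET, MPI_COMM_WORLD);
        MPI_Recv(&value, 1, MPI_INT, 0, TAG_REPLY, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank %d read shared[%d] = %d\n", rank, idx, value);
        MPI_Send(&idx, 1, MPI_INT, 0, TAG_STOP, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```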


Efficient Executions of MPI Parallel Programs in Memory-Centric Computer Architecture (메모리 중심 컴퓨터 구조에서 MPI 병렬 프로그램의 효율적인 수행)

  • Lee, Je-Man;Lee, Seung-Chul;Shin, Dong-Ha
    • Proceedings of the Korean Society of Computer Information Conference / 2019.07a / pp.257-258 / 2019
  • 본 논문에서는 "프로세서 중심 컴퓨터 구조"에서 개발된 MPI 병렬 프로그램을 수정하지 않고 "메모리 중심 컴퓨터 구조"에서 더 효율적으로 수행시키는 기술을 제안한다. 본 연구에서 제안하는 기술은 메모리 중심 컴퓨터 구조가 가지는 "빠른 대용량 공유 메모리" 특징을 이용하여 MPI 표준 라이브러리가 수행하는 네트워크 통신을 통한 느린 데이터 전달을 공유 메모리를 통한 빠른 데이터 전달로 대체하여 효율성을 얻는다. 본 연구에서 제안한 기술은 도커 가상화 기술을 사용한 분산 시스템 환경에서 MC-MPI-LIB 라이브러리 및 MC-MPI-SIM 시뮬레이터로 구현되었으며 다수의 MPI 병렬 프로그램으로 시험 수행하여 효율성이 있음을 보였다.


An Implementation of Fault-Tolerant Message Passing Interface on Parallel Computers (병렬 컴퓨터에서의 결함 허용 메시지 전달 인터페이스 구현)

  • Song, Dae-Ki;Lee, Cheol-Hoon
    • Journal of KIISE: Computing Practices and Letters / v.6 no.3 / pp.319-328 / 2000
  • The Message Passing Interface (MPI) is a standard interface for parallel programming environments in which application programs run on the processors of a parallel computer. Processor nodes execute the processes that make up the program by passing messages to one another. During execution, however, if a fault occurs on a processor node or in a process, the result is an inconsistent state and, consequently, the whole program has to be stopped. To solve this problem, in this paper we propose a fault-tolerant message passing interface (FT-MPI) built by adding a fault-manager module to MPI. The proposed FT-MPI does not need any hardware support, and any application program based on MPI can run on FT-MPI without modification. The proposed fault-tolerance scheme uses the so-called hot-spare process duplication method, and simulations verified that application programs keep running despite faults with less than 5% overhead in execution time.
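
As a small, standard-MPI illustration of the kind of hook an application-transparent fault manager can build on, the sketch below replaces the default abort-on-error behavior with returned error codes and inspects them. It is generic MPI error handling, not the FT-MPI implementation or its hot-spare duplication scheme.

```c
/* Hedged sketch: one building block for fault handling in MPI. Switch off
 * the default "abort the job on any error" behavior and check return
 * codes, so a wrapper layer (like a fault manager) can react instead of
 * the whole program dying. Generic standard MPI, not the FT-MPI code. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, rc;
    double payload = 3.14;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Make MPI calls return error codes instead of aborting the job. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    /* Deliberately use an invalid destination rank (== size) to provoke
     * a locally detected error. */
    rc = MPI_Send(&payload, 1, MPI_DOUBLE, size, 0, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;

        MPI_Error_string(rc, msg, &len);
        /* A fault manager could switch to a hot-spare process or replay
         * messages here; this sketch only reports the error. */
        fprintf(stderr, "rank %d: send failed: %s\n", rank, msg);
    }

    MPI_Finalize();
    return 0;
}
```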
