• Title/Summary/Keyword: Neural Machine Translation

Effect of Korean Analysis Tool (UTagger) on Korean-Vietnamese Machine Translations (한-베 기계번역에서 한국어 분석기 (UTagger)의 영향)

  • Nguyen, Quang-Phuoc;Ock, Cheol-Young
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.184-189
    • /
    • 2017
  • With the advent of robust deep learning methods, neural machine translation (NMT) has recently become the dominant paradigm and achieves strong results between high-resource languages such as English, German, and Spanish. However, its results for under-resourced languages such as Korean and Vietnamese remain limited. This paper reports an attempt to build a bidirectional Korean-Vietnamese NMT system with the support of the Korean analysis tool UTagger, which performs morphological analysis, POS tagging, and word sense disambiguation (WSD). Experimental results demonstrate that UTagger significantly improves the translation quality of the Korean-Vietnamese NMT system in both directions: by approximately 15 BLEU points for Korean-to-Vietnamese and 3.12 BLEU points for the reverse direction.

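The preprocessing idea behind this result generalizes: segmenting Korean into morphemes and attaching POS and sense tags makes homographs distinct tokens before the NMT system ever sees them. A minimal Python sketch follows; the toy analysis table is a hypothetical stand-in for UTagger, whose actual API is not shown in the abstract, and the sense number and tag set are illustrative only.

```python
# Toy analysis table; a real system would call UTagger here. The sense
# suffix "__05" and the Sejong-style tags are illustrative, not UTagger output.
TOY_ANALYSES = {
    "배를 먹었다": [("배__05", "NNG"), ("를", "JKO"), ("먹", "VV"), ("었", "EP"), ("다", "EF")],
}

def preprocess_korean(sentence):
    morphs = TOY_ANALYSES[sentence]  # hypothetical analyzer call
    # Join each morpheme with its tag so homographs become distinct tokens.
    return " ".join(f"{m}/{t}" for m, t in morphs)

# The tagged output replaces the raw sentence on the Korean side of the
# parallel corpus; the Vietnamese side is left untouched.
print(preprocess_korean("배를 먹었다"))
# -> 배__05/NNG 를/JKO 먹/VV 었/EP 다/EF
```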

Simultaneous neural machine translation with a reinforced attention mechanism

  • Lee, YoHan;Shin, JongHun;Kim, YoungKil
    • ETRI Journal
    • /
    • v.43 no.5
    • /
    • pp.775-786
    • /
    • 2021
  • To translate in real time, a simultaneous translation system must decide when to stop reading source tokens and generate target tokens corresponding to the partial source sentence read up to that point. However, conventional attention-based neural machine translation (NMT) models cannot produce translations with adequate latency in online scenarios because they wait until a source sentence is complete before computing the alignment between source and target tokens. To address this issue, we propose a reinforcement learning (RL)-based attention mechanism, the reinforced attention mechanism, which allows a neural translation model to jointly train the stopping criterion and a partial translation model. The proposed attention mechanism comprises two modules, one to ensure translation quality and the other to address latency. Unlike previous RL-based simultaneous translation systems, which learn the stopping criterion from a fixed NMT model, the two modules can be trained jointly with a novel reward function. In our experiments, the proposed model achieves better translation quality and comparable latency relative to previous models.
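
The abstract does not specify the reward function or network details, but the READ/WRITE control loop that any such simultaneous system must implement can be sketched generically. In the Python sketch below, `policy`, `encode`, and `decode_step` are placeholders for learned components, not the paper's model:

```python
def simultaneous_translate(source_stream, policy, encode, decode_step, eos="</s>"):
    read, written = [], []
    for token in source_stream:
        read.append(token)  # READ one more source token
        # WRITE while the policy is confident enough to commit target
        # tokens given only the partial source read so far.
        while policy(encode(read), written) == "WRITE":
            written.append(decode_step(encode(read), written))
    # Source exhausted: flush the rest of the translation.
    while not written or written[-1] != eos:
        written.append(decode_step(encode(read), written))
    return written

# Toy usage: "translate" by copying tokens with a one-token lag (wait-1).
src = ["ich", "bin", "hier", "</s>"]
out = simultaneous_translate(
    src,
    policy=lambda state, written: "WRITE" if len(written) < len(state) - 1 else "READ",
    encode=lambda read: list(read),
    decode_step=lambda state, written: state[len(written)],
)
print(out)  # ['ich', 'bin', 'hier', '</s>']
```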

Study on Decoding Strategies in Neural Machine Translation (인공신경망 기계번역에서 디코딩 전략에 대한 연구)

  • Seo, Jaehyung;Park, Chanjun;Eo, Sugyeong;Moon, Hyeonseok;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.11
    • /
    • pp.69-80
    • /
    • 2021
  • Neural machine translation using deep neural networks has become mainstream research, and abundant investment and study of model structures and parallel language pairs has been undertaken in pursuit of the best performance. However, most recent neural machine translation studies defer the decoding strategy to future work and offer neither a sufficient variety of experiments nor specific analysis of how to generate language that maximizes quality during decoding. In machine translation, decoding strategies optimize the search path followed while generating the translated sentence, so performance can be improved without modifying the model or expanding the data. This paper compares and analyzes the significant effects of decoding strategies, from classical greedy decoding to the recent Dynamic Beam Allocation (DBA), in neural machine translation using a sequence-to-sequence model.
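
The two ends of the spectrum the paper surveys are easy to make concrete. The Python sketch below contrasts greedy decoding with standard beam search over a generic `log_probs(prefix)` function, a stand-in for one decoder step of any trained NMT model returning a {token: log-probability} dict; DBA and other refinements are not shown:

```python
def greedy_decode(log_probs, max_len=50, eos="</s>"):
    prefix = []
    for _ in range(max_len):
        dist = log_probs(prefix)
        token = max(dist, key=dist.get)  # commit the single best next token
        prefix.append(token)
        if token == eos:
            break
    return prefix

def beam_decode(log_probs, beam_size=4, max_len=50, eos="</s>"):
    beams = [([], 0.0)]  # (prefix, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix and prefix[-1] == eos:
                candidates.append((prefix, score))  # finished hypothesis
                continue
            for token, lp in log_probs(prefix).items():
                candidates.append((prefix + [token], score + lp))
        # Keep only the beam_size best partial translations.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        if all(p and p[-1] == eos for p, _ in beams):
            break
    return beams[0][0]
```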

Symbolizing Numbers to Improve Neural Machine Translation (숫자 기호화를 통한 신경기계번역 성능 향상)

  • Kang, Cheongwoong;Ro, Youngheon;Kim, Jisu;Choi, Heeyoul
    • Journal of Digital Contents Society
    • /
    • v.19 no.6
    • /
    • pp.1161-1167
    • /
    • 2018
  • The development of machine learning has enabled machines to perform delicate tasks that previously only humans could do, and many companies have accordingly introduced machine learning based translators. Existing translators perform well overall but have problems translating numbers: they often mistranslate numbers when the input sentence contains a large one, and the structure of the output sentence can change completely even if only a single number in the input changes. In this paper, we first optimize a neural machine translation model architecture that uses a bidirectional RNN, LSTM, and the attention mechanism, through data cleansing and changes to the dictionary size. We then implement a number-processing algorithm specialized for number translation and apply it to the neural machine translation model to solve the problems above. The paper presents the data cleansing method, an optimal dictionary size, and the number-processing algorithm, along with experimental results for translation performance measured by the BLEU score.
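
The placeholder technique this approach builds on can be sketched independently of the authors' exact algorithm, which the abstract does not give: numbers are replaced with indexed symbols before translation, so the model sees a tiny, stable vocabulary, and the originals are restored in the output afterward.

```python
import re

NUM = re.compile(r"\d[\d,.]*")  # crude number pattern for illustration

def symbolize(sentence):
    numbers = NUM.findall(sentence)
    for i, n in enumerate(numbers):
        sentence = sentence.replace(n, f"<num{i}>", 1)
    return sentence, numbers

def restore(translation, numbers):
    for i, n in enumerate(numbers):
        translation = translation.replace(f"<num{i}>", n)
    return translation

masked, nums = symbolize("The price rose from 1,250 to 3,400 won.")
# masked == "The price rose from <num0> to <num1> won."
# ... translate `masked` with the NMT model, then:
print(restore("가격이 <num0>원에서 <num1>원으로 올랐다.", nums))
# -> 가격이 1,250원에서 3,400원으로 올랐다.
```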

Character-Level Neural Machine Translation (문자 단위의 Neural Machine Translation)

  • Lee, Changki;Kim, Junseok;Lee, Hyoung-Gyu;Lee, Jaesong
    • Annual Conference on Human and Language Technology
    • /
    • 2015.10a
    • /
    • pp.115-118
    • /
    • 2015
  • The Neural Machine Translation (NMT) model is an end-to-end machine translation model that uses only a single neural network. Compared with conventional Statistical Machine Translation (SMT) models, it offers higher performance, requires no feature engineering, and has a simpler decoder because a single network plays the roles of both the translation model and the language model. However, NMT models are limited in the size of their target vocabulary, because training and decoding slow down in proportion to it. To resolve this target-vocabulary limitation, this paper proposes reading (encoding) the input language at the word level while generating (decoding) the output language at the character level. Generating the output at the character level lets the target vocabulary contain every character, so the out-of-vocabulary (OOV) problem on the target side disappears, and the smaller target vocabulary makes training and decoding faster. Experimental results show that the proposed method outperforms conventional word-level NMT models on English-Japanese and Korean-Japanese machine translation.

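The vocabulary consequence of the proposal is easy to illustrate: the source side keeps a truncated word vocabulary, while the target side needs only the closed set of characters, so no target token can be out-of-vocabulary. A small Python sketch with toy sentences, not the paper's data:

```python
from collections import Counter

def word_vocab(sentences, size=30000):
    # Source side: keep only the most frequent words, plus <unk> for the rest.
    counts = Counter(w for s in sentences for w in s.split())
    return {w for w, _ in counts.most_common(size)} | {"<unk>"}

def char_vocab(sentences):
    # Target side: every character is a token, so coverage is complete
    # and no out-of-vocabulary symbol is ever needed.
    return {c for s in sentences for c in s}

src = ["나는 학교에 간다", "나는 밥을 먹는다"]
tgt = ["私は学校に行く", "私はご飯を食べる"]
print(len(word_vocab(src)), "source word types (truncated in practice)")
print(len(char_vocab(tgt)), "target character types (closed set)")
```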

An Evaluation of Translation Quality by Homograph Disambiguation in Korean-X Neural Machine Translation Systems (한-X 신경기계번역시스템에서 동형이의어 분별에 따른 번역질 평가)

  • Nguyen, Quang-Phuoc;Shin, Joon-Choul;Ock, Cheol-Young
    • Annual Conference on Human and Language Technology
    • /
    • 2018.10a
    • /
    • pp.504-509
    • /
    • 2018
  • Neural machine translation (NMT) has recently achieved state-of-the-art performance. However, it has been reported to fail at word sense disambiguation (WSD) for several popular language pairs. In this paper, we explore the extent to which NMT systems are able to disambiguate Korean homographs. Homographs, words with different meanings but the same written form, cause word choice problems for NMT systems. Consistent with the findings for popular language pairs, we discover that NMT systems fail to translate Korean homographs correctly. We apply a Korean word sense disambiguation tool, UTagger, to improve NMT translation quality. We conducted translation experiments on the Korean-English and Korean-Vietnamese language pairs. The experimental results show that UTagger can significantly improve the translation quality of NMT in terms of the BLEU, TER, and DLRATIO evaluation metrics.

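Scoring a system with the metrics this paper reports is straightforward with the sacrebleu library, which implements BLEU and TER (DLRATIO is not included in sacrebleu and is omitted here). A minimal sketch with toy hypotheses and references standing in for real system output:

```python
from sacrebleu.metrics import BLEU, TER

hyps = ["the ship left the harbor", "he ate a pear"]   # system output
refs = [["the ship left the harbor", "he ate a pear"]]  # one reference stream

bleu = BLEU().corpus_score(hyps, refs)
ter = TER().corpus_score(hyps, refs)
print(bleu.score, ter.score)
```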

A Study on the Performance Improvement of Machine Translation Using Public Korean-English Parallel Corpus (공공 한영 병렬 말뭉치를 이용한 기계번역 성능 향상 연구)

  • Park, Chanjun;Lim, Heuiseok
    • Journal of Digital Convergence
    • /
    • v.18 no.6
    • /
    • pp.271-277
    • /
    • 2020
  • Machine translation refers to software that translates a source language into a target language; research has progressed from rule-based and statistical machine translation to active work on neural machine translation. One of the important factors in neural machine translation is obtaining a high-quality parallel corpus, and such corpora have not been easy to find for Korean language pairs. Recently, the AI HUB of the National Information Society Agency (NIA) unveiled a high-quality Korean-English parallel corpus of 1.6 million sentences. This paper attempts to verify the quality of each dataset through a performance comparison between the data published by AI Hub and OpenSubtitles, the most popular Korean-English parallel corpus. Objectivity was secured by using the test set published by IWSLT, the official test set for Korean-English machine translation. The experimental results show better performance than existing papers evaluated on the same test set, which demonstrates the importance of high-quality data.
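
The abstract does not describe the preprocessing applied to each corpus, but the kind of light quality filtering typically run on parallel data before such a comparison, exact-duplicate removal plus a length-ratio sanity check, can be sketched generically in Python; this is an illustration, not the authors' procedure:

```python
def filter_parallel(pairs, max_ratio=3.0):
    seen, kept = set(), []
    for src, tgt in pairs:
        if not src or not tgt or (src, tgt) in seen:
            continue  # drop empty lines and exact duplicates
        seen.add((src, tgt))
        s_len, t_len = len(src.split()), len(tgt.split())
        if max(s_len, t_len) > max_ratio * min(s_len, t_len):
            continue  # implausible length ratio: likely misaligned
        kept.append((src, tgt))
    return kept

pairs = [("안녕하세요", "Hello"),
         ("안녕하세요", "Hello"),  # duplicate: removed
         ("네", "Yes, that is exactly what I was going to say to you")]
print(filter_parallel(pairs))  # only the first pair survives
```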

Understanding recurrent neural network for texts using English-Korean corpora

  • Lee, Hagyeong;Song, Jongwoo
    • Communications for Statistical Applications and Methods
    • /
    • v.27 no.3
    • /
    • pp.313-326
    • /
    • 2020
  • Deep learning is the most important key to the development of artificial intelligence (AI). There are several distinguishable architectures of neural networks, such as the MLP, CNN, and RNN. Among them, we try to understand the recurrent neural network (RNN), which differs from the other networks in handling sequential data, including time series and texts. As one of the main recent tasks in natural language processing (NLP), we consider neural machine translation (NMT) using RNNs. We also summarize the fundamental structures of recurrent networks and some approaches to representing natural words as reasonable numeric vectors. We organize these topics to explain the estimation procedure from representing input source sequences to predicting target translated sequences. In addition, we apply multiple translation models with Gated Recurrent Units (GRUs) in Keras to English-Korean sentences comprising about 26,000 pairwise sequences in total from two different corpora, colloquial speech and news. We verified some crucial factors that influence the quality of training: the loss decreases with more recurrent dimensions and with a bidirectional RNN in the encoder when dealing with short sequences. We also computed BLEU scores, the main measure of translation performance, and compared them with the scores from Google Translate on the same test sentences. We sum up some difficulties encountered when training a proper translation model and when dealing with the Korean language. Using Keras in Python for the overall task, from processing raw texts to evaluating the translation model, also allowed us to draw on useful built-in functions and vocabulary libraries.
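
A minimal Keras sketch of the kind of GRU encoder-decoder described here, with a bidirectional encoder and teacher forcing, may be useful; the hyperparameters are illustrative, not the paper's:

```python
import tensorflow as tf
from tensorflow.keras import layers

src_vocab, tgt_vocab, dim = 8000, 8000, 256  # illustrative sizes

# Encoder: bidirectional GRU; concatenated final states initialize the decoder.
enc_in = layers.Input(shape=(None,), dtype="int32")
enc_emb = layers.Embedding(src_vocab, dim)(enc_in)
enc_out, fwd_h, bwd_h = layers.Bidirectional(
    layers.GRU(dim, return_state=True))(enc_emb)
enc_state = layers.Concatenate()([fwd_h, bwd_h])  # shape (batch, 2 * dim)

# Decoder: GRU fed the previous gold token at each step (teacher forcing).
dec_in = layers.Input(shape=(None,), dtype="int32")
dec_emb = layers.Embedding(tgt_vocab, dim)(dec_in)
dec_out = layers.GRU(2 * dim, return_sequences=True)(dec_emb,
                                                     initial_state=enc_state)
probs = layers.Dense(tgt_vocab, activation="softmax")(dec_out)

model = tf.keras.Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```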

English-Korean Neural Machine Translation using MASS (MASS를 이용한 영어-한국어 신경망 기계 번역)

  • Jung, Young-Jun;Park, Cheon-Eum;Lee, Chang-Ki;Kim, Jun-Seok
    • Annual Conference on Human and Language Technology
    • /
    • 2019.10a
    • /
    • pp.236-238
    • /
    • 2019
  • Neural Machine Translation research has mainly followed an end-to-end approach based on supervised learning. However, because supervised methods perform poorly when data are scarce, transfer learning, which pre-trains on large amounts of monolingual data, as in BERT, and then fine-tunes, has become a major line of research in natural language processing. The recently published MASS model achieved strong performance in machine translation and document summarization through a pre-training method designed for language generation tasks. In this paper, we apply the MASS model to neural machine translation to improve English-Korean translation performance. Experimental results show that the English-Korean machine translation model using MASS outperforms existing models.

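The MASS objective mentioned above can be illustrated on the data side: a contiguous span of the input sentence is masked in the encoder input, and the decoder is trained to generate exactly that span. A simplified Python sketch of the example construction (the Transformer itself, and MASS's exact decoder-side masking, are not shown):

```python
import random

def mass_example(tokens, mask_frac=0.5, mask="<mask>"):
    # Choose a contiguous span covering roughly mask_frac of the sentence.
    span = max(1, int(len(tokens) * mask_frac))
    start = random.randrange(len(tokens) - span + 1)
    # Encoder sees the sentence with the span replaced by mask tokens.
    enc_input = tokens[:start] + [mask] * span + tokens[start + span:]
    # Decoder must generate exactly the masked span.
    dec_target = tokens[start:start + span]
    # Decoder input is the target shifted right, as in standard seq2seq
    # (a simplification of MASS's decoder-side input).
    dec_input = [mask] + dec_target[:-1]
    return enc_input, dec_input, dec_target

random.seed(0)
print(mass_example("the cat sat on the mat".split()))
```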